From zashah at kth.se Thu Feb 1 08:05:03 2018 From: zashah at kth.se (Zeeshan Ali Shah) Date: Thu, 1 Feb 2018 11:05:03 +0300 Subject: [Openstack-operators] Dedicated Network node ? Message-ID: In Kilo, the installation guide contained a section on installing a dedicated network node. In the current Pike release, it instead says to configure the controller as the network node. https://docs.openstack.org/neutron/pike/install/controller-install-option2-ubuntu.html Is this for a particular reason? Is there any guide for Pike on setting up a dedicated network node? -- Regards Zeeshan Ali Shah System Administrator - PDC HPC PhD researcher (IT security) Kungliga Tekniska Hogskolan +46 8 790 9115 http://www.pdc.kth.se/members/zashah -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Thu Feb 1 18:42:27 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 01 Feb 2018 18:42:27 +0000 Subject: [Openstack-operators] Dedicated Network node ? In-Reply-To: References: Message-ID: I think that indeed makes sense, because nowadays most installations tend to either be within a dockerized solution or run on a relatively small number (3/4) of large hardware nodes. One situation that would require dedicated hardware would be a very large installation where you need to lower the network pressure on your controllers or Docker workers in order to avoid noisy behaviors. Le jeu. 1 févr. 2018 à 09:05, Zeeshan Ali Shah a écrit : > > > In Kilo, the installation guide contained a section on installing a > dedicated network node. > > In the current Pike release, it instead says to configure the controller as the network > node. > > https://docs.openstack.org/neutron/pike/install/controller-install-option2-ubuntu.html > > Is this for a particular reason? > > Is there any guide for Pike on setting up a dedicated network node? > -- > > Regards > > Zeeshan Ali Shah > System Administrator - PDC HPC > PhD researcher (IT security) > Kungliga Tekniska Hogskolan > +46 8 790 9115 > http://www.pdc.kth.se/members/zashah > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berendt at betacloud-solutions.de Thu Feb 1 20:34:48 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Thu, 1 Feb 2018 21:34:48 +0100 Subject: [Openstack-operators] Dedicated Network node ? In-Reply-To: References: Message-ID: <1B871B14-7338-411F-973C-48A7C00C8B0F@betacloud-solutions.de> > On 1. Feb 2018, at 19:42, Flint WALRUS wrote: > > I think that indeed makes sense, because nowadays most installations tend to either be within a dockerized solution or run on a relatively small number (3/4) of large hardware nodes. > > One situation that would require dedicated hardware would be a very large installation where you need to lower the network pressure on your controllers or Docker workers in order to avoid noisy behaviors. Depending on the firewall concept, dedicated networks make sense in smaller environments using Docker. Christian. 
-- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From gael.therond at gmail.com Thu Feb 1 20:42:48 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 01 Feb 2018 20:42:48 +0000 Subject: [Openstack-operators] Dedicated Network node ? In-Reply-To: <1B871B14-7338-411F-973C-48A7C00C8B0F@betacloud-solutions.de> References: <1B871B14-7338-411F-973C-48A7C00C8B0F@betacloud-solutions.de> Message-ID: I don't get it - are you talking about Neutron FWaaS? If so, yes, it would avoid the so-called high-traffic situation that I talked about on bare metal. Docker is less subject to such issues, as the workload would then be distributed among multiple nodes thanks to LBaaS, LB solutions such as HAProxy or vendor solutions, or even ingress techniques. I would, however, agree for small Docker clusters, but I highly doubt that a professional company would choose to host such services without having forecast enough hardware. Le jeu. 1 févr. 2018 à 21:34, Christian Berendt < berendt at betacloud-solutions.de> a écrit : > > > > On 1. Feb 2018, at 19:42, Flint WALRUS wrote: > > > > I think that indeed makes sense, because nowadays most the > installations tend to either be within a dockerized solution or run on a > relatively small number (3/4) of large hardware nodes. > > > > One situation that would require dedicated hardware would be a very large > installation where you need to lower the network pressure on your > controllers or Docker workers in order to avoid noisy behaviors. > > Depending on the firewall concept, dedicated networks make sense in > smaller environments using Docker. > > Christian. > > -- > Christian Berendt > Chief Executive Officer (CEO) > > Mail: berendt at betacloud-solutions.de > Web: https://www.betacloud-solutions.de > > Betacloud Solutions GmbH > Teckstrasse 62 / 70190 Stuttgart / Deutschland > > Geschäftsführer: Christian Berendt > Unternehmenssitz: Stuttgart > Amtsgericht: Stuttgart, HRB 756139 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From berendt at betacloud-solutions.de Thu Feb 1 21:48:32 2018 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Thu, 1 Feb 2018 22:48:32 +0100 Subject: [Openstack-operators] Dedicated Network node ? In-Reply-To: References: <1B871B14-7338-411F-973C-48A7C00C8B0F@betacloud-solutions.de> Message-ID: > On 1. Feb 2018, at 21:42, Flint WALRUS wrote: > > I don't get it - are you talking about Neutron FWaaS? If so, yes, it > would avoid the so-called high-traffic situation that I talked about on > bare metal. I expressed myself in a misleading way. Depending on the firewall / security concept and / or the network implementation, a separation of network nodes and controller nodes is required, regardless of the size of the environment and the load / high traffic in the network. Christian. 
-- Christian Berendt Chief Executive Officer (CEO) Mail: berendt at betacloud-solutions.de Web: https://www.betacloud-solutions.de Betacloud Solutions GmbH Teckstrasse 62 / 70190 Stuttgart / Deutschland Geschäftsführer: Christian Berendt Unternehmenssitz: Stuttgart Amtsgericht: Stuttgart, HRB 756139 From gael.therond at gmail.com Thu Feb 1 21:50:22 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Thu, 01 Feb 2018 21:50:22 +0000 Subject: [Openstack-operators] Dedicated Network node ? In-Reply-To: References: <1B871B14-7338-411F-973C-48A7C00C8B0F@betacloud-solutions.de> Message-ID: Interesting, could you provide an example ? Le jeu. 1 févr. 2018 à 22:48, Christian Berendt < berendt at betacloud-solutions.de> a écrit : > > > > On 1. Feb 2018, at 21:42, Flint WALRUS wrote: > > > > Don’t get it, are you talking about the Neutron FWaaS ? If so, yes it > would avoid the so called High traffic situation that I talked about on > baremetal. > > I expressed myself in a misleading way. > > Depending on the firewall / security concept and / or the network > implementation, a separation of network nodes and controller nodes is > required. Regardless of the size of the environment and the load / high > traffic in the network. > > Christian. > > -- > Christian Berendt > Chief Executive Officer (CEO) > > Mail: berendt at betacloud-solutions.de > Web: https://www.betacloud-solutions.de > > Betacloud Solutions GmbH > Teckstrasse 62 / 70190 Stuttgart / Deutschland > > Geschäftsführer: Christian Berendt > Unternehmenssitz: Stuttgart > Amtsgericht: Stuttgart, HRB 756139 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at ubuntu.com Fri Feb 2 13:08:38 2018 From: james.page at ubuntu.com (James Page) Date: Fri, 02 Feb 2018 13:08:38 +0000 Subject: [Openstack-operators] [charms] Migrating HA control plane by scaling up and down In-Reply-To: References: Message-ID: Hi Sandor On Fri, 26 Jan 2018 at 13:32 Sandor Zeestraten wrote: > Hey OpenStack Charmers, > > We have a Newton deployment on MAAS with 3 controller machines running all > the usual OpenStack controller services in 3x HA with the hacluster charm > in LXD containers. Now we'd like migrate some of those OpenStack services > to 3 larger controller machines. Downtime of the services during the > migration is not an issue. > > My current plan is test something like this: > * Add the new controller machines to the model > * Increase the cluster_count from 3 to 6 on the hacluster charm of the > services in question > * Add units to the service to LXD containers on the new controller machine > * Wait for things to deploy and cluster > * Decrease the cluster_count from 6 to 3 > * Remove units on the old controller > > Questions: > 1. Is there a preferred way to migrate OpenStack services deployed by > charms? > I think your approach above is how I've always envisioned deployments would migrate services between machines. > 2. Does the plan above look somewhat sane? > Yes - esp the bit about increasing the cluster_count on the hacluster applications *prior* to adding the new units to the cluster. Failure todo this results in some confused behaviour from corosync. > 3. If yes to the above, does the order of changing the cluster_count and > adding/removing units matter? I've seen this bug for example: > https://bugs.launchpad.net/charm-hacluster/+bug/1424048 > Set cluster_count before adding the units (I remember that bug). > 4. 
Anything to keep in mind for scaling up and down the rabbitmq and > percona clusters? > For PXC bear in mind that if you have a large amount of data in your database, its going to take a bit of time and generate some load in terms of disk IO and network IO whilst the new unit receive a full state transfer. I'd also recommend testing your migration process first - this does mean having some spare kit around to standup a second cloud and then testing the migration. I believe the above should work fine, but I'd not want you to trip over the edge case/bug we've not seen during testing in your production environment. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Fri Feb 2 14:58:49 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 2 Feb 2018 09:58:49 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <57306d1a-529b-3907-7c5a-a9b46057b236@gmail.com> <37aa1d2c-8e80-8cf3-2a92-444916b3b80f@gmail.com> Message-ID: <52b1a23c-777c-6c3a-e124-1d06c33faaa0@gmail.com> On 01/29/2018 06:48 PM, Mathieu Gagné wrote: > On Mon, Jan 29, 2018 at 8:47 AM, Jay Pipes wrote: >> >> What I believe we can do is change the behaviour so that if a 0.0 value is >> found in the nova.conf file on the nova-compute worker, then instead of >> defaulting to 16.0, the resource tracker would first look to see if the >> compute node was associated with a host aggregate that had the >> "cpu_allocation_ratio" a metadata item. If one was found, then the host >> aggregate's cpu_allocation_ratio would be used. If not, then the 16.0 >> default would be used. >> >> What do you think? >> >> Best, >> -jay >> > > I don't mind this kind of fix. > I just want to make sure developers will be able to support it in the > future and not yank it out of frustration. =) > If a proper fix can be found later, I don't mind. > But I want to make sure I don't end up in a "broken" version where I > would need to skip 1-2 releases to get a working fix. FYI, I've added both the "nova-compute doesn't overwrite the first host aggregate's allocation ratio with the 'default' values" thing as well as the CLI tool to modify batches of resource provider inventory allocation ratios at a time to the PTG agenda: https://etherpad.openstack.org/p/nova-ptg-rocky Best, -jay From jaypipes at gmail.com Fri Feb 2 15:27:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Fri, 2 Feb 2018 10:27:34 -0500 Subject: [Openstack-operators] [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <57306d1a-529b-3907-7c5a-a9b46057b236@gmail.com> <37aa1d2c-8e80-8cf3-2a92-444916b3b80f@gmail.com> Message-ID: On 01/29/2018 06:30 PM, Mathieu Gagné wrote: > So lets explore what would looks like a placement centric solution. > (let me know if I get anything wrong) > > Here are our main concerns/challenges so far, which I will compare to > our current flow: > > 1. Compute nodes should not be enabled by default > > When adding new compute node, we add them to a special aggregate which > makes scheduling instances on it impossible. (with an impossible > condition) > We are aware that there is a timing issue where someone *could* > schedule an instance on it if we aren't fast enough. So far, it has > never been a major issue for us. 
> If we want to move away from aggregates, the "enable_new_services" > config (since Ocata) would be a great solution to our need. > I don't think Placement needs to be involved in this case, unless you > can show me a better alternative solution. Luckily, this one's easy. No changes would need to be made. The enable_new_services option should behave the same. Placement does not do service status checking. When the scheduler receives a request to determine a target host for an instance, it first asks the Placement service for a set of compute nodes that meet the resource requirements of that instance [1]. The Placement service returns a list of candidate compute nodes that could fit the instance. The scheduler then runs that list of compute nodes through something called the servicegroup API when it runs the ComputeFilter [2]. This filters out any compute nodes where the nova-compute service is disabled on the host. So, no changes needed here. [1] https://github.com/openstack/nova/blob/master/nova/scheduler/manager.py#L122 [2] https://github.com/openstack/nova/blob/master/nova/scheduler/filters/compute_filter.py#L45 > 2. Ability to move compute nodes around through API (without > configuration management system) > > We use aggregate to isolate our flavor series which are mainly based > on cpu allocation ratio. > This means distinct capacity pool of compute nodes for each flavor series. > We can easily move around our compute nodes if one aggregate (or > capacity pool) need more compute nodes through API. > There is no need to deploy a new version of our configuration management system. > > Based on your previous comment, Nova developers could implement a way > so configuration file is no longer used when pushing ratios to > Placement. > An op should be able to provide the ratio himself through the Placement API. Yes. The following will need to happen in order for you to continue operating your cloud without any changes: nova-compute needs to stop overwriting the inventory records' allocation ratio to the value it sees in the nova-compute's nova.conf file for ratios that are 0.0. Instead, nova-compute needs to look for the first host aggregate that has a corresponding allocation_ratio metadata item and use that if found for the inventory record's. I'm working on a spec for the above and have added this as an agenda item for the PTHG. In addition to the above, we are also looking at making the Placement service "oslo.policy-ified" :) This means that the Placement service needs to have RBAC rules enforced using the same middleware and policy library as Nova. This is also on the PTG agenda. > Some questions: > * What's the default ratio if none is provided? (none through config > file or API) Depends on the resource class. CPU is 16.0, RAM is 1.5 and disk is 1.0. > * How can I restrict flavors to matching hosts? Will Placement respect > allocation ratios provided by a flavor and find corresponding compute > nodes? I couldn't find details on that one in previous emails. Allocation ratios are not provided by the flavor. What I think you meant is will Placement respect allocation ratios made on the host aggregate. And the answer to that is: no, but it won't need to. Instead, it is *nova-compute* that needs to "respect the host aggregate's allocation ratios". That's what needs to change and what I describe above. > Some challenges: > * Find a way to easily visualise/monitor available capacity/hosts per > capacity pool (previously aggregate) ack. This is a good thing to discuss in Dublin. > 3. 
Ability to delegate above operations to ops If we do the "make sure nova-compute respects host aggregate allocation ratios", things will continue to be operational as you have had it since Folsom. > With aggregates, you can easily precisely delegate host memberships to > ops or other people using the corresponding policies: > * os_compute_api:os-aggregates:add_host > * os_compute_api:os-aggregates:remove_host Yep, we're not changing any of that. > And those people can be autonomous with the API without having to > redeploy anything. Understood. > Being able to manipulate/administer the hosts through an API is golden > and therefore totally disagree with "Config management is the solution > to your problem". Understood, and we're trying our best to work with you to keep the functionality you've gotten accustomed to. > With Placement API, there is no fine grain control over who/what can > be done through the API. (it's even hardcoded to the "admin" role) Yep. That's the oslo-policy integration task mentioned above. On the PTG agenda. > So there is some work to be done here: > * Remove hardcoded "admin" role from code. Already being work on by > Matt Riedemann [1] > * Fine grain control/policies for Placement API. > > The last point needs a bit more work. If we can implement control on > resource providers, allocations, usages, traits, etc, I will be happy. > In the end, that's one of the major "regression" I found with > placement. I don't want a human to be able to do more than it should > be able to do and break everything. > > So I'm not ready to cross that river yet, I'm still running Mitaka. > But I need to make sure I'm not stuck when Ocata happens for us. Well, more likely it's going to be Rocky that you will need to fast-forward upgrade to, but we'll cross that river when we need to. Best, -jay > [1] https://review.openstack.org/#/c/524425/ > > -- > Mathieu > From salvatore.cmp at gmail.com Mon Feb 5 10:36:02 2018 From: salvatore.cmp at gmail.com (Salvatore Campanella) Date: Mon, 5 Feb 2018 11:36:02 +0100 Subject: [Openstack-operators] OpenContrail Integration with an existing OpenStack Message-ID: Goodmorning! I have a little question for you. I'm trying to integrate OpenContrail as SDN Controller with an existing OpenStack installation. Do I need to ensure OpenStack network state is clean before integrate OpenContrail with my existing Openstack? Any suggestion? Thanks Salvatore -------------- next part -------------- An HTML attachment was scrubbed... URL: From rovanleeuwen at ebay.com Mon Feb 5 12:02:20 2018 From: rovanleeuwen at ebay.com (Van Leeuwen, Robert) Date: Mon, 5 Feb 2018 12:02:20 +0000 Subject: [Openstack-operators] OpenContrail Integration with an existing OpenStack In-Reply-To: References: Message-ID: > Do I need to ensure OpenStack network state is clean before integrate OpenContrail with my existing Openstack? > Any suggestion? What do you mean by “clean”? Contrail will ignore all information in the neutron db. Any incoming neutron api call will be translated to a contrail-api call. Contrail itself does not use the Mysql database but has its own (Cassandra) databases So any router/subnet/network/interface etc object that currently exists in the neutron/mysql database is ignored. That basically means any object that is using “stock-neutron” will break since it will need to be re-created on contrail. Cheers, Robert van Leeuwen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From salvatore.cmp at gmail.com Mon Feb 5 14:13:36 2018 From: salvatore.cmp at gmail.com (Salvatore Campanella) Date: Mon, 5 Feb 2018 15:13:36 +0100 Subject: [Openstack-operators] OpenContrail Integration with an existing OpenStack In-Reply-To: References: Message-ID: Hi Robert, > What do you mean by “clean”? I already have some object that currently exists in the neutron/mysql db and I would like to integrate OpenContrail with my existing OpenStack installation, but I didn't know what the expected behavior would be. To be clearer: I didn't know if I would have lost the objects instanced previously or if OpenContrail requires that I delete all objects in the neutron db before the integration procedure. So, thanks for your explanation! Cheers, Salvatore 2018-02-05 13:02 GMT+01:00 Van Leeuwen, Robert : > > Do I need to ensure OpenStack network state is clean before integrate > OpenContrail with my existing Openstack? > > > Any suggestion? > > > > What do you mean by “clean”? > > > > Contrail will ignore all information in the neutron db. > Any incoming neutron api call will be translated to a contrail-api call. > > Contrail itself does not use the Mysql database but has its own > (Cassandra) databases > > So any router/subnet/network/interface etc object that currently exists > in the neutron/mysql database is ignored. > > That basically means any object that is using “stock-neutron” will break > since it will need to be re-created on contrail. > > > > Cheers, > > Robert van Leeuwen > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Feb 6 00:07:17 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 05 Feb 2018 18:07:17 -0600 Subject: [Openstack-operators] Wanted: Operators willing to assist with COA Review Message-ID: <5A78F1B5.4060609@openstack.org> The COA is scheduled to be upgraded from Newton to Pike, and we're looking for people to help us identify existing questions that need to change to reflect the changes between the releases. If you or someone on your team recently took the COA and can offer specific feedback or assistance, we're looking for your help! Please contact jimmy at openstack.org or anne at openstack.org for more info. This is a timely effort, so your prompt response is appreciated! Thanks! Jimmy McArthur From mriedemos at gmail.com Tue Feb 6 03:00:42 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Mon, 5 Feb 2018 21:00:42 -0600 Subject: [Openstack-operators] [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> Message-ID: <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> Given the size and detail of this thread, I've tried to summarize the problems and possible solutions/workarounds in this etherpad: https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu For those working on this, please check that what I have written down is correct and then we can try to make some kind of plan for resolving this. On 1/16/2018 3:24 PM, melanie witt wrote: > Hello Stackers, > > This is a heads up to any of you using the AggregateCoreFilter, > AggregateRamFilter, and/or AggregateDiskFilter in the filter scheduler. > These filters have effectively allowed operators to set overcommit > ratios per aggregate rather than per compute node in <= Newton. 
> > Beginning in Ocata, there is a behavior change where aggregate-based > overcommit ratios will no longer be honored during scheduling. Instead, > overcommit values must be set on a per compute node basis in nova.conf. > > Details: as of Ocata, instead of considering all compute nodes at the > start of scheduler filtering, an optimization has been added to query > resource capacity from placement and prune the compute node list with > the result *before* any filters are applied. Placement tracks resource > capacity and usage and does *not* track aggregate metadata [1]. Because > of this, placement cannot consider aggregate-based overcommit and will > exclude compute nodes that do not have capacity based on per compute > node overcommit. > > How to prepare: if you have been relying on per aggregate overcommit, > during your upgrade to Ocata, you must change to using per compute node > overcommit ratios in order for your scheduling behavior to stay > consistent. Otherwise, you may notice increased NoValidHost scheduling > failures as the aggregate-based overcommit is no longer being > considered. You can safely remove the AggregateCoreFilter, > AggregateRamFilter, and AggregateDiskFilter from your enabled_filters > and you do not need to replace them with any other core/ram/disk > filters. The placement query takes care of the core/ram/disk filtering > instead, so CoreFilter, RamFilter, and DiskFilter are redundant. > > Thanks, > -melanie > > [1] Placement has been a new slate for resource management and prior to > placement, there were conflicts between the different methods for > setting overcommit ratios that were never addressed, such as, "which > value to take if a compute node has overcommit set AND the aggregate has > it set? Which takes precedence?" And, "if a compute node is in more than > one aggregate, which overcommit value should be taken?" So, the > ambiguities were not something that was desirable to bring forward into > placement. > > __________________________________________________________________________ > OpenStack Development Mailing List (not for usage questions) > Unsubscribe: OpenStack-dev-request at lists.openstack.org?subject:unsubscribe > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev -- Thanks, Matt From massimo.sgaravatto at gmail.com Tue Feb 6 05:44:17 2018 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 6 Feb 2018 06:44:17 +0100 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects Message-ID: Hi I want to partition my OpenStack cloud so that: - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy I read that CERN addressed this use case implementing the ProjectsToAggregateFilter but, as far as I understand, this in-house developments eventually wasn't pushed upstream. So I am trying to rely on the AggregateMultiTenancyIsolation filter to create 2 host aggregates: - the first one including C1, C2, ... Cx and with filter_tenant_id=p1, p2, .., pn - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. pm But if I try to specify the long list of projects, I get:a "Value ... is too long" error message [*]. I can see two workarounds for this problem: 1) Create an host aggregate per project: HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 HA2 including CA1, C2, ... 
Cx and with filter_tenant_id=p2 etc 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates and having each flavor visible only by a set of projects, and tagged with a specific string that should match the value specified in the correspondent host aggregate Is this correct ? Can you see better options ? Thanks, Massimo [*] # nova aggregate-set-metadata 1 filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298 ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id. Value: ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298. u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' is too long (HTTP 400) (Request-ID: req-b971d686-72e5-4c54-aaa1-fef5eb7c7001) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Feb 6 09:26:11 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 06 Feb 2018 09:26:11 +0000 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: Aren’t CellsV2 more adapted to what you’re trying to do? Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto < massimo.sgaravatto at gmail.com> a écrit : > Hi > > I want to partition my OpenStack cloud so that: > > - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx > - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy > > I read that CERN addressed this use case implementing the > ProjectsToAggregateFilter but, as far as I understand, this in-house > developments eventually wasn't pushed upstream. > > So I am trying to rely on the AggregateMultiTenancyIsolation filter to > create 2 host aggregates: > > - the first one including C1, C2, ... Cx and with filter_tenant_id=p1, p2, > .., pn > - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. pm > > > But if I try to specify the long list of projects, I get:a "Value ... is > too long" error message [*]. > > I can see two workarounds for this problem: > > 1) Create an host aggregate per project: > > HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 > HA2 including CA1, C2, ... Cx and with filter_tenant_id=p2 > etc > > 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates and > having each flavor visible only by a set of projects, and tagged with a > specific string that should match the value specified in the correspondent > host aggregate > > Is this correct ? Can you see better options ? 
> > Thanks, Massimo > > > > [*] > # nova aggregate-set-metadata 1 > filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298 > ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id. > Value: > ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298. > u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' > is too long (HTTP 400) (Request-ID: > req-b971d686-72e5-4c54-aaa1-fef5eb7c7001) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Feb 6 09:41:12 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 06 Feb 2018 09:41:12 +0000 Subject: [Openstack-operators] Octavia LBaaS - networking requirements Message-ID: Hi guys, I’m wondering if the Octavia lb-mgmt-net can be a L2 provider network instead of a neutron L3 vxlan ? Is Octavia specifically relying on L3 networking or can it operate without neutron L3 features ? I didn't find anything specifically related to the network requirements except for the network itself. Thanks guys! -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Tue Feb 6 09:53:42 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Tue, 6 Feb 2018 11:53:42 +0200 Subject: [Openstack-operators] Octavia LBaaS - networking requirements In-Reply-To: References: Message-ID: Hi Flint, I think, Octavia expects reachibility between components over management network, regardless of network's technology. On 2/6/18 11:41 AM, Flint WALRUS wrote: > Hi guys, I’m wondering if the Octavia lb-mgmt-net can be a L2 provider > network instead of a neutron L3 vxlan ? > > Is Octavia specifically relying on L3 networking or can it operate > without neutron L3 features ? > > I didn't find anything specifically related to the network > requirements except for the network itself. > > Thanks guys! > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gael.therond at gmail.com Tue Feb 6 09:57:06 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 06 Feb 2018 09:57:06 +0000 Subject: [Openstack-operators] Octavia LBaaS - networking requirements In-Reply-To: References: Message-ID: Ok, that’s what I was understanding from the documentation but as I couldn’t find any information related to the L3 specifics I prefer to have another check that mine only x) I’ll have to install and operate Octavia within an unusual L2 only network and I would like to be sure I’ll not push myself from the cliff :-) Le mar. 6 févr. 2018 à 10:53, Volodymyr Litovka a écrit : > Hi Flint, > > I think, Octavia expects reachibility between components over management > network, regardless of network's technology. > > > On 2/6/18 11:41 AM, Flint WALRUS wrote: > > Hi guys, I’m wondering if the Octavia lb-mgmt-net can be a L2 provider > network instead of a neutron L3 vxlan ? > > Is Octavia specifically relying on L3 networking or can it operate without > neutron L3 features ? > > I didn't find anything specifically related to the network requirements > except for the network itself. > > Thanks guys! > > > _______________________________________________ > OpenStack-operators mailing listOpenStack-operators at lists.openstack.orghttp://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From massimo.sgaravatto at gmail.com Tue Feb 6 10:15:52 2018 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Tue, 6 Feb 2018 11:15:52 +0100 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: Thanks for your answer. As far as I understand CellsV2 are present in Pike and later. I need to implement such use case in an Ocata Openstack based cloud Thanks, Massimo 2018-02-06 10:26 GMT+01:00 Flint WALRUS : > Aren’t CellsV2 more adapted to what you’re trying to do? > Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto < > massimo.sgaravatto at gmail.com> a écrit : > >> Hi >> >> I want to partition my OpenStack cloud so that: >> >> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx >> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy >> >> I read that CERN addressed this use case implementing the >> ProjectsToAggregateFilter but, as far as I understand, this in-house >> developments eventually wasn't pushed upstream. >> >> So I am trying to rely on the AggregateMultiTenancyIsolation filter to >> create 2 host aggregates: >> >> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1, >> p2, .., pn >> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. >> pm >> >> >> But if I try to specify the long list of projects, I get:a "Value ... is >> too long" error message [*]. >> >> I can see two workarounds for this problem: >> >> 1) Create an host aggregate per project: >> >> HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 >> HA2 including CA1, C2, ... Cx and with filter_tenant_id=p2 >> etc >> >> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates >> and having each flavor visible only by a set of projects, and tagged with a >> specific string that should match the value specified in the correspondent >> host aggregate >> >> Is this correct ? Can you see better options ? 
>> >> Thanks, Massimo >> >> >> >> [*] >> # nova aggregate-set-metadata 1 filter_tenant_id= >> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019, >> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397, >> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071, >> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f, >> 2b92483138dc4a61b1133c8c177ff298 >> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id. >> Value: ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019, >> a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397, >> d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071, >> ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f, >> 2b92483138dc4a61b1133c8c177ff298. u'ee1865a76440481cbcff08544c7d580a, >> 1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565, >> b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e, >> e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a, >> 29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' is >> too long (HTTP 400) (Request-ID: req-b971d686-72e5-4c54-aaa1- >> fef5eb7c7001) >> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at gmail.com Tue Feb 6 10:38:17 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 06 Feb 2018 10:38:17 +0000 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: If you’re willing to, I could share with you a way to get a FrankeinCloud using a Docker method with kolla to get a pike/queens/whatever cloud at the same time that your Ocata one. Le mar. 6 févr. 2018 à 11:15, Massimo Sgaravatto < massimo.sgaravatto at gmail.com> a écrit : > Thanks for your answer. > As far as I understand CellsV2 are present in Pike and later. I need to > implement such use case in an Ocata Openstack based cloud > > Thanks, Massimo > > 2018-02-06 10:26 GMT+01:00 Flint WALRUS : > >> Aren’t CellsV2 more adapted to what you’re trying to do? >> Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto < >> massimo.sgaravatto at gmail.com> a écrit : >> >>> Hi >>> >>> I want to partition my OpenStack cloud so that: >>> >>> - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx >>> - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy >>> >>> I read that CERN addressed this use case implementing the >>> ProjectsToAggregateFilter but, as far as I understand, this in-house >>> developments eventually wasn't pushed upstream. >>> >>> So I am trying to rely on the AggregateMultiTenancyIsolation filter to >>> create 2 host aggregates: >>> >>> - the first one including C1, C2, ... Cx and with filter_tenant_id=p1, >>> p2, .., pn >>> - the second one including Cx+1 ... Cy and with filter_tenant_id=pn+1.. >>> pm >>> >>> >>> But if I try to specify the long list of projects, I get:a "Value ... is >>> too long" error message [*]. >>> >>> I can see two workarounds for this problem: >>> >>> 1) Create an host aggregate per project: >>> >>> HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 >>> HA2 including CA1, C2, ... 
Cx and with filter_tenant_id=p2 >>> etc >>> >>> 2) Use the AggregateInstanceExtraSpecsFilter, creating two aggregates >>> and having each flavor visible only by a set of projects, and tagged with a >>> specific string that should match the value specified in the correspondent >>> host aggregate >>> >>> Is this correct ? Can you see better options ? >>> >>> Thanks, Massimo >>> >>> >>> >>> [*] >>> # nova aggregate-set-metadata 1 >>> filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298 >>> ERROR (BadRequest): Invalid input for field/attribute filter_tenant_id. >>> Value: >>> ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298. >>> u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' >>> is too long (HTTP 400) (Request-ID: >>> req-b971d686-72e5-4c54-aaa1-fef5eb7c7001) >>> >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >>> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jaypipes at gmail.com Tue Feb 6 13:48:34 2018 From: jaypipes at gmail.com (Jay Pipes) Date: Tue, 6 Feb 2018 08:48:34 -0500 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: <89b6ccc5-65eb-7db5-fe52-c51e1f8ff48e@gmail.com> On 02/06/2018 04:26 AM, Flint WALRUS wrote: > Aren’t CellsV2 more adapted to what you’re trying to do? No, cellsv2 are not user-facing nor is there a way to segregate certain tenants on to certain cells. Host aggregates are the appropriate way to structure this grouping. Best, -jay > Le mar. 6 févr. 2018 à 06:45, Massimo Sgaravatto > > a > écrit : > > Hi > > I want to partition my OpenStack cloud so that: > > - Project p1, p2, .., pn can use only compute nodes C1, C2, ... Cx > - Projects pn+1.. pm can use only compute nodes Cx+1 ... Cy > > I read that CERN addressed this use case implementing the > ProjectsToAggregateFilter but, as far as I understand, this in-house > developments eventually wasn't pushed upstream. > > So I am trying to rely on the  AggregateMultiTenancyIsolation filter > to create  2 host aggregates: > > - the first one including C1, C2, ... Cx and with > filter_tenant_id=p1, p2, .., pn > - the second one including Cx+1 ... Cy and with > filter_tenant_id=pn+1.. pm > > > But if I try to specify the long list of projects, I get:a "Value > ... is too long" error message [*]. > > I can see two workarounds for this problem: > > 1) Create an host aggregate per project: > > HA1 including CA1, C2, ... Cx and with filter_tenant_id=p1 > HA2 including CA1, C2, ... 
Cx and with filter_tenant_id=p2 > etc > > 2) Use the AggregateInstanceExtraSpecsFilter, creating two > aggregates and having each flavor visible only by a set of projects, > and tagged with a specific string that should match the value > specified in the correspondent host aggregate > > Is this correct ? Can you see better options ? > > Thanks, Massimo > > > > [*] > # nova aggregate-set-metadata 1 > filter_tenant_id=ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298 > ERROR (BadRequest): Invalid input for field/attribute > filter_tenant_id. Value: > ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298. > u'ee1865a76440481cbcff08544c7d580a,1c63ef80901b46bebc9e1e8e10d85019,a22db12575694c9e9f8650dde73ef565,b4a1039f2f7c419596662950d8a1d397,d8ce7a449d8e4d68bbf2c40314da357e,e81df4c0b493439abb8b85bfd4cbe071,ee1865a76440481cbcff08544c7d580a,29d6233c0b2741e2a30324079f05aa9f,2b92483138dc4a61b1133c8c177ff298' > is too long (HTTP 400) (Request-ID: > req-b971d686-72e5-4c54-aaa1-fef5eb7c7001) > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From jimmy at openstack.org Tue Feb 6 14:09:57 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 06 Feb 2018 08:09:57 -0600 Subject: [Openstack-operators] Is there an Ops Meetup today? Message-ID: <5A79B735.1090508@openstack.org> Was it canceled? https://wiki.openstack.org/wiki/Ops_Meetups_Team From emccormick at cirrusseven.com Tue Feb 6 14:14:54 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 6 Feb 2018 09:14:54 -0500 Subject: [Openstack-operators] Is there an Ops Meetup today? In-Reply-To: <5A79B735.1090508@openstack.org> References: <5A79B735.1090508@openstack.org> Message-ID: It was moved to 10am EST die to lots of conflicts. Need to update the wiki. On Feb 6, 2018 9:11 AM, "Jimmy McArthur" wrote: > Was it canceled? > > https://wiki.openstack.org/wiki/Ops_Meetups_Team > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Feb 6 14:50:35 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 6 Feb 2018 09:50:35 -0500 Subject: [Openstack-operators] Ops Meetups team meeting 10 minute warning! Message-ID: Meeting starts in 10 minutes - new regular time. See you on #openstack-operators ! Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mihalis68 at gmail.com Tue Feb 6 16:04:51 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 6 Feb 2018 11:04:51 -0500 Subject: [Openstack-operators] Ops Meetups Team meeting minutes and so much more! Message-ID: Good meeting today, here are the minutes: Meeting ended Tue Feb 6 15:47:56 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:47 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.html 10:48 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.txt 10:48 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-06-15.00.log.html The new time slot is 10am EST always, which if our sums are correct works out to be 1400 UTC during DST and 1500 UTC otherwise. The team wiki page has been updated (see https://wiki.openstack.org/wiki/Ops_Meetups_Team) with this The OpenStack Operations Guide has been successfully converted from openstack documentation to community-edited wiki, see https://wiki.openstack.org/wiki/OpsGuide : A few volunteers such as: myself, Sean McGinnis, David Medberry (comments received) and possibly others plan to modernize this since most of the content is a bit dated, so if you want to contribute please get in touch here on the operators mailing list. Reminder: The next Ops Meetup is coming up in March in Tokyo, see https://www.eventbrite.com/e/openstack-ops-meetup-tokyo-tickets-39089912982 The Meetup after that looks to be either in NYC, or possibly as part of OpenStack Days Nordic. Please attend next weeks meeting to learn more on those possibilities Cheers, Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Tue Feb 6 16:52:33 2018 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 6 Feb 2018 08:52:33 -0800 Subject: [Openstack-operators] Octavia LBaaS - networking requirements In-Reply-To: References: Message-ID: No issue with using an L2 network for the lb-mgmt-net. It only requires the following: Controllers can reach amphora-agent IPs on the TCP bind_port (default 9443) Amphora-agents can reach the controllers in the controller_ip_port_list via UDP (default 5555) This can be via an L2 lb-mgmt-net (provider or other) or in some routing combination. I have started work on a detailed installation guide that will cover these options. Hopefully I can get it done during Rocky. Michael On Tue, Feb 6, 2018 at 1:57 AM, Flint WALRUS wrote: > Ok, that’s what I was understanding from the documentation but as I couldn’t > find any information related to the L3 specifics I prefer to have another > check that mine only x) > > I’ll have to install and operate Octavia within an unusual L2 only network > and I would like to be sure I’ll not push myself from the cliff :-) > > Le mar. 6 févr. 2018 à 10:53, Volodymyr Litovka a écrit : >> >> Hi Flint, >> >> I think, Octavia expects reachibility between components over management >> network, regardless of network's technology. >> >> >> On 2/6/18 11:41 AM, Flint WALRUS wrote: >> >> Hi guys, I’m wondering if the Octavia lb-mgmt-net can be a L2 provider >> network instead of a neutron L3 vxlan ? >> >> Is Octavia specifically relying on L3 networking or can it operate without >> neutron L3 features ? >> >> I didn't find anything specifically related to the network requirements >> except for the network itself. >> >> Thanks guys! 
>> >> >> _______________________________________________ >> OpenStack-operators mailing list >> OpenStack-operators at lists.openstack.org >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." -- Thomas Edison > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > From gael.therond at gmail.com Tue Feb 6 17:11:57 2018 From: gael.therond at gmail.com (Flint WALRUS) Date: Tue, 06 Feb 2018 17:11:57 +0000 Subject: [Openstack-operators] Octavia LBaaS - networking requirements In-Reply-To: References: Message-ID: Thank you very much for this cristal clear statement, in the meantime I watched the sidney Octavia session and had answers on side questions. The Openstack community is really kickass, thanks everyone! Le mar. 6 févr. 2018 à 17:52, Michael Johnson a écrit : > No issue with using an L2 network for the lb-mgmt-net. > > It only requires the following: > Controllers can reach amphora-agent IPs on the TCP bind_port (default 9443) > Amphora-agents can reach the controllers in the > controller_ip_port_list via UDP (default 5555) > > This can be via an L2 lb-mgmt-net (provider or other) or in some > routing combination. > > I have started work on a detailed installation guide that will cover > these options. Hopefully I can get it done during Rocky. > > Michael > > On Tue, Feb 6, 2018 at 1:57 AM, Flint WALRUS > wrote: > > Ok, that’s what I was understanding from the documentation but as I > couldn’t > > find any information related to the L3 specifics I prefer to have another > > check that mine only x) > > > > I’ll have to install and operate Octavia within an unusual L2 only > network > > and I would like to be sure I’ll not push myself from the cliff :-) > > > > Le mar. 6 févr. 2018 à 10:53, Volodymyr Litovka a > écrit : > >> > >> Hi Flint, > >> > >> I think, Octavia expects reachibility between components over management > >> network, regardless of network's technology. > >> > >> > >> On 2/6/18 11:41 AM, Flint WALRUS wrote: > >> > >> Hi guys, I’m wondering if the Octavia lb-mgmt-net can be a L2 provider > >> network instead of a neutron L3 vxlan ? > >> > >> Is Octavia specifically relying on L3 networking or can it operate > without > >> neutron L3 features ? > >> > >> I didn't find anything specifically related to the network requirements > >> except for the network itself. > >> > >> Thanks guys! > >> > >> > >> _______________________________________________ > >> OpenStack-operators mailing list > >> OpenStack-operators at lists.openstack.org > >> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > >> > >> > >> -- > >> Volodymyr Litovka > >> "Vision without Execution is Hallucination." -- Thomas Edison > > > > > > _______________________________________________ > > OpenStack-operators mailing list > > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitskrieg at bitskrieg.net Tue Feb 6 20:14:46 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Tue, 06 Feb 2018 15:14:46 -0500 Subject: [Openstack-operators] [nova] nova-compute automatically disabling itself? 
In-Reply-To: References: <6dcf6baa4104c1923fc8e954dbf2737a@bitskrieg.net> <172e32f3-e23e-15ce-33fa-6cd2af93eb73@gmail.com> Message-ID: <3a0453550ab5402d711f3ff06624b270@bitskrieg.net> All, This was the core issue - setting consecutive_build_service_disable_threshold = 0 in nova.conf (on controllers and compute nodes) solved this. It was being triggered by neutron dropping requests (and/or responses) for vif-plugging due to cpu usage on the neutron endpoints being pegged at 100% for too long. We increased our rpc_response_timeout value and this issue appears to be resolved for the time being. We can probably safely remove the consecutive_build_service_disable_threshold option at this point, but we would rather have intermittent build failures rather than compute nodes falling over in the future. Slightly related, we are noticing that neutron endpoints are using noticeably more CPU time recently than in the past w/ a similar workload (we run linuxbridge w/ vxlan). We believe this is tied to our application of KPTI for meltdown mitigation across the various hosts in our cluster (the timeline matches). Has anyone else experienced similar impacts or can suggest anything to try to lessen the impact? --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net On 2018-01-31 04:47 PM, Chris Apsey wrote: > That looks promising. I'll report back to confirm the solution. > > Thanks! > > --- > v/r > > Chris Apsey > bitskrieg at bitskrieg.net > https://www.bitskrieg.net > > On 2018-01-31 04:40 PM, Matt Riedemann wrote: >> On 1/31/2018 3:16 PM, Chris Apsey wrote: >>> All, >>> >>> Running in to a strange issue I haven't seen before. >>> >>> Randomly, the nova-compute services on compute nodes are disabling >>> themselves (as if someone ran openstack compute service set --disable >>> hostX nova-compute.  When this happens, the node continues to report >>> itself as 'up' - the service is just disabled.  As a result, if >>> enough of these occur, we get scheduling errors due to lack of >>> available resources (which makes sense).  Re-enabling them works just >>> fine and they continue on as if nothing happened.  I looked through >>> the logs and I can find the API calls where we re-enable the services >>> (PUT /v2.1/os-services/enable), but I do not see any API calls where >>> the services are getting disabled initially. >>> >>> Is anyone aware of any cases where compute nodes will automatically >>> disable their nova-compute service on their own, or has anyone seen >>> this before and might know a root cause?  We have plenty of spare >>> vcpus and RAM on each node - like less than 25% utilization (both in >>> absolute terms and in terms of applied ratios). >>> >>> We're seeing follow-on errors regarding rmq messages getting lost and >>> vif-plug failures, but we think those are a symptom, not a cause. >>> >>> Currently running pike on Xenial. >>> >>> --- >>> v/r >>> >>> Chris Apsey >>> bitskrieg at bitskrieg.net >>> https://www.bitskrieg.net >>> >>> _______________________________________________ >>> OpenStack-operators mailing list >>> OpenStack-operators at lists.openstack.org >>> http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators >> >> >> This is actually a feature added in Pike: >> >> https://review.openstack.org/#/c/463597/ >> >> This came up in discussion with operators at the Forum in Boston. >> >> The vif-plug failures are likely the reason those computes are getting >> disabled. 
>> >> There is a config option "consecutive_build_service_disable_threshold" >> which you can set to disable the auto-disable behavior as some have >> experienced issues with it: >> >> https://bugs.launchpad.net/nova/+bug/1742102 > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators From jimmy at openstack.org Tue Feb 6 20:41:18 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 06 Feb 2018 14:41:18 -0600 Subject: [Openstack-operators] [openstack-community] Feb 8 CFP Deadline - OpenStack Summit Vancouver Message-ID: <5A7A12EE.4050905@openstack.org> Hi everyone, The Vancouver Summit CFP closes in two days: February 8 at 11:59pm Pacific Time (February 9 at 6:59am UTC). For the Vancouver, the Summit Tracks have evolved to cover the entire open infrastructure landscape. Get your talks in for: • Container infrastructure • Edge computing • CI/CD • HPC/GPU/AI • Open source community • OpenStack private, public and hybrid cloud The Programming Committees for each Track have provided suggest topics for Summit sessions. View topic ideas for each track and submit your proposals before this week's deadline! If you have any questions, please email summit at openstack.org . Cheers, Kendall Kendall Waters OpenStack Marketing kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From mriedemos at gmail.com Wed Feb 7 00:44:41 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 6 Feb 2018 18:44:41 -0600 Subject: [Openstack-operators] [nova] nova-compute automatically disabling itself? In-Reply-To: <3a0453550ab5402d711f3ff06624b270@bitskrieg.net> References: <6dcf6baa4104c1923fc8e954dbf2737a@bitskrieg.net> <172e32f3-e23e-15ce-33fa-6cd2af93eb73@gmail.com> <3a0453550ab5402d711f3ff06624b270@bitskrieg.net> Message-ID: <37d7ad5e-055f-991a-47da-5f263a898055@gmail.com> On 2/6/2018 2:14 PM, Chris Apsey wrote: > but we would rather have intermittent build failures rather than compute > nodes falling over in the future. Note that once a compute has a successful build, the consecutive build failures counter is reset. So if your limit is the default (10) and you have 10 failures in a row, the compute service is auto-disabled. But if you have say 5 failures and then a pass, it's reset to 0 failures. Obviously if you're doing a pack-first scheduling strategy rather than spreading instances across the deployment, a burst of failures could easily disable a compute, especially if that host is overloaded like you saw. I'm not sure if rescheduling is helping you or not - that would be useful information since we consider the need to reschedule off a failed compute host as a bad thing. At the Forum in Boston when this idea came up, it was specifically for the case that operators in the room didn't want a bad compute to become a "black hole" in their deployment causing lots of reschedules until they get that one fixed. -- Thanks, Matt From thierry at openstack.org Wed Feb 7 09:11:42 2018 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 7 Feb 2018 10:11:42 +0100 Subject: [Openstack-operators] Rocky cycle goals Message-ID: <48aa9855-9313-4c96-7563-d2617487d6aa@openstack.org> Hi everyone, In case you missed it, we are at the end of the community goals selection process for the upcoming Rocky cycle. 
The current selection is a mix of one ops-facing goal (ability to set logs to debug without restarting the service, using mutable configuration), and one dev-facing tech debt reduction goal (going further in getting rid of mox). Emilien posted: > TC voted (but not approved yet) and selected 2 goals that will likely be approved if no strong voice is raised this week: > > Remove mox > https://review.openstack.org/#/c/532361/ > > Toggle the debug option at runtime > https://review.openstack.org/#/c/534605/ > > If you have any comment on these 2 selected goals, please say it now otherwise TC will approve it and we'll discuss about details at the PTG. Let us know of any comment. You can find the other proposed goals listed at: https://wiki.openstack.org/wiki/Technical_Committee_Tracker -- Thierry Carrez (ttx) From jimmy at openstack.org Thu Feb 8 19:10:58 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Thu, 08 Feb 2018 13:10:58 -0600 Subject: [Openstack-operators] [OpenStack Foundation] CFP Deadline Today - OpenStack Summit Vancouver Message-ID: <5A7CA0C2.4010809@openstack.org> Hi everyone, The Vancouver Summit CFP closes *TODAY at 11:59pm Pacific Time (February 9 at 6:59am UTC).* Get your talks in for: • Container infrastructure • Edge computing • CI/CD • HPC/GPU/AI • Open source community • OpenStack private, public and hybrid cloud View topic ideas for each track HERE and submit your proposals before the deadline! Please note, the sessions that are included in your sponsorship package or purchased as an add-on do not go through the CFP process. If you have any questions, please email summit at openstack.org . Cheers, Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Thu Feb 8 20:53:14 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Thu, 8 Feb 2018 22:53:14 +0200 Subject: [Openstack-operators] diskimage-builder: prepare ubuntu 17.x images Message-ID: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Hi colleagues, does anybody here know how to prepare Ubuntu Artful (17.10) image using diskimage-builder? diskimage-builder use the following naming style for download - $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, these archives for artful (and bionic) are absent on cloud-images.ubuntu.com. There are just different kinds of images, not source tree as in -root archives. I will appreciate any ideas or knowledge how to customize 17.10-based image using diskimage-builder or in diskimage-builder-like fashion. Thanks! -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at medberry.net Thu Feb 8 21:32:25 2018 From: openstack at medberry.net (David Medberry) Date: Thu, 8 Feb 2018 14:32:25 -0700 Subject: [Openstack-operators] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Message-ID: Subscribe to this bug and click the "This affects me." link near the top. https://bugs.launchpad.net/cloud-images/+bug/1585233 On Thu, Feb 8, 2018 at 1:53 PM, Volodymyr Litovka wrote: > Hi colleagues, > > does anybody here know how to prepare Ubuntu Artful (17.10) image using > diskimage-builder? 
> > diskimage-builder use the following naming style for download - > $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz > > and while "-root" names are there for trusty/amd64 and xenial/amd64 > distros, these archives for artful (and bionic) are absent on > cloud-images.ubuntu.com. There are just different kinds of images, not > source tree as in -root archives. > > I will appreciate any ideas or knowledge how to customize 17.10-based > image using diskimage-builder or in diskimage-builder-like fashion. > > Thanks! > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Thu Feb 8 21:36:51 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Fri, 9 Feb 2018 10:36:51 +1300 Subject: [Openstack-operators] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack Message-ID: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> Hello fellow Public Cloud operators! I'm quite sorry I haven't been able to attend the last few public cloud meetings, have been deep in various bits of work, and been very asleep when the meetings normally were. That said, I have some interesting things some of you might like to play with: https://github.com/catalyst-cloud/adjutant-mfa The above is a collection of plugins for Keystone, Horizon, and Adjutant that help facilitate MFA on an OpenStack cloud. Note, that while this is a working solution, it isn't merged or part of anything official upstream, just using the various plugin mechanisms. It uses existing pieces of working logic, and does nothing that isn't able to be migrated from. My plan for the Rocky cycle is to work in Keystone and address the missing pieces I need to get MFA working properly throughout OpenStack in an actually useful way, and I'll provide updates for that once I have the specs ready to submit (am waiting until start of Rocky for that). The good thing, is that this current solution for MFA works, and it can be migrated from to the methods I intend to work on for Rocky. The same credential models will be used in Keystone, and I will write tools to take users with TOTP credentials and configure auth rules for them for more official MFA support in Keystone once it is useful. We will be deploying the above MFA solution in our cloud in the next Month, and I'll provide you some updates as to how that goes, but do play with it yourselves, and tell me what you think. The solution does require technical domain knowledge to setup, but the docs in the above repo should hopefully be straightforward, if not, get in touch and I can help. I hope to have some other useful bits of 'missing public cloud features' updates for you soon too. Cheers, Adrian Turjak From lbragstad at gmail.com Fri Feb 9 02:50:16 2018 From: lbragstad at gmail.com (Lance Bragstad) Date: Thu, 8 Feb 2018 20:50:16 -0600 Subject: [Openstack-operators] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack In-Reply-To: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> References: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> Message-ID: <9bc0878f-284f-c6f0-2999-8c53dc6f183e@gmail.com> On 02/08/2018 03:36 PM, Adrian Turjak wrote: > Hello fellow Public Cloud operators! 
> > I'm quite sorry I haven't been able to attend the last few public cloud meetings, have been deep in various bits of work, and been very asleep when the meetings normally were. > > That said, I have some interesting things some of you might like to play with: > https://github.com/catalyst-cloud/adjutant-mfa > > The above is a collection of plugins for Keystone, Horizon, and Adjutant that help facilitate MFA on an OpenStack cloud. Note, that while this is a working solution, it isn't merged or part of anything official upstream, just using the various plugin mechanisms. It uses existing pieces of working logic, and does nothing that isn't able to be migrated from. Thanks for sharing! > My plan for the Rocky cycle is to work in Keystone and address the missing pieces I need to get MFA working properly throughout OpenStack in an actually useful way, and I'll provide updates for that once I have the specs ready to submit (am waiting until start of Rocky for that). The good thing, is that this current solution for MFA works, and it can be migrated from to the methods I intend to work on for Rocky. The same credential models will be used in Keystone, and I will write tools to take users with TOTP credentials and configure auth rules for them for more official MFA support in Keystone once it is useful. Are you planning to revive the previous proposal [0]? We should have stable/queens branch by EOW, so Rocky development will be here soon. Are you planning on attending the PTG? It might be valuable to discuss what you have and how we can integrate it upstream. I thought I remember the issue being policy related (where admins were required to update user secrets and it wasn't necessarily a self-serving API). Now that we're in a better place with system-scope, we might be able to move the ball forward a bit regarding your use case. [0] https://review.openstack.org/#/c/345705/ > > We will be deploying the above MFA solution in our cloud in the next Month, and I'll provide you some updates as to how that goes, but do play with it yourselves, and tell me what you think. The solution does require technical domain knowledge to setup, but the docs in the above repo should hopefully be straightforward, if not, get in touch and I can help. > > I hope to have some other useful bits of 'missing public cloud features' updates for you soon too. > > Cheers, > > Adrian Turjak > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From tony at bakeyournoodle.com Fri Feb 9 04:01:47 2018 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 9 Feb 2018 15:01:47 +1100 Subject: [Openstack-operators] [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> Message-ID: <20180209040147.GR23143@thor.bakeyournoodle.com> On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: > Hi colleagues, > > does anybody here know how to prepare Ubuntu Artful (17.10) image using > diskimage-builder? 
> > diskimage-builder use the following naming style for download - > $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz > > and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, > these archives for artful (and bionic) are absent on > cloud-images.ubuntu.com. There are just different kinds of images, not > source tree as in -root archives. > > I will appreciate any ideas or knowledge how to customize 17.10-based image > using diskimage-builder or in diskimage-builder-like fashion. You might like to investigate the ubuntu-minimal DIB element which will build your ubuntu image from apt rather than starting with the pre-built image. In the meantime I'll look at how we can consume the .img file, which is similar to what we'd need to do for Fedora Yours Tony. -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From tobias at citynetwork.se Fri Feb 9 08:50:18 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Fri, 9 Feb 2018 09:50:18 +0100 Subject: [Openstack-operators] [Openstack-sigs] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack In-Reply-To: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> References: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> Message-ID: <76676011-0c42-7ec7-a8d2-67d485aaae2d@citynetwork.se> Awesome reading Adrian! This is really important stuff for the public cloud side of things, and much appreciated! Are you planning to come to the PTG in Dublin? Cheers, Tobias On 2018-02-08 22:36, Adrian Turjak wrote: > Hello fellow Public Cloud operators! > > I'm quite sorry I haven't been able to attend the last few public cloud meetings, have been deep in various bits of work, and been very asleep when the meetings normally were. > > That said, I have some interesting things some of you might like to play with: > https://github.com/catalyst-cloud/adjutant-mfa > > The above is a collection of plugins for Keystone, Horizon, and Adjutant that help facilitate MFA on an OpenStack cloud. Note, that while this is a working solution, it isn't merged or part of anything official upstream, just using the various plugin mechanisms. It uses existing pieces of working logic, and does nothing that isn't able to be migrated from. > > My plan for the Rocky cycle is to work in Keystone and address the missing pieces I need to get MFA working properly throughout OpenStack in an actually useful way, and I'll provide updates for that once I have the specs ready to submit (am waiting until start of Rocky for that). The good thing, is that this current solution for MFA works, and it can be migrated from to the methods I intend to work on for Rocky. The same credential models will be used in Keystone, and I will write tools to take users with TOTP credentials and configure auth rules for them for more official MFA support in Keystone once it is useful. > > We will be deploying the above MFA solution in our cloud in the next Month, and I'll provide you some updates as to how that goes, but do play with it yourselves, and tell me what you think. The solution does require technical domain knowledge to setup, but the docs in the above repo should hopefully be straightforward, if not, get in touch and I can help. > > I hope to have some other useful bits of 'missing public cloud features' updates for you soon too. 
> > Cheers, > > Adrian Turjak > > > > _______________________________________________ > openstack-sigs mailing list > openstack-sigs at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-sigs -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From sahinerokan at gmail.com Fri Feb 9 09:40:38 2018 From: sahinerokan at gmail.com (okan sahiner) Date: Fri, 9 Feb 2018 10:40:38 +0100 Subject: [Openstack-operators] DIB does not install trove-guestagent and mysql server in the image Message-ID: Hi everyone, I followed the Trove documentation mentioned here: https://docs.openstack.org/trove/latest/admin/building_guest_images.html However, the image I came up with does not have the trove guest agent or mysql-server installed. Elements I have used: ubuntu vm cloud-init-datasources ubuntu-guest ubuntu-mysql Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From doka.ua at gmx.com Fri Feb 9 11:03:40 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 9 Feb 2018 13:03:40 +0200 Subject: [Openstack-operators] [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <20180209040147.GR23143@thor.bakeyournoodle.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> <20180209040147.GR23143@thor.bakeyournoodle.com> Message-ID: <687756ea-1989-418d-73d7-614a501078d7@gmx.com> Hi Tony, On 2/9/18 6:01 AM, Tony Breeds wrote: > On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: >> Hi colleagues, >> >> does anybody here know how to prepare Ubuntu Artful (17.10) image using >> diskimage-builder? >> >> diskimage-builder use the following naming style for download - >> $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz >> >> and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, >> these archives for artful (and bionic) are absent on >> cloud-images.ubuntu.com. There are just different kinds of images, not >> source tree as in -root archives. >> >> I will appreciate any ideas or knowledge how to customize 17.10-based image >> using diskimage-builder or in diskimage-builder-like fashion. > You might like to investigate the ubuntu-minimal DIB element which will > build your ubuntu image from apt rather than starting with the pre-built > image. good idea, but with export DIST="ubuntu-minimal" export DIB_RELEASE=artful diskimage-builder fails with the following debug: 2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.427 | + source /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.427 | ++++ dirname /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.428 | +++ PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..' 
2018-02-09 10:33:22.428 | +++ dib-init-system 2018-02-09 10:33:22.429 | + set -eu 2018-02-09 10:33:22.429 | + set -o pipefail 2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']' 2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]] 2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]] 2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]] 2018-02-09 10:33:22.429 | + echo 'Unknown init system' 2018-02-09 10:36:54.852 | + exit 1 2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system while earlier it find systemd 2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.223 | + source /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.223 | ++++ dirname /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash 2018-02-09 10:33:22.224 | +++ PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/.. 2018-02-09 10:33:22.224 | +++ dib-init-system 2018-02-09 10:33:22.225 | + set -eu 2018-02-09 10:33:22.225 | + set -o pipefail 2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f /bin/systemctl ']' 2018-02-09 10:33:22.225 | + echo systemd 2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd 2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM it seems somewhere in the middle something happens to systemd package > In the meantime I'll look at how we can consume the .img file, which is > similar to what we'd need to do for Fedora script diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball contains the function get_ubuntu_tarball() which, after all checks, does the following: sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE probably, the easiest hack around the issue is to change above to smth like sudo ( mount -o loop tar cv  | tar xv -C $TARGET_ROOT ... ) Will try this. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From doka.ua at gmx.com Fri Feb 9 13:48:53 2018 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 9 Feb 2018 15:48:53 +0200 Subject: [Openstack-operators] [Openstack] diskimage-builder: prepare ubuntu 17.x images In-Reply-To: <687756ea-1989-418d-73d7-614a501078d7@gmx.com> References: <5ff88bac-2298-7ba2-8f04-46263aee4693@gmx.com> <20180209040147.GR23143@thor.bakeyournoodle.com> <687756ea-1989-418d-73d7-614a501078d7@gmx.com> Message-ID: Hi Tony, this patch works for me: --- diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball.orig 2018-02-09 12:20:02.117793234 +0000 +++ diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball 2018-02-09 13:25:48.654868263 +0000 @@ -14,7 +14,9 @@  DIB_CLOUD_IMAGES=${DIB_CLOUD_IMAGES:-http://cloud-images.ubuntu.com}  DIB_RELEASE=${DIB_RELEASE:-trusty} -BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz} +SUFFIX="-root" +[[ $DIB_RELEASE =~ (artful|bionic) ]] && SUFFIX="" +BASE_IMAGE_FILE=${BASE_IMAGE_FILE:-$DIB_RELEASE-server-cloudimg-${ARCH}${SUFFIX}.tar.gz}  SHA256SUMS=${SHA256SUMS:-https://${DIB_CLOUD_IMAGES##http?(s)://}/$DIB_RELEASE/current/SHA256SUMS}  CACHED_FILE=$DIB_IMAGE_CACHE/$BASE_IMAGE_FILE  CACHED_FILE_LOCK=$DIB_LOCKFILES/$BASE_IMAGE_FILE.lock @@ -45,9 +47,25 @@          fi          popd      fi -    # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between -    # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host) -    sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +    if [ -n "$SUFFIX" ]; then +      # Extract the base image (use --numeric-owner to avoid UID/GID mismatch between +      # image tarball and host OS e.g. when building Ubuntu image on an openSUSE host) +      sudo tar -C $TARGET_ROOT --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +    else +      # Unpack image to IDIR, mount it on MDIR, copy it to TARGET_ROOT +      IDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`" +      MDIR="/tmp/`head /dev/urandom | tr -dc A-Za-z0-9 | head -c 13 ; echo ''`" +      sudo mkdir $IDIR $MDIR +      sudo tar -C $IDIR --numeric-owner -xzf $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE +      sudo mount -o loop -t auto $IDIR/$DIB_RELEASE-server-cloudimg-${ARCH}.img $MDIR +      pushd $PWD 2>/dev/null +      cd $MDIR +      sudo tar c . | sudo tar x -C $TARGET_ROOT -k --numeric-owner 2>/dev/null +      popd +      # Clean up +      sudo umount $MDIR +      sudo rm -rf $IDIR $MDIR +    fi  }  ( On 2/9/18 1:03 PM, Volodymyr Litovka wrote: > Hi Tony, > > On 2/9/18 6:01 AM, Tony Breeds wrote: >> On Thu, Feb 08, 2018 at 10:53:14PM +0200, Volodymyr Litovka wrote: >>> Hi colleagues, >>> >>> does anybody here know how to prepare Ubuntu Artful (17.10) image using >>> diskimage-builder? >>> >>> diskimage-builder use the following naming style for download - >>> $DIB_RELEASE-server-cloudimg-$ARCH-root.tar.gz >>> >>> and while "-root" names are there for trusty/amd64 and xenial/amd64 distros, >>> these archives for artful (and bionic) are absent on >>> cloud-images.ubuntu.com. There are just different kinds of images, not >>> source tree as in -root archives. >>> >>> I will appreciate any ideas or knowledge how to customize 17.10-based image >>> using diskimage-builder or in diskimage-builder-like fashion. >> You might like to investigate the ubuntu-minimal DIB element which will >> build your ubuntu image from apt rather than starting with the pre-built >> image. 
> good idea, but with > > export DIST="ubuntu-minimal" > export DIB_RELEASE=artful > > diskimage-builder fails with the following debug: > > 2018-02-09 10:33:22.426 | dib-run-parts Sourcing environment file > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.427 | + source > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.427 | ++++ dirname > /tmp/in_target.d/pre-install.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.428 | +++ > PATH='$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/tmp/in_target.d/pre-install.d/../environment.d/..' > 2018-02-09 10:33:22.428 | +++ dib-init-system > 2018-02-09 10:33:22.429 | + set -eu > 2018-02-09 10:33:22.429 | + set -o pipefail > 2018-02-09 10:33:22.429 | + '[' -f /usr/bin/systemctl -o -f > /bin/systemctl ']' > 2018-02-09 10:33:22.429 | + [[ -f /sbin/initctl ]] > 2018-02-09 10:33:22.429 | + [[ -f /etc/gentoo-release ]] > 2018-02-09 10:33:22.429 | + [[ -f /sbin/init ]] > 2018-02-09 10:33:22.429 | + echo 'Unknown init system' > 2018-02-09 10:36:54.852 | + exit 1 > 2018-02-09 10:36:54.853 | ++ DIB_INIT_SYSTEM='Unknown init system > > while earlier it find systemd > > 2018-02-09 10:33:22.221 | dib-run-parts Sourcing environment file > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.223 | + source > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.223 | ++++ dirname > /tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/10-dib-init-system.bash > 2018-02-09 10:33:22.224 | +++ > PATH=/home/doka/DIB/dib/bin:/home/doka/DIB/dib/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/snap/bin:/tmp/dib_build.fJUf6F4d/hooks/extra-data.d/../environment.d/.. > 2018-02-09 10:33:22.224 | +++ dib-init-system > 2018-02-09 10:33:22.225 | + set -eu > 2018-02-09 10:33:22.225 | + set -o pipefail > 2018-02-09 10:33:22.225 | + '[' -f /usr/bin/systemctl -o -f > /bin/systemctl ']' > 2018-02-09 10:33:22.225 | + echo systemd > 2018-02-09 10:33:22.226 | ++ DIB_INIT_SYSTEM=systemd > 2018-02-09 10:33:22.226 | ++ export DIB_INIT_SYSTEM > > it seems somewhere in the middle something happens to systemd package >> In the meantime I'll look at how we can consume the .img file, which is >> similar to what we'd need to do for Fedora > script > diskimage-builder/diskimage_builder/elements/ubuntu/root.d/10-cache-ubuntu-tarball > contains the function get_ubuntu_tarball() which, after all checks, > does the following: > > sudo tar -C $TARGET_ROOT --numeric-owner -xzf > $DIB_IMAGE_CACHE/$BASE_IMAGE_FILE > > probably, the easiest hack around the issue is to change above to smth > like > > sudo ( > mount -o loop > tar cv  | tar xv -C $TARGET_ROOT ... > ) > > Will try this. > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mike at openstack.org Fri Feb 9 23:29:49 2018 From: mike at openstack.org (Mike Perez) Date: Sat, 10 Feb 2018 10:29:49 +1100 Subject: [Openstack-operators] Developer Mailing List Digest February 3-9th Message-ID: <20180209232949.GE14568@openstack.org> Please help shape the future of the Developer Mailing List Digest with this two question survey: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs HTML version: https://www.openstack.org/blog/?p=8287 Success Bot Says * stephenfin on #openstack-nova [0]: After 3 years and 7 (?) releases, encryption between nova's consoleproxy service and compute nodes is finally * possible ✌️ * AJaeger on #openstack-infra [1]: zuul and nodepool feature/zuulv3 branches have merged into master * ildikov on #openstack-nova [2]: OpenStack now supports to attach a Cinder volume to multiple VM instances managed by Nova. * mriedem on #openstack-nova [3]: osc-placement 1.0.0 released; you can now do things with resource providers/classes via OSC CLI now. * AJaeger on #openstack-infra [4]: All tox jobs have been converted to Zuul v3 native syntax, run-tox.sh is gone. * ttx on #openstack-dev [5]: All teams have at least one candidate for PTL for the Rocky cycle! Might be the first time. * Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-15.log.html [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-18.log.html [2] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-23.log.html [3] - http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2018-01-24.log.html [4] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-07.log.html [5] - http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-08.log.html Community Summaries =================== * Release countdown [0] * Nova placement resource provider update [1] * TC Report [2] * POST /api-sig/news [3] * Technical Committee Status Update [4] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127120.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127203.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127012.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127140.html [4] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127192.html Dublin PTG Schedule is Up ========================= PTG schedule is available [0]. A lot of rooms are available Monday/Tuesday to discuss additional topics that take half a day and can be requested [1]. For small things (90 min discussions) we can book them dyncamically during the event with the new PTG bot features. Follow the thread for updates to the schedule [2]. 
[0] - https://www.openstack.org/ptg#tab_schedule [1] - https://etherpad.openstack.org/p/PTG-Dublin-missing-topics [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#126892 Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#126892 Last Chance for PTG Dublin Tickets ================================== PTG tickets for Dublin were sold out this week, and the Foundation received many requests for more tickets. Working with the venue to accommodate the extra capacity, every additional attendee incrementally increases costs to $600. It's understood the importance of this event and the need to have key team members present, so the OpenStack Foundation has negotiated an additional 100 tickets and will partially subsidize to be at sold at $400 [0]. [0] - https://www.eventbrite.com/e/project-teams-gathering-dublin-2018-tickets-39055825024 Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127129.html New Zuul Depends-On Syntax ======================== Recently introduced url-based syntax for Depends-On: footer in your commit message: Depends-On: https://review.openstack.org/535851 Old syntax will continue to work for a while, but please begin using the new syntax. Zuul has grown the ability to talk to multiple backend systems (Gerrit, Git and plain Git so far). From a change in gerrit you could have: Depends-On: https://github.com/ikalnytskyi/sphinxcontrib-openapi/pull/17 Or from a Github pull request: Depends-On: https://review.openstack.org/536159 Tips and certain cases contained further in the full message. Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-January/126535.html Call For Mentors and Funding ============================ The Outreachy program [0] helps people of underrepresented groups get involved in free and open source software by matching interns with established mentors in the upstream community. OpenStack will be participating in Outreachy May 2018 to August 2018. Application period opens on February 12th. Interested mentors should publish their project ideas [1]. You can read more information about being a mentor [2]. Interested sponsors [3] can help provide a stipend to interns for a three month program. [0] - https://wiki.openstack.org/wiki/Outreachy [1] - https://www.outreachy.org/communities/cfp/openstack/submit-project/ [2] - https://wiki.openstack.org/wiki/Outreachy/Mentors [3] - https://www.outreachy.org/sponsor/ Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127009.html Community Goals for Rocky ========================= TC voted by not approved yet: * Remove mox [0] * Toggle the debug option at runtime [1] Comment now on the two selected goals, or the TC will approve them and they'll be discussed at the PTG. [0] - https://review.openstack.org/#/c/532361/ [1] - https://review.openstack.org/#/c/534605/ Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127017.html End of PTL Nominations ====================== Official candidate list available [0]. There are 0 projects without candidates, so the TC will not have to appoint an PTL's. Three projects will have elections: Kolla, QA and Mistral. [0] - http://governance.openstack.org/election/#Rocky-ptl-candidates Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127098.html -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From mriedemos at gmail.com Sun Feb 11 21:45:09 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Sun, 11 Feb 2018 15:45:09 -0600 Subject: [Openstack-operators] [nova] Regression bug for boot from volume with IsolatedHostsFilter Message-ID: I triaged this bug a couple of weeks ago: https://bugs.launchpad.net/nova/+bug/1746483 It looks like it's been regressed since Mitaka when that filter started using the RequestSpec object rather than legacy filter_properties dict. Looking a bit deeper though, it looks like this filter never worked for volume-backed instances. That's because this code, called from the compute API, never takes the image_id out of the volumes "volume_image_metadata": https://github.com/openstack/nova/blob/fa6c0f9cb14f1b4ce4d9b1dbacb1743173089986/nova/utils.py#L1032 So before the regression that breaks the filter, the filter just never got the image.id to validate and accepted whatever host for that instance since it didn't know the image to tell if it was isolated or not. I've got a functional recreate test for the bug and I think it's a pretty easy fix, but a question comes up about backports, which is - do we do two fixes for this bug, one to backport to stable which is just handling the missing RequestSpec.image.id attribute in the filter so the filter doesn't explode? Then we do another fix which actually pulls the image_id off the volume_image_metadata and put that properly into the RequestSpec so the filter actually _works_ with volume-backed instances? That would technically be a change in behavior for the filter, albeit likely the correct thing to do all along but we just never did it, and apparently no one ever noticed or cared (it's not a default enabled filter after all). -- Thanks, Matt From adriant at catalyst.net.nz Sun Feb 11 23:37:17 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Mon, 12 Feb 2018 12:37:17 +1300 Subject: [Openstack-operators] [publiccloud-wg][keystone][Horizon] Multi-Factor Auth in OpenStack In-Reply-To: <9bc0878f-284f-c6f0-2999-8c53dc6f183e@gmail.com> References: <5814be16-85f7-0eb9-f694-e9280b617d04@catalyst.net.nz> <9bc0878f-284f-c6f0-2999-8c53dc6f183e@gmail.com> Message-ID: <0c5b5347-bf91-e642-862e-34ffc3d91dc4@catalyst.net.nz> On 09/02/18 15:50, Lance Bragstad wrote: > On 02/08/2018 03:36 PM, Adrian Turjak wrote: >> My plan for the Rocky cycle is to work in Keystone and address the missing pieces I need to get MFA working properly throughout OpenStack in an actually useful way, and I'll provide updates for that once I have the specs ready to submit (am waiting until start of Rocky for that). The good thing, is that this current solution for MFA works, and it can be migrated from to the methods I intend to work on for Rocky. The same credential models will be used in Keystone, and I will write tools to take users with TOTP credentials and configure auth rules for them for more official MFA support in Keystone once it is useful. > Are you planning to revive the previous proposal [0]? We should have > stable/queens branch by EOW, so Rocky development will be here soon. Are > you planning on attending the PTG? It might be valuable to discuss what > you have and how we can integrate it upstream. I thought I remember the > issue being policy related (where admins were required to update user > secrets and it wasn't necessarily a self-serving API). 
Now that we're in > a better place with system-scope, we might be able to move the ball > forward a bit regarding your use case. > > [0] https://review.openstack.org/#/c/345705/ So the use case is not just self-management, that's a part of it, but one at least we've solved outside of Keystone. The bigger issue is that MFA as we currently have it in Keystone is... unfinished and very hard to consume. And no I won't be coming to the PTG. :( The multi-auth-method approach is good, as are the per user auth rules, but right now nothing is consuming it using more than one method. In fact KeystoneAuth doesn't know how to deal with it. In part that is my fault since I put my hand up to make KeystoneAuth work with Multi-method auth, but... I gave up because it got ugly fast. We could make auth methods in KeystoneAuth that are made up of multiple methods, but then you need an explicit auth module for each combination... We need to refactor that code to allow you to specify a combination and have the code underneath do the right thing. The other issue is that you always need to know ahead of time how to auth for a given user and their specific auth rules, and you can't programmatically figure that out. The missing piece is something that allows us to programmatically know what is missing when 1 out of 2+ auth rules succeeds. When a user with more than 1 auth rule attempts to auth to Keystone, if they auth with 1 rule, but need 2 (password and totp), then the auth will fail and the error will be unhelpful. Even if the error was helpful, we can't rely on parsing error messages, that's unsafe. What should happen is Keystone acknowledges they were successful with one of their configured auth rules, at which point we know this user is 'probably' who they say they are. We now pass them a Partially Authed Token, which says they've already authed with 'password', but are missing 'totp' to complete their auth. The user can now return that token, and the missing totp auth method, and get back a full token. So the first spec I intend to propose is the Partially Authed Token type. Which solves the challenge response problem we have, and lets us actually know how to proceed when auth is unfinished. Once we have that, we can update KeystoneAuth, then the CLI to support challenge response, and then Horizon as well. Then we can look at user self management of MFA. Amusingly the very original spec that brought it multi-auth methods into Keystone talked about the need for a 'half-token': https://adam.younglogic.com/2012/10/multifactor-auth-and-keystone/ https://blueprints.launchpad.net/keystone/+spec/multi-factor-authn https://review.openstack.org/#/c/21487/ But the 'half-token' was never implemented. :( The MFA method in this original email was just... replace the password auth method with one that expects an appended totp passcode. It's simple, it doesn't break anything nor expect more than one auth method, it works with Horizon and the CLI because of that, but it doesn't do real challenge response. It's a stop gap measure for us since we're on an older version of Keystone, and because the current methods are too hard for our customers to actually consume. And most importantly, I can migrate users from using it to using user auth rules, since Keystone already stores the totp credential and all I need to then do is make auth rules for users with a totp cred. Hope that explains my plan, and why I'm going to be proposing it. It's going to be a lot of work. 
:P From akalambu at cisco.com Mon Feb 12 06:24:37 2018 From: akalambu at cisco.com (Ajay Kalambur (akalambu)) Date: Mon, 12 Feb 2018 06:24:37 +0000 Subject: [Openstack-operators] [neutron][connection tracking] OVS connection tracking for a DNS VNF Message-ID: <8C84E563-9277-406A-A455-D02D3DB207C3@cisco.com> Hi Has anyone had any experience running a DNS VNF on Openstack. Typically for these VNFs there is a really huge volume of DNS lookups and this translates to entries for udp in the conntrack table Sometimes under load this can lead to nf_conntrack table being FULL The default max on most systems for conntrack is 65536. Some forums suggest increasing this to a very large value to handle large DNS scale. Question I have is there a way to disable OVS connection tracking on a per port basis in neutron. Also folks running this in production do you get this working by tweaking ip_conntrack_max and udp timeout? Ajay -------------- next part -------------- An HTML attachment was scrubbed... URL: From zioproto at gmail.com Mon Feb 12 09:28:45 2018 From: zioproto at gmail.com (Saverio Proto) Date: Mon, 12 Feb 2018 10:28:45 +0100 Subject: [Openstack-operators] AggregateMultiTenancyIsolation with multiple (many) projects In-Reply-To: References: Message-ID: > If you’re willing to, I could share with you a way to get a FrankeinCloud > using a Docker method with kolla to get a pike/queens/whatever cloud at the > same time that your Ocata one. I am interested in knowing more about this. If you have any link / blog post please share them :) thank you Saverio From thingee at gmail.com Mon Feb 12 15:55:14 2018 From: thingee at gmail.com (Mike Perez) Date: Tue, 13 Feb 2018 02:55:14 +1100 Subject: [Openstack-operators] Feedback on the Dev Digest Message-ID: <20180212155514.GH14568@gmail.com> Hey all, I setup a two question survey asking about your frequency with the Dev Digest, and how it can be improved: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback In case you're not familiar, the Dev Digest tries to provide summaries of the OpenStack Dev mailing list, for people who might not have time to read every message and thread on the list. The hope is for people to be informed on discussions they would've otherwise missed, and be able to get caught up to chime in if necessary. This is a community effort worked on via etherpad: https://etherpad.openstack.org/p/devdigest The content on Fridays is posted to the Dev list in plaintext, LWN, Twitter and the OpenStack blog: https://www.openstack.org/blog/ Thank you! -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From emccormick at cirrusseven.com Tue Feb 13 15:36:54 2018 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 13 Feb 2018 10:36:54 -0500 Subject: [Openstack-operators] Tokyo Ops Meetup - Call for Moderators - Last Call for Topics Message-ID: Hello all, TL;DR - Those going to the Tokyo Ops meetup, please go volunteer to moderate sessions and offer any last minute topic ideas at https://etherpad.openstack.org/p/TYO-ops-meetup-2018 ----- The spring Ops meetup in Tokyo is rapidly approaching. 
Details on the event can be found here: https://www.okura-nikko.com/japan/tokyo/hotel-jal-city-tamachi-tokyo/ Registration can be done here: https://www.eventbrite.com/e/openstack-ops-meetup-tokyo-tickets-39089912982 Most importantly, we need to press forward with solidifying the content. The planning document for the schedule can be found here: https://etherpad.openstack.org/p/TYO-ops-meetup-2018 We have many great session ideas, but are very short on moderators presently. If you plan to attend and are willing to lead a session, please place your name on the list of moderators starting at Line 149. If you have a specific session you would like to handle, feel free to put your name next to it on the list starting at line 22. We also have room for a few more sessions, especially in the Enterprise track, so if you would like to propose something, feel free to add it to the list. We will probably close out new session topics and start setting the schedule next Tuesday.. Looking forward to seeing lots of you there! Cheers, Erik From mihalis68 at gmail.com Tue Feb 13 16:00:09 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 13 Feb 2018 11:00:09 -0500 Subject: [Openstack-operators] OpenStack Operators - Ops Meetups team meeting 2018-2-13 Message-ID: Here are the meeting minutes and log for today's ops meetups team meeting on IRC Meeting ended Tue Feb 13 15:45:58 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) 10:46 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-13-15.01.html 10:46 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-13-15.01.txt 10:46 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-13-15.01.log.html One interesting document I hadn't see is this evolving summary discussion of release cadence https://docs.google.com/document/d/1wu_tKwogCvirDV3kd4YkWa4bWb_UMT4j4llle6j5MRg Ops Meetups team meetings are currently held weekly at 10am EST in #openstack-operators - see you there Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Tue Feb 13 21:31:31 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 13 Feb 2018 21:31:31 +0000 Subject: [Openstack-operators] [scientific] Meeting Wednesday: Ironic for infrastructure management, Kubernetes-as-a-Service Message-ID: Hi All - We have a Scientific SIG IRC meeting on Wednesday at 1100 UTC in channel #openstack-meeting. Everyone is welcome. As well as some up-coming conferences, we also have a discussion on recent CERN & SKA projects using Ironic for bare metal infrastructure management, and an update from Saverio at SWITCH on their Kubernetes-as-a-Service. The agenda details are here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_14th_2018 Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... URL: From tobias at citynetwork.se Wed Feb 14 08:33:29 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Wed, 14 Feb 2018 09:33:29 +0100 Subject: [Openstack-operators] [publiccloud-wg] Reminder for todays meeting Message-ID: <4fab430c-0d04-8862-132a-6637b555a021@citynetwork.se> Hi all, Time again for a meeting for the Public Cloud WG - today at 1400 UTC in #openstack-meeting-3 Agenda and etherpad at: https://etherpad.openstack.org/p/publiccloud-wg See you later! 
Tobias Rydberg -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From gr at ham.ie Wed Feb 14 14:57:29 2018 From: gr at ham.ie (Graham Hayes) Date: Wed, 14 Feb 2018 14:57:29 +0000 Subject: [Openstack-operators] [designate] V1 API is now fully removed Message-ID: <750a3ea9-87ea-cb57-b8f6-90ca50e55952@ham.ie> I saw [1] and realised that we should be more explicit about the upcoming release. As highlighted in [2], this email is a reminder that the long awaited removal of the DNS V1 API is now complete with [3]. This means from Queens onwards it will not be possible to re-enable the V1 API (we have had it off by default for a long period of time). Horizon, Heat and the OpenStack CLI all have v2 usable resources, and have been deprecating the v1 resources for some time. Any deployment tooling, custom dashboards, and internal tools should all ensure they do not require the v1 API, and do not try to enable it. - Graham 1 - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127366.html 2 - https://docs.openstack.org/releasenotes/designate/queens.html#critical-issues 3 - https://github.com/openstack/designate/commit/c318106c01b2b3976049f2c3ba0c8502a874242b -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: OpenPGP digital signature URL: From mriedemos at gmail.com Wed Feb 14 20:51:57 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Wed, 14 Feb 2018 14:51:57 -0600 Subject: [Openstack-operators] [openstack-dev] [nova] heads up to users of Aggregate[Core|Ram|Disk]Filter: behavior change in >= Ocata In-Reply-To: <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> References: <1253b153-9e7e-6dbe-1836-5a9a2f059bdb@gmail.com> <2ce313c6-90ff-9db9-ab0f-4b573c0f472b@gmail.com> Message-ID: <55df94ae-c2a5-23b5-d330-93c50e0211b7@gmail.com> On 2/5/2018 9:00 PM, Matt Riedemann wrote: > Given the size and detail of this thread, I've tried to summarize the > problems and possible solutions/workarounds in this etherpad: > > https://etherpad.openstack.org/p/nova-aggregate-filter-allocation-ratio-snafu > > > For those working on this, please check that what I have written down is > correct and then we can try to make some kind of plan for resolving this. Jay has a spec up for review now: https://review.openstack.org/#/c/544683/ It would be great to get operator feedback on that. -- Thanks, Matt From melwittt at gmail.com Thu Feb 15 02:37:04 2018 From: melwittt at gmail.com (melanie witt) Date: Wed, 14 Feb 2018 18:37:04 -0800 Subject: [Openstack-operators] [openstack-dev] [nova] Rocky PTG early planning In-Reply-To: References: Message-ID: <4A2CD313-F89B-4038-80E1-12CCFB088896@gmail.com> > On Jan 8, 2018, at 10:33, Matt Riedemann wrote: > > As the Queens release winds to a close, I've started thinking about topics for Rocky that can be discussed at the PTG. > > I've created an etherpad [1] for just throwing various topics in there, completely free-form at this point; just remember to add your name next to any topic you add. 
> > [1] https://etherpad.openstack.org/p/nova-ptg-rocky We have the PTG coming up soon in just 12 days and I wanted to remind everybody to please add your discussion topics to the etherpad ^. We’ll be using the etherpad as our agenda for Wed-Fri. If you’d like your topic discussed but won’t be able to attend the PTG in person, please make a note about it next to your topic and name when you add it. And provide enough context/detail so we can discuss your topic and add notes/action items/next steps to the etherpad for your review. Then, we can follow up asynchronously on the mailing list and/or IRC after the PTG. Best, -melanie From josef.zelenka at cloudevelops.com Fri Feb 16 08:56:28 2018 From: josef.zelenka at cloudevelops.com (Josef Zelenka) Date: Fri, 16 Feb 2018 09:56:28 +0100 Subject: [Openstack-operators] CInder - migration to LVM from RBD Message-ID: <2e1101c5-6495-ccd9-a063-357a878cccbf@cloudevelops.com> Hello everyone, i'm currently trying to figure out how to migrate my volumes from my ceph backend. I'm currently using Ceph Luminous, but so far it has proven not really prod ready for us, so we want to downgrade. However, our client already has some of his VMs on this ceph cluster and downgrading means backing them up somewhere. THe best course of action for us would be live migration to local lvm storage, but it isn't possible via the standard cinder migrate tool - i always get an error, nothing happens. We are running openstack pike. Does anyone have any procedures/ideas for making this work? Our last resort is rbd exporting the volumes to another cluster and then importing them back after the downgrade, but we'd prefer to do a live migrate. Apparently this has worked in the past. Thanks Josef Zelenka From josef.zelenka at cloudevelops.com Fri Feb 16 09:23:43 2018 From: josef.zelenka at cloudevelops.com (Josef Zelenka) Date: Fri, 16 Feb 2018 10:23:43 +0100 Subject: [Openstack-operators] CInder - migration to LVM from RBD In-Reply-To: References: <2e1101c5-6495-ccd9-a063-357a878cccbf@cloudevelops.com> Message-ID: Hi, we've had several issues, mainly very slow recovery(on clusters containing rbd volumes, cephfs, radosgw pools) - sometimes even ten times slower. Also there have been issues with performance when the cluster was over 65%~ full, there has been a significant slowdown. ANother issue we've battled was stuck i/o in the cluster, where a VM had an operation stuck and generated 60k iops, only thing that helped was deleting the volume. I'm not saying these issues can't be solved, but for the time being, we've decided to downgrade :) Josef On 16/02/18 10:14, Sean Redmond wrote: > Hi Josef, > > I can't help you with the cinder type migrations, however I am > interested to know why you find ceph Luminous with RBD is not > production ready in your case. > > I have a cluster here supporting over 1k instances all backed by RBD > and find it very reliable in production. > > Thanks > Sean Redmond > > On Fri, Feb 16, 2018 at 8:56 AM, Josef Zelenka > > wrote: > > Hello everyone, > > i'm currently trying to figure out how to migrate my volumes from > my ceph backend. I'm currently using Ceph Luminous, but so far it > has proven not really prod ready for us, so we want to downgrade. > However, our client already has some of his VMs on this ceph > cluster and downgrading means backing them up somewhere. THe best > course of action for us would be live migration to local lvm > storage, but it isn't possible via the standard cinder migrate > tool - i always get an error, nothing happens. 
We are running > openstack pike. Does anyone have any procedures/ideas for making > this work? Our last resort is rbd exporting the volumes to another > cluster and then importing them back after the downgrade, but we'd > prefer to do a live migrate. Apparently this has worked in the > past. Thanks > > Josef Zelenka > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thingee at gmail.com Fri Feb 16 21:06:26 2018 From: thingee at gmail.com (Mike Perez) Date: Fri, 16 Feb 2018 13:06:26 -0800 Subject: [Openstack-operators] Developer Mailing List Digest February 10-16th Message-ID: <20180216210600.GJ14568@gmail.com> HTML version: https://www.openstack.org/blog/?p=8321 Please help shape the future of the Developer Mailing List Digest with this two question survey: https://openstackfoundation.formstack.com/forms/openstack_developer_digest_feedback Contribute to the Dev Digest by summarizing OpenStack Dev and SIG List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Success Bot Says ================ None for this week. Tell us yours in OpenStack IRC channels using the command "#success " More: https://wiki.openstack.org/wiki/Successes Thanks Bot Says =============== * diablo_rojo on #openstack-101 [0]: spotz for watching the #openstack-101 channel and helping to point newcomers to good resources to get them started :) * fungi on #openstack-infra [1]: dmsimard and mnaser for getting deep-linking in ara working for firefox * fungi on #openstack-infra [2]: to Matt Van Winkle for volunteering to act as internal advocate at Rackspace for our control plane account there! * AJaeger on #openstack-doc [3]: corvus for deleting /draft content * AJaeger on #openstack-infra [4]: cmurphy for your investigation * AJaeger on #openstack-infra [5]: to mordred for laying wonderful groundwork with the tox_siblings work. * smcginnis on #openstack-infra [6]: fungi jeblair mordred AJaeger and other infra-team members for clearing up release job issues * fungi on #openstack-infra [7]: zuul v3 for having such detailed configuration syntax error reporting. * fungi on #openstack-dev [8]: diablo_rojo and persia for smooth but "rocky" ptl elections! 
* Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks [0] - http://eavesdrop.openstack.org/irclogs/%23openstack-101/%23openstack-101.2017-12-13.log.html [1] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-20.log.html [2] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-09.log.html [3] - http://eavesdrop.openstack.org/irclogs/%23openstack-doc/%23openstack-doc.2018-01-22.log.html [4] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-01-30.log.html [5] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-03.log.html [6] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2017-12-11.log.html [7] - http://eavesdrop.openstack.org/irclogs/%23openstack-infra/%23openstack-infra.2018-02-14.log.html [8] - http://eavesdrop.openstack.org/irclogs/%23openstack-dev/%23openstack-dev.2018-02-15.log.html Community Summaries =================== Nova Placement update [0] Release Countdown [1] TC Report [2] Technical Committee Status update [3] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127473.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127465.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127324.html [3] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127467.html PTG Bot HOWTO for Dublin ======================== The third PTG is an event where topics of discussion are loosely scheduled in tracks to maximize the attendee productivity. To keep track of what's happening currently we have an event schedule page [0]. Below are some helpful discussions in using PTG bot: Track Leads ----------- Track leads will be able issue various commands [1] in irc channel #openstack-ptg: * #TRACK now - example: #swift now brainstorming improvements to the ring. * Cross project interactions #TRACK now : - #nova now discussing #cinder interactions * What's next #TRACK next : - #api-sig next at 2pm we'll be discussing pagination woes * Clear all now and next entries for a track #TRACK clean: - #ironic clean Booking Reservable Rooms ------------------------ Reservable rooms and what's being discussed works the same with it showing on the event schedule page [0]. Different set of commands: * Get the slot codes with the book command #TRACK book: * Book a room with #TRACK book - example: #relmgt book Coiste Bainisti-MonP2 Any track can book additional space. These slots are 1 hour and 45 minutes long. You can ask ttx, diablo_rojo or #openstack-infra to add a track that's missing. Keep in mind various teams will be soley relying on this for space at the PTG. Additional commands can be found in the PTG bot README [1]. [0] - http://ptg.openstack.org/ptg.html [1] - https://git.openstack.org/cgit/openstack/ptgbot/tree/README.rst Full messages: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127413.html and http://lists.openstack.org/pipermail/openstack-dev/2018-February/127414.html PTL Election Results and Conclusions ==================================== PTL election is over and the results are in [0]! Congrats to returning and new PTLs! There were three elections that took place: * Kolla [1] * Mistral [2] * Quality Assurance [3] On the statistics side, we renewed 17 of the 64 PTLs, so around 27%. 
Our usual renewal rate is more around 35%, but we did renew more at the last elections (40%) so this is likely why we didn't renew as much as usual this time. Much thanks to our election officials for carrying out this important responsibility in our community! [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127404.html [1] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab [2] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_74983fd83cf5adab [3] - https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_274f37d8e5497358 Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127404 Election Process Tweaks ======================= Discussions have started on ways to improve our election process. The scripts needed for building the governance documentation, which rely on gerrit lookup functions, have become brittle. Election officials currently have to make changes to an exception file [0] when the email addresses on foundation accounts don't match gerrit. Discussed improvements include: * Uncouple TC and PTL election processes. * Make TC and PTL validation functions separate. * Change the how-to-submit-candidacy directions to require candidates' email addresses to match their gerrit and foundation accounts. Comments, concerns and better ideas are welcome. The plan is to schedule time at the PTG to start hacking on some of those items so feedback before then would be appreciated by your election officials! [0] - http://git.openstack.org/cgit/openstack/election/tree/exceptions.txt Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127435 -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From laszlo.budai at gmail.com Sun Feb 18 19:26:16 2018 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Sun, 18 Feb 2018 21:26:16 +0200 Subject: [Openstack-operators] monitoring openstack processes Message-ID: <50835d48-03f0-8bb4-38ce-592a37ad365e@gmail.com> Dear all, recently we had an issue where the novncproxy on one of our controller nodes stopped responding. The TCP connection was established, but no response came from the process. What are your recommendations/experiences regarding the monitoring of the OpenStack processes? Kind regards, Laszlo From lyarwood at redhat.com Mon Feb 19 10:26:08 2018 From: lyarwood at redhat.com (Lee Yarwood) Date: Mon, 19 Feb 2018 10:26:08 +0000 Subject: [Openstack-operators] [ffu][upgrades] Dublin PTG room and agenda Message-ID: <20180219102608.yn63ja4o6hfchbyg@lyarwood.usersys.redhat.com> Hello all, A very late mail to highlight that there will once again be a 1 day track/room dedicated to talking about Fast-forward upgrades at the upcoming PTG in Dublin. The etherpad for which is listed below: https://etherpad.openstack.org/p/ffu-ptg-rocky Please feel free to add items to the pad, I'd really like to see some concrete action items finally come from these discussions ahead of R. Thanks in advance and see you in Dublin! -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 455 bytes Desc: not available URL: From mrhillsman at gmail.com Mon Feb 19 17:31:39 2018 From: mrhillsman at gmail.com (Melvin Hillsman) Date: Mon, 19 Feb 2018 11:31:39 -0600 Subject: [Openstack-operators] User Committee Elections Message-ID: Hi everyone, We had to push the voting back a week if you have been keeping up with the UC elections[0]. That being said, election officials have sent out the poll and so voting is now open! Be sure to check out the candidates - https://goo.gl/x183he - and get your vote in before the poll closes. [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Feb 19 18:38:13 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 19 Feb 2018 18:38:13 +0000 Subject: [Openstack-operators] User Committee Elections In-Reply-To: References: Message-ID: I saw election email with the pointer to votes. See no reason for stopping it now. But extending vote for 1 more week makes sense. Thanks, Arkady From: Melvin Hillsman [mailto:mrhillsman at gmail.com] Sent: Monday, February 19, 2018 11:32 AM To: user-committee ; OpenStack Mailing List ; OpenStack Operators ; OpenStack Dev ; community at lists.openstack.org Subject: [Openstack-operators] User Committee Elections Hi everyone, We had to push the voting back a week if you have been keeping up with the UC elections[0]. That being said, election officials have sent out the poll and so voting is now open! Be sure to check out the candidates - https://goo.gl/x183he - and get your vote in before the poll closes. [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html -- Kind regards, Melvin Hillsman mrhillsman at gmail.com mobile: (832) 264-2646 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mike at openstack.org Mon Feb 19 20:15:25 2018 From: mike at openstack.org (Mike Perez) Date: Mon, 19 Feb 2018 12:15:25 -0800 Subject: [Openstack-operators] [all] Thanks Bot - For a happier open source community, give recognition Message-ID: <20180219201525.GA22198@openstack.org> Every open source community is made up of real people with real feelings. Many open source contributors are working in their free time to provide essential software that we use daily. Sometimes praise is lost in the feedback of bugs or missing features. Focusing on too much negative feedback can lead contributors to frustration and burnout. However you end up contributing to OpenStack, or any open source project, I believe that what gets people excited about working with a community is some form of recognition. My first answer to people coming into the OpenStack community is to join our Project Team Gathering event. Significant changes are discussed here to understand the technical details to carry out the work in the new release. You should seek out people who are owners of these changes and volunteer to work on a portion of the work. Not only are these people interested in your success by having you take on some of the work they have invested in, but you will be doing work that interests the entire team. You’ll finish the improvements and be known as the person in the project with the expertise in that area. You’ll receive some recognition from the team and the community using your software. 
And just like that, you’re hooked because you know your work is making a difference. Maybe you’ll improve that area of the project more, venture onto other parts of the project, or even expand to other open source projects. If you work in the OpenStack community, there’s also another way you can give and get recognition. In OpenStack IRC channels, you can thank members of the community publicly with the following command: #thanks for being a swell person in that heated discussion! To be clear, is replaced with the person you want to give thanks. Where does this information go? Just like the Success Bot in which we can share successes as a community, Thanks Bot will post them to the OpenStack wiki. They will also be featured in the OpenStack Developer Digest. https://wiki.openstack.org/wiki/Thanks In developing this feature, I’ve had help and feedback from various members of the community. You can see my history of thanking people along the way, too. At the next OpenStack event, you’re still welcome to buy a tasty beverage for someone to say thanks. But why not give them recognition now too and let them know how much they’re appreciated in the community? -- Mike Perez (thingee) -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From ebiibe82 at gmail.com Tue Feb 20 11:13:45 2018 From: ebiibe82 at gmail.com (Amit Kumar) Date: Tue, 20 Feb 2018 16:43:45 +0530 Subject: [Openstack-operators] [openstack][openstack-nova][openstack-operators] Query regarding LXC instantiation using nova Message-ID: Hello, I have a running OpenStack Ocata setup on which I am able to launch VMs. But I want to move to LXC instantiation instead of VMs. So, for this, I installed nova-compute-lxd on my compute node (Ubuntu 16.04). */etc/nova/nova-compute.conf* on my compute nodes was changed to contain the following values for *compute_driver* and* virt_type*. *[DEFAULT]* *compute_driver = lxd.LXDDriver* *[libvirt]* *virt_type = lxc* After this, I restarted the nova-compute service and launched an instance, launch failed after some time (4-5 mins remain in spawning state) and gives the following error: [Error: No valid host was found. There are not enough hosts available.]. Detailed nova-compute logs are attached with this e-mail. Could you please guide what else is required to launch container on OpenStack setup? What other configurations will I need to configure LXD and my nova user to see the LXD daemon. Regards, Amit -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nova-compute.log Type: application/octet-stream Size: 17730 bytes Desc: not available URL: From mriedemos at gmail.com Tue Feb 20 14:16:04 2018 From: mriedemos at gmail.com (Matt Riedemann) Date: Tue, 20 Feb 2018 08:16:04 -0600 Subject: [Openstack-operators] [openstack][openstack-nova][openstack-operators] Query regarding LXC instantiation using nova In-Reply-To: References: Message-ID: On 2/20/2018 5:13 AM, Amit Kumar wrote: > Hello, > > I have a running OpenStack Ocata setup on which I am able to launch VMs. > But I want to move to LXC instantiation instead of VMs. So, for this, I > installed nova-compute-lxd on my compute node (Ubuntu 16.04). > */etc/nova/nova-compute.conf* on my compute nodes was changed to contain > the following values for /compute_driver/ and/virt_type/. 
> / > / > /[DEFAULT]/ > /compute_driver = lxd.LXDDriver/ > /[libvirt]/ > /virt_type = lxc/ > / > / > After this, I restarted the nova-compute service and launched an > instance, launch failed after some time (4-5 mins remain in spawning > state) and gives the following error: > [Error: No valid host was found. There are not enough hosts available.]. > Detailed nova-compute logs are attached with this e-mail. > > Could you please guide what else is required to launch container on > OpenStack setup? What other configurations will I need to configure LXD > and my nova user to see the LXD daemon. > > Regards, > Amit > > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators > The nova libvirt driver configured to use lxc is not the same as the nova-lxd driver, it's a completely different virt driver, not using libvirt. https://linuxcontainers.org/lxd/getting-started-openstack/ https://docs.openstack.org/charm-guide/latest/openstack-on-lxd.html -- Thanks, Matt From james.page at ubuntu.com Tue Feb 20 14:43:12 2018 From: james.page at ubuntu.com (James Page) Date: Tue, 20 Feb 2018 14:43:12 +0000 Subject: [Openstack-operators] [nova] [nova-lxd] Query regarding LXC instantiation using nova In-Reply-To: References: Message-ID: Hi Amit (re-titled thread with scoped topics) As Matt has already referenced, [0] is a good starting place for using the nova-lxd driver. On Tue, 20 Feb 2018 at 11:13 Amit Kumar wrote: > Hello, > > I have a running OpenStack Ocata setup on which I am able to launch VMs. > But I want to move to LXC instantiation instead of VMs. So, for this, I > installed nova-compute-lxd on my compute node (Ubuntu 16.04). > */etc/nova/nova-compute.conf* on my compute nodes was changed to contain > the following values for *compute_driver* and* virt_type*. > > *[DEFAULT]* > *compute_driver = lxd.LXDDriver* > You only need the above part for nova-lxd (the below snippet is for the libvirt/lxc driver) > *[libvirt]* > *virt_type = lxc* > > After this, I restarted the nova-compute service and launched an instance, > launch failed after some time (4-5 mins remain in spawning state) and gives > the following error: > [Error: No valid host was found. There are not enough hosts available.]. Detailed > nova-compute logs are attached with this e-mail. > Looking at your logs, it would appear a VIF plugging timeout occurred; was your cloud functional with Libvirt/KVM before you made the switch to using nova-lxd? The neutron log files would be a good place to look so see what went wrong. Regards James [0] https://linuxcontainers.org/lxd/getting-started-openstack/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Tue Feb 20 16:02:41 2018 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 20 Feb 2018 11:02:41 -0500 Subject: [Openstack-operators] minutes from today's ops meetups team meeting Message-ID: Hello OpenStack Operators! The OpenStack Operators "ops meetups team" meeting was held today on IRC (#openstack-operators). Our "charter" is here https://wiki.openstack.org/wiki/Ops_Meetups_Team Minutes: Meeting ended Tue Feb 20 15:42:33 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . 
(v 0.1.4) 10:42 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-20-15.00.html 10:42 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-20-15.00.txt 10:42 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2018/ops_meetup_team.2018-02-20-15.00.log.html Next week some of the team will be at OpenStack PTG, but we will still try to have a meeting, since important planning for the Tokyo meet up is still going on. Current meeting time is 10am EST (15:00 UTC, currently) every week. Cheers Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openstack.org Tue Feb 20 16:47:33 2018 From: allison at openstack.org (Allison Price) Date: Tue, 20 Feb 2018 10:47:33 -0600 Subject: [Openstack-operators] Community Voting NOW OPEN - OpenStack Summit Vancouver 2018 Message-ID: Hi everyone, Session voting is now open for the May 2018 OpenStack Summit in Vancouver! VOTE HERE Hurry, voting closes Sunday, February 25 at 11:59pm Pacific Time (Monday, February 26 at 7:59 UTC). The Programming Committees will ultimately determine the final schedule. Community votes are meant to help inform the decision, but are not considered to be the deciding factor. The Programming Committee members exercise judgment in their area of expertise and help ensure diversity. View full details of the session selection process here . Continue to visit https://www.openstack.org/summit/vancouver-2018 for all Summit-related information. REGISTER Register HERE before prices increase in early April! VISA APPLICATION PROCESS More information about the visa process can be found HERE . TRAVEL SUPPORT PROGRAM March 22 is the last day to submit applications. Please submit your applications HERE by 11:59pm Pacific Time (March 23 at 6:59am UTC). If you have any questions, please email summit at openstack.org . Cheers, Allison Allison Price OpenStack Foundation allison at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From stig.openstack at telfer.org Tue Feb 20 17:45:31 2018 From: stig.openstack at telfer.org (Stig Telfer) Date: Tue, 20 Feb 2018 17:45:31 +0000 Subject: [Openstack-operators] [scientific] IRC meeting - Managing dedicated capacity, PTG topics and beyond Message-ID: <1F260C7A-7B2C-4742-B4BB-865AF7F9FA94@telfer.org> Hi All - We have a Scientific SIG IRC meeting today in channel #openstack-meeting at 2100 UTC. Everyone is welcome. This week we are gathering details on best practice for managing dedicated capacity in scientific OpenStack private clouds, eg for resources funded by specific projects or partners. Also we are gathering presentation picks for community voting for Vancouver, and preparations for the PTG in Dublin next week. The full agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_20th_2018 Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bitskrieg at bitskrieg.net Wed Feb 21 03:27:06 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Tue, 20 Feb 2018 22:27:06 -0500 Subject: [Openstack-operators] [keystone] Timeouts reading response headers in keystone-wsgi-public and keystone-wsgi-admin In-Reply-To: <1F260C7A-7B2C-4742-B4BB-865AF7F9FA94@telfer.org> References: <1F260C7A-7B2C-4742-B4BB-865AF7F9FA94@telfer.org> Message-ID: <31695ccdfb12b40da56b78fb2fe17f24@bitskrieg.net> All, Currently experiencing a sporadic issue with our keystone endpoints. Throughout the day, keystone will just stop responding on both the admin and public endpoints, which will cause all services to hang. Restarting apache2 fixes the issue for some amount of time, but it inevitably appears again later on. Here is what I am seeing: keystone: /var/log/apache2/keystone.log 2018-02-20 21:50:38.830302 Timeout when reading response headers from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin 2018-02-20 21:50:50.799587 Timeout when reading response headers from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin 2018-02-20 21:51:02.857266 Timeout when reading response headers from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin 2018-02-20 21:51:02.879630 mod_wsgi (pid=1221): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-admin'. 2018-02-20 21:51:02.879796 IOError: failed to write data 2018-02-20 21:51:07.005702 mod_wsgi (pid=1220): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-admin'. horizon: /var/log/apache2/error.log [Tue Feb 20 21:47:02.582511 2018] [wsgi:error] [pid 1227:tid 140591048296192] [client 10.10.5.200:57462] Timeout when reading response headers from daemon process 'horizon': /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi, referer: https://vta.cybbh.space/horizon/project/instances/900e9d57-752d-488c-8dba-ffc098e1a51a/ [Tue Feb 20 21:48:03.962589 2018] [wsgi:error] [pid 1225:tid 140591249823488] ERROR openstack_auth.user Unable to retrieve project list. 
[Tue Feb 20 21:48:03.962646 2018] [wsgi:error] [pid 1225:tid 140591249823488] Traceback (most recent call last): [Tue Feb 20 21:48:03.962656 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/openstack_auth/user.py", line 350, in authorized_tenants [Tue Feb 20 21:48:03.962665 2018] [wsgi:error] [pid 1225:tid 140591249823488] is_federated=self.is_federated) [Tue Feb 20 21:48:03.962673 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/openstack_auth/utils.py", line 372, in get_project_list [Tue Feb 20 21:48:03.962734 2018] [wsgi:error] [pid 1225:tid 140591249823488] projects = client.projects.list(user=kwargs.get('user_id')) [Tue Feb 20 21:48:03.962744 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner [Tue Feb 20 21:48:03.962752 2018] [wsgi:error] [pid 1225:tid 140591249823488] return wrapped(*args, **kwargs) [Tue Feb 20 21:48:03.962759 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneclient/v3/projects.py", line 119, in list [Tue Feb 20 21:48:03.962767 2018] [wsgi:error] [pid 1225:tid 140591249823488] **kwargs) [Tue Feb 20 21:48:03.962774 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 75, in func [Tue Feb 20 21:48:03.962782 2018] [wsgi:error] [pid 1225:tid 140591249823488] return f(*args, **new_kwargs) [Tue Feb 20 21:48:03.962789 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 390, in list [Tue Feb 20 21:48:03.962796 2018] [wsgi:error] [pid 1225:tid 140591249823488] self.collection_key) [Tue Feb 20 21:48:03.962803 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 125, in _list [Tue Feb 20 21:48:03.962811 2018] [wsgi:error] [pid 1225:tid 140591249823488] resp, body = self.client.get(url, **kwargs) [Tue Feb 20 21:48:03.962818 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, in get [Tue Feb 20 21:48:03.962826 2018] [wsgi:error] [pid 1225:tid 140591249823488] return self.request(url, 'GET', **kwargs) [Tue Feb 20 21:48:03.962833 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 447, in request [Tue Feb 20 21:48:03.962841 2018] [wsgi:error] [pid 1225:tid 140591249823488] resp = super(LegacyJsonAdapter, self).request(*args, **kwargs) [Tue Feb 20 21:48:03.962848 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 192, in request [Tue Feb 20 21:48:03.962855 2018] [wsgi:error] [pid 1225:tid 140591249823488] return self.session.request(url, method, **kwargs) [Tue Feb 20 21:48:03.962863 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, in inner [Tue Feb 20 21:48:03.962870 2018] [wsgi:error] [pid 1225:tid 140591249823488] return wrapped(*args, **kwargs) [Tue Feb 20 21:48:03.962877 2018] [wsgi:error] [pid 1225:tid 140591249823488] File "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 703, in request [Tue Feb 20 21:48:03.962885 2018] [wsgi:error] [pid 1225:tid 140591249823488] resp = send(**kwargs) [Tue Feb 20 21:48:03.962892 2018] [wsgi:error] [pid 1225:tid 140591249823488] File 
"/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 777, in _send_request [Tue Feb 20 21:48:03.962899 2018] [wsgi:error] [pid 1225:tid 140591249823488] raise exceptions.ConnectFailure(msg) [Tue Feb 20 21:48:03.962907 2018] [wsgi:error] [pid 1225:tid 140591249823488] ConnectFailure: Unable to establish connection to https://*******:5000/v3/users/7e68b998ee1ec26139d3482818c9643d1ce3b5aff532c865cff65e1c9fe01306/projects?: ('Connection aborted.', BadStatusLine("''",)) I get the same behavior regardless of service and regardless of whether or not I use the CLI or Horizon. All signs point to keystone being the culprit. I have adjusted my /etc/apache2/sites-available/keystone.conf: WSGIDaemonProcess keystone-public processes=8 threads=4 user=keystone group=keystone display-name=%{GROUP} WSGIDaemonProcess keystone-admin processes=8 threads=4 user=keystone group=keystone display-name=%{GROUP} And ensured that WSGIApplicationGroup %{GLOBAL} is present. haproxy is sitting in between keystone and all other services, and is configured as follows: defaults log global maxconn 16384 option redispatch retries 3 timeout http-request 30s timeout queue 1m timeout connect 30s timeout client 2m timeout server 2m timeout check 10s ... listen keystone_admin_cluster bind 10.10.5.200:35357 balance source option tcpka option httpchk option tcplog server keystone-0 10.10.5.120:35357 check inter 2000 rise 2 fall 5 server keystone-1 10.10.5.121:35357 check inter 2000 rise 2 fall 5 listen keystone_public_internal_cluster bind 10.50.10.0:5000 ssl crt /etc/letsencrypt/live/*****/master.pem bind 10.10.5.200:5000 balance roundrobin option tcpka option httpchk option tcplog server keystone-0 10.10.5.120:5000 check inter 2000 rise 2 fall 5 server keystone-1 10.10.5.121:5000 check inter 2000 rise 2 fall 5 ... Any ideas on where else I should look? Thanks in advance, --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net From jon at csail.mit.edu Wed Feb 21 17:22:06 2018 From: jon at csail.mit.edu (Jonathan D. Proulx) Date: Wed, 21 Feb 2018 12:22:06 -0500 Subject: [Openstack-operators] [neutron][ovs]Leaking flow? Message-ID: <20180221172206.GC4593@csail.mit.edu> Hi All, Running Mitaka on Ubuntu 16.04 so perhaps this is an ancient bug, but... I have a project with it's own vlan-provider network that spans their legacy physical hosts and cloud systems (probably not relevant but it is weird). They noticed on of their VMs could over hear one unicast flow between two other VMs on this network. Looking a bit deeper they had 7 VMs on this particular hypervisor. One is a participant in the flow in question, but all could see that flow. The "source" system is high traffic with a lot of flows but this one and only this one seems to be "leaky". any pointers to what maybe going on here? Thanks, -Jon From blair.bethwaite at gmail.com Thu Feb 22 11:53:43 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 22 Feb 2018 22:53:43 +1100 Subject: [Openstack-operators] big windows guests on skylake server cpus with libvirt+kvm Message-ID: Hi all, Has anyone else tried this combination? We've set up some new computes with dual Xeon Gold 6150s (18c/36t), so 72 logical cores with hyperthreading. We're trying to launch a Windows Server 2012 R2 guest (hyperv enlightenments enabled via image properties os_type=windows, and virtio 141 drivers, built with CloudBase's scripts (thanks guys!)) with 72 vCPUs (host-passthrough) and 340GB RAM. 
On qemu 2.5 from cloudarchive Newton the guest sits in a boot loop, with qemu 2.10 from Pike we hit the "Windows failed to start" screen (status: 0xc0000001). The interesting thing is that smaller numbers of vCPUs, e.g. splitting the host in half 2x 36 vCPU guests, seems to be working fine. We've also set the guest CPU topology so that Windoze is seeing the host topology, but still no joy. Looking for some secret sauce we can poor on this? -- Cheers, ~Blairo From blair.bethwaite at gmail.com Thu Feb 22 12:15:43 2018 From: blair.bethwaite at gmail.com (Blair Bethwaite) Date: Thu, 22 Feb 2018 23:15:43 +1100 Subject: [Openstack-operators] big windows guests on skylake server cpus with libvirt+kvm In-Reply-To: References: Message-ID: Finally got my G-fu right, looks like this is the issue: https://lists.gnu.org/archive/html/qemu-devel/2017-02/msg03409.html Might have to live with 64 vCPUs for now (irregardless of topology it seems). On 22 February 2018 at 22:53, Blair Bethwaite wrote: > Hi all, > > Has anyone else tried this combination? > > We've set up some new computes with dual Xeon Gold 6150s (18c/36t), so > 72 logical cores with hyperthreading. We're trying to launch a Windows > Server 2012 R2 guest (hyperv enlightenments enabled via image > properties os_type=windows, and virtio 141 drivers, built with > CloudBase's scripts (thanks guys!)) with 72 vCPUs (host-passthrough) > and 340GB RAM. On qemu 2.5 from cloudarchive Newton the guest sits in > a boot loop, with qemu 2.10 from Pike we hit the "Windows failed to > start" screen (status: 0xc0000001). > > The interesting thing is that smaller numbers of vCPUs, e.g. splitting > the host in half 2x 36 vCPU guests, seems to be working fine. We've > also set the guest CPU topology so that Windoze is seeing the host > topology, but still no joy. Looking for some secret sauce we can poor > on this? > > -- > Cheers, > ~Blairo -- Cheers, ~Blairo From shilla.saebi at gmail.com Thu Feb 22 19:40:05 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Thu, 22 Feb 2018 14:40:05 -0500 Subject: [Openstack-operators] [User-committee] User Committee Elections Message-ID: Hi Everyone, Just a friendly reminder that voting is still open! Please be sure to check out the candidates - https://goo.gl/x183he - and vote before February 25th, 11:59 UTC. Thanks! Shilla On Mon, Feb 19, 2018 at 1:38 PM, wrote: > I saw election email with the pointer to votes. > > See no reason for stopping it now. But extending vote for 1 more week > makes sense. > > Thanks, > Arkady > > > > *From:* Melvin Hillsman [mailto:mrhillsman at gmail.com] > *Sent:* Monday, February 19, 2018 11:32 AM > *To:* user-committee ; OpenStack > Mailing List ; OpenStack Operators < > openstack-operators at lists.openstack.org>; OpenStack Dev < > openstack-dev at lists.openstack.org>; community at lists.openstack.org > *Subject:* [Openstack-operators] User Committee Elections > > > > Hi everyone, > > > > We had to push the voting back a week if you have been keeping up with the > UC elections[0]. That being said, election officials have sent out the poll > and so voting is now open! Be sure to check out the candidates - > https://goo.gl/x183he - and get your vote in before the poll closes. 
> > > > [0] https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > > > -- > > Kind regards, > > Melvin Hillsman > > mrhillsman at gmail.com > mobile: (832) 264-2646 > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aspiers at suse.com Thu Feb 22 21:30:10 2018 From: aspiers at suse.com (Adam Spiers) Date: Thu, 22 Feb 2018 21:30:10 +0000 Subject: [Openstack-operators] [self-healing][PTG] etherpad for PTG session on self-healing Message-ID: <20180222213010.wxsmgwvdy6vlwxgi@pacific.linksys.moosehall> Hi all, Yushiro kindly created an etherpad for the self-healing SIG session at the Dublin PTG on Tuesday afternoon next week, and I've fleshed it out a bit: https://etherpad.openstack.org/p/self-healing-ptg-rocky Anyone with an interest in self-healing is of course very welcome to attend (or keep an eye on it remotely!) This SIG is still very young, so it's a great chance for you to shape the direction it takes :-) If you are able to attend, please add your name, and also feel free to add topics which you would like to see covered. It would be particularly helpful if operators could participate and share their experiences of what is or isn't (yet!) working with self-healing in OpenStack, so that those of us on the development side can aim to solve the right problems :-) Thanks, and see some of you in Dublin! Adam From masha.atakova at mail.com Fri Feb 23 16:20:04 2018 From: masha.atakova at mail.com (Masha Atakova) Date: Fri, 23 Feb 2018 17:20:04 +0100 Subject: [Openstack-operators] ceilometer gnocchi switch in pike release Message-ID: An HTML attachment was scrubbed... URL: From mike at openstack.org Fri Feb 23 16:53:23 2018 From: mike at openstack.org (Mike Perez) Date: Fri, 23 Feb 2018 08:53:23 -0800 Subject: [Openstack-operators] Developer Mailing List Digest February 17-23rd Message-ID: <20180223165323.GC32596@openstack.org> HTML version: https://www.openstack.org/blog/?p=8332 Contribute to the Dev Digest by summarizing OpenStack Dev List threads: * https://etherpad.openstack.org/p/devdigest * http://lists.openstack.org/pipermail/openstack-dev/ * http://lists.openstack.org/pipermail/openstack-sigs Helpful PTG links ================== PTG is around the corner. Here are some helpful links: * Main welcome email http://lists.openstack.org/pipermail/openstack-dev/2018-February/127611.html * Quick links: http://ptg.openstack.org/ * PDF schedule: http://lists.openstack.org/pipermail/openstack-dev/attachments/20180221/5c279bb3/attachment-0002.pdf * PDf map for PTG venue: http://lists.openstack.org/pipermail/openstack-dev/attachments/20180221/5c279bb3/attachment-0003.pdf Success Bot Says ================ * mhayden: got centos OSA gate under 2h today * thingee: we have an on-boarding page and documentation for new contributors! [0] * Tell us yours in OpenStack IRC channels using the command "#success " * More: https://wiki.openstack.org/wiki/Successes [0] - https://www.openstack.org/community Thanks Bot Says =============== * Thanks pkovar for keep the Documentation team going! * Thanks pabelanger and infra for getting ubuntu mirrors repaired and backup quickly! 
* Thanks lbragstad for helping troubleshoot an intermittent fernet token validation failure in puppet gates * Thanks TheJulia for helping me with a problem last week, it was really a networking problem issue, like you said so :) * Thanks tosky for backporting devstack ansible changes to pike! * Thanks thingee for Thanks Bot * Thanks openstackstatus for logging our things * Thanks strigazi for the v1.9.3 image * Thanks smcginnis for not stopping this. * Tell us yours in OpenStack IRC channels using the command "#thanks " * More: https://wiki.openstack.org/wiki/Thanks Community Summaries =================== * TC report [0] * POST /api-sig/news [1] * Release countdown [2] [0] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127584.html [1] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127651.html [2] - http://lists.openstack.org/pipermail/openstack-dev/2018-February/127465.html Vancouver Community Contributor Awards ====================================== The Community Contributor Awards give recognition to those who are undervalued, don't know they're appreciated, bind the community together, keep things fun, or challenge some norm. There are a lot of people out there that could use a pat on the back and affirmation that they do good work in the community. The nomination period is open now [0] until May 14th. Winners will be announced in the feedback session at Vancouver. [0] - https://openstackfoundation.formstack.com/forms/cca_nominations_vancouver Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127634.html Release Naming For S - time to suggest a name! ============================================== It's time to pick a name for our "S" release! Since the associated Summit will be in Berlin, the geographic location has been chosen as "Berlin" (state). Nominations are now open [0]. Rules and processes can be seen on the Governance site [1]. [0] - https://wiki.openstack.org/wiki/Release_Naming/S_Proposals [1] - https://governance.openstack.org/tc/reference/release-naming.html Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127592.html Final Queens RC Deadline ======================== Thursday, February 22nd is the deadline for any final Queens release candidates. We'll enter a quiet period for a week in preparation of tagging the final Queens release during the PTG week. Make sure if you have patches merged to stable/queens that you propose a new RC before the deadline. PTLs should watch for a patch from the release management team tagging the final release. While not required, an acknowledgement on the patch would be appreciated. Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127540.html Do Not Import oslo_db.tests.* ============================= The DbFixture and DbTestCase classes in the oslo_db.sqlalchemy.test_base package were deprecated. In a patch [0], an assumption was made that these should be imported from oslo_db.tests.sqlalchemy instead. Cinder, Ironic and Glance have been found with this issue [1]. Unfortunately these modules were not prefixed with underscores, the naming convention that lets people recognize private code. The tests module was included so consumers can easily run those tests against their own packages.
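For illustration, the problematic pattern versus the still-public location looks roughly like this (a sketch only - the exact module layout under oslo_db.tests varies between releases, so the discouraged form is shown as a comment rather than a literal path):

    # Discouraged: oslo_db.tests.* is the library's own test code and is
    # effectively private, even though it is not underscore-prefixed, so
    # imports like "from oslo_db.tests.sqlalchemy import ..." can break
    # without warning on any oslo.db release.

    # Public (though deprecated) location named above:
    from oslo_db.sqlalchemy.test_base import DbFixture, DbTestCase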
[0] - https://review.openstack.org/#/c/522290/ [1] - http://codesearch.openstack.org/?q=oslo_db.tests&i=nope&files=&repos= Full thread: http://lists.openstack.org/pipermail/openstack-dev/2018-February/thread.html#127531 Some New Zuul Features ====================== The default timeout for the "post-run" phase of a job is 30 minutes. A new attribute "post-timeout" [0] can set this to something else, which could be useful for a job that performs a long artifact upload. Two new job attributes were added, "host-vars" and "group-vars" [1], which behave like "vars" but apply to a specific host or group. [0] - https://docs.openstack.org/infra/zuul/user/config.html#attr-job.post-timeout [1] - https://docs.openstack.org/infra/zuul/user/config.html#attr-job.host-vars Full message: http://lists.openstack.org/pipermail/openstack-dev/2018-February/127591.html -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From gord at live.ca Fri Feb 23 17:54:58 2018 From: gord at live.ca (gordon chung) Date: Fri, 23 Feb 2018 17:54:58 +0000 Subject: [Openstack-operators] ceilometer gnocchi switch in pike release In-Reply-To: References: Message-ID: On 2018-02-23 11:20 AM, Masha Atakova wrote: > I reconfigured the setup accordingly, to use gnocchi as a storage, but > the problem is: ceilometer has multiple types of metrics while gnocchi > doesn’t have metric types at all. > And it doesn’t really make sense to store metrics of cumulative type > (like compute.node.cpu.kernel.time, switch.port.receive.bytes, energy) > and of gauge type (like vcpus, volume.size, image.size) in one place. i'm curious why this is an issue? they're just numbers. could you explain in case we missed a use case. > > Have any of you faced this problem? If yes, what are the workarounds > that you’ve found sufficient? > I’ve been thinking about different ways to handle this problem, like > converting data before it gets written to gnocchi, but I believe there > should be a better solution! gnocchi does provide statistical/mathematical operations you can apply to the series. but again, not sure what issue you're encountering. cheers, -- gord From pradhanparas at gmail.com Fri Feb 23 22:25:15 2018 From: pradhanparas at gmail.com (Paras pradhan) Date: Fri, 23 Feb 2018 16:25:15 -0600 Subject: [Openstack-operators] custom dashboards Message-ID: What do you guys use to create custom dashboard for openstack? Do you use django and python openstack sdk or modify the horizon dashboard? Thanks Paras. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bitskrieg at bitskrieg.net Sun Feb 25 00:58:06 2018 From: bitskrieg at bitskrieg.net (Chris Apsey) Date: Sat, 24 Feb 2018 19:58:06 -0500 Subject: [Openstack-operators] [keystone] Timeouts reading response headers in keystone-wsgi-public and keystone-wsgi-admin In-Reply-To: <31695ccdfb12b40da56b78fb2fe17f24@bitskrieg.net> References: <1F260C7A-7B2C-4742-B4BB-865AF7F9FA94@telfer.org> <31695ccdfb12b40da56b78fb2fe17f24@bitskrieg.net> Message-ID: All, As a follow up to this, we eventually discovered that the issues were caused by a recent migration to a new public cloud provider (they host our stateful data - user accounts, etc.). Our old cloud provider allowed for publicly-routable IPs to be directly attached to instances, and our new one does not (they utilize 1:1 NAT).
The 1:1 NAT was causing issued with our LDAP provider (ipa) and would occasionally cause the keystone-wsgi service to hang (but only when leveraging the LDAP keystone domain - accounts in the default sql domain worked fine). Pointing keystone at a fully-routable address over a VPN instead of at the 1:1 NAT address over the internet caused the issues to disappear. This may have been more of a FreeIPA not liking 1:1 NAT more than anything else, but just thought I'd share in case anyone ran into something similar in the future. --- v/r Chris Apsey bitskrieg at bitskrieg.net https://www.bitskrieg.net On 2018-02-20 10:27 PM, Chris Apsey wrote: > All, > > Currently experiencing a sporadic issue with our keystone endpoints. > Throughout the day, keystone will just stop responding on both the > admin and public endpoints, which will cause all services to hang. > Restarting apache2 fixes the issue for some amount of time, but it > inevitably appears again later on. Here is what I am seeing: > > keystone: /var/log/apache2/keystone.log > > 2018-02-20 21:50:38.830302 Timeout when reading response headers > from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin > 2018-02-20 21:50:50.799587 Timeout when reading response headers > from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin > 2018-02-20 21:51:02.857266 Timeout when reading response headers > from daemon process 'keystone-admin': /usr/bin/keystone-wsgi-admin > 2018-02-20 21:51:02.879630 mod_wsgi (pid=1221): Exception occurred > processing WSGI script '/usr/bin/keystone-wsgi-admin'. > 2018-02-20 21:51:02.879796 IOError: failed to write data > 2018-02-20 21:51:07.005702 mod_wsgi (pid=1220): Exception occurred > processing WSGI script '/usr/bin/keystone-wsgi-admin'. > > horizon: /var/log/apache2/error.log > > [Tue Feb 20 21:47:02.582511 2018] [wsgi:error] [pid 1227:tid > 140591048296192] [client 10.10.5.200:57462] Timeout when reading > response headers from daemon process 'horizon': > /usr/share/openstack-dashboard/openstack_dashboard/wsgi/django.wsgi, > referer: > https://vta.cybbh.space/horizon/project/instances/900e9d57-752d-488c-8dba-ffc098e1a51a/ > [Tue Feb 20 21:48:03.962589 2018] [wsgi:error] [pid 1225:tid > 140591249823488] ERROR openstack_auth.user Unable to retrieve project > list. 
> [Tue Feb 20 21:48:03.962646 2018] [wsgi:error] [pid 1225:tid > 140591249823488] Traceback (most recent call last): > [Tue Feb 20 21:48:03.962656 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/openstack_auth/user.py", line 350, > in authorized_tenants > [Tue Feb 20 21:48:03.962665 2018] [wsgi:error] [pid 1225:tid > 140591249823488] is_federated=self.is_federated) > [Tue Feb 20 21:48:03.962673 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/openstack_auth/utils.py", line 372, > in get_project_list > [Tue Feb 20 21:48:03.962734 2018] [wsgi:error] [pid 1225:tid > 140591249823488] projects = > client.projects.list(user=kwargs.get('user_id')) > [Tue Feb 20 21:48:03.962744 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, > in inner > [Tue Feb 20 21:48:03.962752 2018] [wsgi:error] [pid 1225:tid > 140591249823488] return wrapped(*args, **kwargs) > [Tue Feb 20 21:48:03.962759 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneclient/v3/projects.py", line > 119, in list > [Tue Feb 20 21:48:03.962767 2018] [wsgi:error] [pid 1225:tid > 140591249823488] **kwargs) > [Tue Feb 20 21:48:03.962774 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 75, in > func > [Tue Feb 20 21:48:03.962782 2018] [wsgi:error] [pid 1225:tid > 140591249823488] return f(*args, **new_kwargs) > [Tue Feb 20 21:48:03.962789 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 390, > in list > [Tue Feb 20 21:48:03.962796 2018] [wsgi:error] [pid 1225:tid > 140591249823488] self.collection_key) > [Tue Feb 20 21:48:03.962803 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneclient/base.py", line 125, > in _list > [Tue Feb 20 21:48:03.962811 2018] [wsgi:error] [pid 1225:tid > 140591249823488] resp, body = self.client.get(url, **kwargs) > [Tue Feb 20 21:48:03.962818 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 288, > in get > [Tue Feb 20 21:48:03.962826 2018] [wsgi:error] [pid 1225:tid > 140591249823488] return self.request(url, 'GET', **kwargs) > [Tue Feb 20 21:48:03.962833 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 447, > in request > [Tue Feb 20 21:48:03.962841 2018] [wsgi:error] [pid 1225:tid > 140591249823488] resp = super(LegacyJsonAdapter, > self).request(*args, **kwargs) > [Tue Feb 20 21:48:03.962848 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneauth1/adapter.py", line 192, > in request > [Tue Feb 20 21:48:03.962855 2018] [wsgi:error] [pid 1225:tid > 140591249823488] return self.session.request(url, method, > **kwargs) > [Tue Feb 20 21:48:03.962863 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/positional/__init__.py", line 101, > in inner > [Tue Feb 20 21:48:03.962870 2018] [wsgi:error] [pid 1225:tid > 140591249823488] return wrapped(*args, **kwargs) > [Tue Feb 20 21:48:03.962877 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 703, > in request > [Tue Feb 20 21:48:03.962885 2018] 
[wsgi:error] [pid 1225:tid > 140591249823488] resp = send(**kwargs) > [Tue Feb 20 21:48:03.962892 2018] [wsgi:error] [pid 1225:tid > 140591249823488] File > "/usr/lib/python2.7/dist-packages/keystoneauth1/session.py", line 777, > in _send_request > [Tue Feb 20 21:48:03.962899 2018] [wsgi:error] [pid 1225:tid > 140591249823488] raise exceptions.ConnectFailure(msg) > [Tue Feb 20 21:48:03.962907 2018] [wsgi:error] [pid 1225:tid > 140591249823488] ConnectFailure: Unable to establish connection to > https://*******:5000/v3/users/7e68b998ee1ec26139d3482818c9643d1ce3b5aff532c865cff65e1c9fe01306/projects?: > ('Connection aborted.', BadStatusLine("''",)) > > > I get the same behavior regardless of service and regardless of > whether or not I use the CLI or Horizon. All signs point to keystone > being the culprit. > > I have adjusted my /etc/apache2/sites-available/keystone.conf: > > WSGIDaemonProcess keystone-public processes=8 threads=4 user=keystone > group=keystone display-name=%{GROUP} > WSGIDaemonProcess keystone-admin processes=8 threads=4 user=keystone > group=keystone display-name=%{GROUP} > > And ensured that WSGIApplicationGroup %{GLOBAL} is present. > > haproxy is sitting in between keystone and all other services, and is > configured as follows: > > defaults > log global > maxconn 16384 > option redispatch > retries 3 > timeout http-request 30s > timeout queue 1m > timeout connect 30s > timeout client 2m > timeout server 2m > timeout check 10s > > ... > > listen keystone_admin_cluster > bind 10.10.5.200:35357 > balance source > option tcpka > option httpchk > option tcplog > server keystone-0 10.10.5.120:35357 check inter 2000 rise 2 fall 5 > server keystone-1 10.10.5.121:35357 check inter 2000 rise 2 fall 5 > > listen keystone_public_internal_cluster > bind 10.50.10.0:5000 ssl crt /etc/letsencrypt/live/*****/master.pem > bind 10.10.5.200:5000 > balance roundrobin > option tcpka > option httpchk > option tcplog > server keystone-0 10.10.5.120:5000 check inter 2000 rise 2 fall 5 > server keystone-1 10.10.5.121:5000 check inter 2000 rise 2 fall 5 > > ... > > > Any ideas on where else I should look? > > Thanks in advance, > > --- > v/r > > Chris Apsey > bitskrieg at bitskrieg.net > https://www.bitskrieg.net From tobias at citynetwork.se Sun Feb 25 13:21:30 2018 From: tobias at citynetwork.se (Tobias Rydberg) Date: Sun, 25 Feb 2018 14:21:30 +0100 Subject: [Openstack-operators] [publiccloud-wg][PTG] Schedule Message-ID: Hi folks, Here is the schedule for the Public Cloud WG parts at the PTG next week. Come int person or join remote. We will try to get some link up for remotes to join. Will get back with more information around that as soon as I have something to share. https://etherpad.openstack.org/p/publiccloud-wg-ptg-rocky Cheers, Tobias -- Tobias Rydberg Senior Developer Mobile: +46 733 312780 www.citynetwork.eu | www.citycloud.com INNOVATION THROUGH OPEN IT INFRASTRUCTURE ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3945 bytes Desc: S/MIME Cryptographic Signature URL: From shilla.saebi at gmail.com Sun Feb 25 23:52:16 2018 From: shilla.saebi at gmail.com (Shilla Saebi) Date: Sun, 25 Feb 2018 18:52:16 -0500 Subject: [Openstack-operators] User Committee Election Results - February 2018 Message-ID: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! 
The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla -------------- next part -------------- An HTML attachment was scrubbed... URL: From snow19642003 at yahoo.com Mon Feb 26 01:30:33 2018 From: snow19642003 at yahoo.com (William Genovese) Date: Mon, 26 Feb 2018 01:30:33 +0000 (UTC) Subject: [Openstack-operators] [openstack-community] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <1169833579.5810488.1519608633249@mail.yahoo.com> Hi- Can you tell me who is chairing the Financial Services Industry OpenStack Cloud Committee? I thought there was one at one time? I'd like to (at a minimum) sign up and contribute to this. Thank you, William (Bill) M Genovese (威廉 (比尔) 迈克尔 · 吉诺维斯) Vice President Corporate Strategy Planning | Banking, Financial Services andIT Services Solutions HUAWEI TECHNOLOGIES CO., LTD. Bantian, Longgang District Shenzhen 518129 P.R. China www.huawei.com Mobile: +86 132-4373-0940 (CN) +1 704-906-3558 (US) Email: william.michael.genovese at huawei.com Wechat: Wechat: Bill277619782016 LinkedIn: https://www.linkedin.com/in/wgenovese On ‎Monday‎, ‎February‎ ‎26‎, ‎2018‎ ‎07‎:‎53‎:‎58‎ ‎AM, Shilla Saebi wrote: Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla _______________________________________________ Community mailing list Community at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/community -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Mon Feb 26 09:31:00 2018 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Mon, 26 Feb 2018 09:31:00 +0000 Subject: [Openstack-operators] [User-committee] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <0e96f3a69451488aabf5c9de9aaa2a1e@AUSX13MPS308.AMER.DELL.COM> Congrats to new committee members. And thanks for great job for previous ones. From: Shilla Saebi [mailto:shilla.saebi at gmail.com] Sent: Sunday, February 25, 2018 5:52 PM To: user-committee ; OpenStack Mailing List ; OpenStack Operators ; OpenStack Dev ; community at lists.openstack.org Subject: [User-committee] User Committee Election Results - February 2018 Hello Everyone! Please join me in congratulating 3 newly elected members of the User Committee (UC)! The winners for the 3 seats are: Melvin Hillsman Amy Marrich Yih Leong Sun Full results can be found here: https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 Election details can also be found here: https://governance.openstack.org/uc/reference/uc-election-feb2018.html Thank you to all of the candidates, and to all of you who voted and/or promoted the election! Shilla -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jimmy at openstack.org Mon Feb 26 09:40:57 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Mon, 26 Feb 2018 09:40:57 +0000 Subject: [Openstack-operators] User Committee Election Results - February 2018 In-Reply-To: References: Message-ID: <5A93D629.2000704@openstack.org> Congrats everyone! And thanks to the UC Election Committee for managing :) Cheers, Jimmy > Shilla Saebi > February 25, 2018 at 11:52 PM > Hello Everyone! > > Please join me in congratulating 3 newly elected members of the User > Committee (UC)! The winners for the 3 seats are: > > Melvin Hillsman > Amy Marrich > Yih Leong Sun > > Full results can be found here: > https://civs.cs.cornell.edu/cgi-bin/results.pl?id=E_f7b17dc638013045 > > Election details can also be found here: > https://governance.openstack.org/uc/reference/uc-election-feb2018.html > > Thank you to all of the candidates, and to all of you who voted and/or > promoted the election! > > Shilla > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: From adriant at catalyst.net.nz Wed Feb 28 01:34:07 2018 From: adriant at catalyst.net.nz (Adrian Turjak) Date: Wed, 28 Feb 2018 14:34:07 +1300 Subject: [Openstack-operators] custom dashboards In-Reply-To: References: Message-ID: We do a lot of custom dashboard stuff by adding additional dashboards and panels into Horizon. While it's a bit weird at first, Horizon's plugin mechanisms are reasonably flexible and good. On 24/02/18 11:25, Paras pradhan wrote: > What do you guys use to create custom dashboard for openstack? Do you > use django and python openstack sdk or modify the horizon dashboard? > > > Thanks > Paras. > > > _______________________________________________ > OpenStack-operators mailing list > OpenStack-operators at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-operators -------------- next part -------------- An HTML attachment was scrubbed... URL: