From oliver.weinmann at me.com Sun Nov 1 07:55:53 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 1 Nov 2020 08:55:53 +0100 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) Message-ID: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> Hi, I'm still in the process of preparing a OpenStack POC. I'm 100% sure that I want to use CEPH and so I purchased the book Mastering CEPH 2nd edition. First of all, It's a really good book. It basically explains the various methods how ceph can be deployed and also the components that CEPH is build of. So I played around a lot with ceph-ansible and rook in my virtualbox environment. I also started to play with tripleo ceph deployment, although I haven't had the time yet to sucessfully deploy a openstack cluster with CEPH. Now I'm wondering, which of these 3 methods should I use? rook ceph-ansible tripleo I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of VMs running under vSphere that currently consume storage from a ZFS storage via CIFS and NFS. I don't know if rook can be used for this. I have the feeling that it is purely meant to be used for kubernetes? And If I would like to have CIFS and NFS maybe tripleo is not capable of enabling these features in CEPH? So I would be left with ceph-ansible? Any suggestions are very welcome. Best Regards, Oliver From jerome.pansanel at iphc.cnrs.fr Sun Nov 1 08:49:09 2020 From: jerome.pansanel at iphc.cnrs.fr (Jerome Pansanel) Date: Sun, 1 Nov 2020 09:49:09 +0100 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> References: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> Message-ID: Hi Oliver, Due to recent changes in CEPH Nautilus and Octopus (new Orchestrator APIs), we decided to switch to Rook. It is still in an experimental stage on our platform, but it seems promising. Cheers, Jerome Le 01/11/2020 à 08:55, Oliver Weinmann a écrit : > Hi, > > I'm still in the process of preparing a OpenStack POC. I'm 100% sure > that I want to use CEPH and so I purchased the book Mastering CEPH 2nd > edition. First of all, It's a really good book. It basically explains > the various methods how ceph can be deployed and also the components > that CEPH is build of. So I played around a lot with ceph-ansible and > rook in my virtualbox environment. I also started to play with tripleo > ceph deployment, although I haven't had the time yet to sucessfully > deploy a openstack cluster with CEPH. Now I'm wondering, which of these > 3 methods should I use? > > rook > > ceph-ansible > > tripleo > > I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of > VMs running under vSphere that currently consume storage from a ZFS > storage via CIFS and NFS. I don't know if rook can be used for this. I > have the feeling that it is purely meant to be used for kubernetes? And > If I would like to have CIFS and NFS maybe tripleo is not capable of > enabling these features in CEPH? So I would be left with ceph-ansible? > > Any suggestions are very welcome. 
> > Best Regards, > > Oliver > > -- Jerome Pansanel, PhD Technical Director at France Grilles Head of Scientific Cloud Infrastructure (SCIGNE) at IPHC IPHC || GSM: +33 (0)6 25 19 24 43 23 rue du Loess, BP 28 || Tel: +33 (0)3 88 10 66 24 F-67037 STRASBOURG Cedex 2 || Fax: +33 (0)3 88 10 62 34 From florian at datalounges.com Sun Nov 1 09:19:45 2020 From: florian at datalounges.com (Florian Rommel) Date: Sun, 1 Nov 2020 11:19:45 +0200 Subject: Weird iptables problem Message-ID: <39FFD224-4615-4DBE-A169-711ECA78EB17@datalounges.com> Hi, so we deployed a test cluster with 8 total nodes. Openstack ussuri, Ubuntu 20.04 , opevswitch, octavia. Everything works more or less as it should , however when deploying a loadbalancer (which works perfectly), instances on the same node as the amphora deployment loose all access via public up after open switch logs a message : modified security group “long uuid”, which is a sec group belonging to octavia. What effect does that have with the rest? Questions: Should ufw be on and started? How can we troubleshoot this more in depth? Why does the lb work perfectly whole instance with floating up loos all access from the internet but the lb works fine passing thru the web services. One more thing. Does anyone have a recommended sysctl.conf Settings list for Ubuntu/ openstack ussuri? Thanks already and have a nice weekend, //f From oliver.weinmann at me.com Sun Nov 1 09:28:48 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Sun, 1 Nov 2020 10:28:48 +0100 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: References: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> Message-ID: Hi Jerome, thanks for your reply. Ok, but can rook also be used outside of kubernetes? Can I provision e.g. volumes for openstack cinder? Am 01.11.2020 um 09:49 schrieb Jerome Pansanel: > Hi Oliver, > > Due to recent changes in CEPH Nautilus and Octopus (new Orchestrator > APIs), we decided to switch to Rook. It is still in an experimental > stage on our platform, but it seems promising. > > Cheers, > > Jerome > > Le 01/11/2020 à 08:55, Oliver Weinmann a écrit : >> Hi, >> >> I'm still in the process of preparing a OpenStack POC. I'm 100% sure >> that I want to use CEPH and so I purchased the book Mastering CEPH 2nd >> edition. First of all, It's a really good book. It basically explains >> the various methods how ceph can be deployed and also the components >> that CEPH is build of. So I played around a lot with ceph-ansible and >> rook in my virtualbox environment. I also started to play with tripleo >> ceph deployment, although I haven't had the time yet to sucessfully >> deploy a openstack cluster with CEPH. Now I'm wondering, which of these >> 3 methods should I use? >> >> rook >> >> ceph-ansible >> >> tripleo >> >> I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of >> VMs running under vSphere that currently consume storage from a ZFS >> storage via CIFS and NFS. I don't know if rook can be used for this. I >> have the feeling that it is purely meant to be used for kubernetes? And >> If I would like to have CIFS and NFS maybe tripleo is not capable of >> enabling these features in CEPH? So I would be left with ceph-ansible? >> >> Any suggestions are very welcome. 
>> >> Best Regards, >> >> Oliver >> >> > From johfulto at redhat.com Sun Nov 1 18:34:06 2020 From: johfulto at redhat.com (John Fulton) Date: Sun, 1 Nov 2020 13:34:06 -0500 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: References: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> Message-ID: On Sun, Nov 1, 2020 at 4:32 AM Oliver Weinmann wrote: > > Hi Jerome, > > thanks for your reply. Ok, but can rook also be used outside of > kubernetes? Can I provision e.g. volumes for openstack cinder? Rook provides storage for k8s and Ceph is one of its storage providers. I don't think a Ceph cluster as orchestrated by rook was designed for access outside of k8s. I've seen people expose parts of it outside of k8s [0] but that type of thing doesn't seem to be a standard rook pattern. I wonder how many people are doing that. If you want Ceph to provide pools to both k8s and OpenStack, then one approach I see is the following: 1. Install Ceph on its own independent system so it's external to both OpenStack and k8s and use cephadm to do that deployment [1] 2. Install rook for k8s access to Ceph but direct rook to the external Ceph cluster from step 1 [2] 3. Install OpenStack but direct it to the external Ceph cluster from step 1, e.g. TripleO can do this [3] I'm suggesting a tool to install Ceph, cephadm [1], which isn't in the current list of tools in the subject. One benefit of this tool is that you'd have the new Orchestrator APIs that Jerome mentioned but you wouldn't have to use k8s if you don't want to. The downside is that you don't have k8s operators directly managing Ceph. An upside of using the 3-step approach above is that one Ceph cluster can be used as the same backend to both OpenStack and k8s services. I haven't done this myself but I don't see why this wouldn't work. You'd just manage different pools/cephx keys for different services. If you're not planning to use k8s, the above (without step 2) would also work well for a large deployment. If you're interested in a smaller footprint with just OpenStack and Ceph, perhaps where you collocate OSD and Compute containers (aka "hyperconverged"), then TripleO will deploy that too and it uses ceph-ansible to deploy Ceph behind the scenes [4]. TripleO can also deploy CephFS for NFS access from OpenStack tenants via ganesha. This can work even if the Ceph cluster itself is external [6] though access to the NFS service is meant for OpenStack tenants, not e.g. VMWare tenants. This method of deployment will likely evolve over the next two TripleO cycles to use cephadm in place of ceph-ansible [7]. John [0] https://www.adaltas.com/en/2020/04/16/expose-ceph-from-rook-kubernetes/ [1] https://docs.ceph.com/en/latest/cephadm [2] https://github.com/rook/rook/blob/master/design/ceph/ceph-external-cluster.md [3] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ceph_external.html [4] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ceph_config.html [5] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/deploy_manila.html [6] https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/deploy_manila.html#deploying-the-overcloud-with-an-external-backend [7] https://review.opendev.org/#/c/723108/ > > Am 01.11.2020 um 09:49 schrieb Jerome Pansanel: > > Hi Oliver, > > > > Due to recent changes in CEPH Nautilus and Octopus (new Orchestrator > > APIs), we decided to switch to Rook. 
It is still in an experimental > > stage on our platform, but it seems promising. > > > > Cheers, > > > > Jerome > > > > Le 01/11/2020 à 08:55, Oliver Weinmann a écrit : > >> Hi, > >> > >> I'm still in the process of preparing a OpenStack POC. I'm 100% sure > >> that I want to use CEPH and so I purchased the book Mastering CEPH 2nd > >> edition. First of all, It's a really good book. It basically explains > >> the various methods how ceph can be deployed and also the components > >> that CEPH is build of. So I played around a lot with ceph-ansible and > >> rook in my virtualbox environment. I also started to play with tripleo > >> ceph deployment, although I haven't had the time yet to sucessfully > >> deploy a openstack cluster with CEPH. Now I'm wondering, which of these > >> 3 methods should I use? > >> > >> rook > >> > >> ceph-ansible > >> > >> tripleo > >> > >> I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of > >> VMs running under vSphere that currently consume storage from a ZFS > >> storage via CIFS and NFS. I don't know if rook can be used for this. I > >> have the feeling that it is purely meant to be used for kubernetes? And > >> If I would like to have CIFS and NFS maybe tripleo is not capable of > >> enabling these features in CEPH? So I would be left with ceph-ansible? > >> > >> Any suggestions are very welcome. > >> > >> Best Regards, > >> > >> Oliver > >> > >> > > > From andy at andybotting.com Sun Nov 1 23:11:30 2020 From: andy at andybotting.com (Andy Botting) Date: Mon, 2 Nov 2020 10:11:30 +1100 Subject: [MURANO] DevStack Murano "yaql_function" Error In-Reply-To: References: Message-ID: Hi Rong and İzzetti, I haven't solved the issue yet, but if I do, I'll push up a review. > I had some time to track down the error AttributeError: 'method' object has no attribute '__yaql_function__' which turned out to be a Python 3 issue and I've got a review up here for it https://review.opendev.org/#/c/760470/ I've fixed a couple of Python 3 specific issues recently, so I'd recommend you either use the Train version with Python 2 or if you want to run on Python 3 then you'll need to use master right now with my patches on top. cheers, Andy -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Mon Nov 2 02:03:22 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Mon, 2 Nov 2020 10:03:22 +0800 Subject: How to config DPDK compute in OVN Message-ID: Hi....as far I understand ovs does support dpdk compute to be deployed. I'm referring to the guideline ' *https://docs.openstack.org/networking-ovn/queens/admin/dpdk.html * ' ... It's too brief.. May I know how to config dpdk compute in OVN networking openstack? Hope someone could help what is the config to be done.... Currently compute is run without dpdk over openstack train ovn based. Please need some info on how I can proceed. Thank you -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Mon Nov 2 05:36:31 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Mon, 2 Nov 2020 11:06:31 +0530 Subject: [Glance] Wallaby PTG summary Message-ID: Hi All, We had our second virtual PTG last week from 26th October to 30th October 2020. Thanks to everyone who joined the virtual PTG sessions. Using bluejeans app we had lots of discussion around different topics for glance, glance + cinder and glance + tempest. 
I have created etherpad [1] with Notes from session and which also includes the recordings of each discussion. The same etherpad includes milestone wise priorities at the bottom. Below topics we discussed during PTG: 1. Victoria retrospection 2. Image encryption (progress) 3. Remove single store configuration from glance_store 4. Cache-API 5. Duplicate downloads 6. Extend issue when using nfs cinder backend (with cinder team) 7. Multi-format images 8. Cluster awareness 9. Improve performance of ceph/rbd store of glance (multi-threaded rbd driver) 10. Task API's for general users 11. Tempest enhancement/bridge gap between glance and tempest 12. Code cleanup, enhancements 13. Wallaby priorities Kindly let me know if you have any questions about the same. [1] https://etherpad.opendev.org/p/glance-wallaby-ptg Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From rafaelweingartner at gmail.com Mon Nov 2 11:43:30 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 2 Nov 2020 08:43:30 -0300 Subject: [Cloudkitty][ptg] CloudKitty Wallaby PTG outcome Message-ID: Hey people, Thanks to everyone who attended the CloudKitty PTG sessions, we covered a lot during the meeting. The Etherpad is available at [1]. In Etherpad [1], I added the notation "TODO(XXX):" in the topics we discussed, that require effort from someone. The work can be either writing a use case story, developing extensions/features, or creating test cases. Therefore, if you are interested in working with that item, do not hesitate and put your name there. Anyone in the community is welcome. [1] https://etherpad.opendev.org/p/cloudkitty-ptg-wallaby -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesse at odyssey4.me Mon Nov 2 11:55:35 2020 From: jesse at odyssey4.me (Jesse Pretorius) Date: Mon, 2 Nov 2020 11:55:35 +0000 Subject: [tripleo] please avoid creating ansible playbooks that call modules for trivial tasks In-Reply-To: References: <0bea7456-e413-1f05-ce91-8b1738c8777b@redhat.com> Message-ID: On Thu, 2020-10-29 at 07:00 -0600, Alex Schultz wrote: On Thu, Oct 29, 2020 at 5:32 AM Bogdan Dobrelya < bdobreli at redhat.com > wrote: In today TripleO client development we invoke ansible playbooks instead of Mistral workflows. I believe this may only be justified for non-trivial things. Otherwise please just import python modules into the client command actions without introducing additional moving parts in the form of ansible modules (that import the same python modules as well). I see no point of adding a new playbook and a new module for that trivial example. Those 4 packages could (and should) be as well installed from the client caller code without any ansible involved in the middle IMO. While I can agree to a certain extent, there's actually some good reasons to even move trivial bits into ansible. Personally I'm not certain the switch to using ansible under the covers from some cli actions is an improvement (questionable logging, error handling isn't great), there is a case for certain actions. As we discussed at the PTG, the overcloud image building process is one of those things that actually has to be executed on bare metal. If we wanted to continue to look at containerizing the cli, we need to be able to invoke this action from within the container but build on an external host. 
This is something that is trivial with the switch to an ansible playbook that isn't available when running under the pure python as it exists today. Container builds would be another example action that is required to run on a bare metal host. Additionally the movement of this invocation to an ansible module also allows the action to be moved into something like the undercloud installation as an optional action as part of the deployment itself. It's not exactly without merit in this case. I don't really care one way or another for this action, however I don't think it's as simple as saying "oh it's just a few lines of code so we shouldn't..." What it sounds like is that there's a need for documented guidelines. A lot of changes have been made as part of a learning process and we now know a lot more about what tasks are better suited to be done directly in the client vs via ansible roles vs via ansible modules. If we can document these best practises then we can guide any new changes according to them. It seems to me that we need to consider: 1. Requirements - what needs to get done 2. Constraints - does the action need something special like access to devices or kernel API's 3. Testability - something in python or an ansible module is unit testable, whereas an ansible role is more difficult to properly test 4. Scalability - complex ansible tasks/vars scale far worse that ansible modules 5. Maintainability - many factors are involved here, but sometimes efficiency should be sacrificed for simplicity -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Mon Nov 2 12:04:47 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Mon, 2 Nov 2020 20:04:47 +0800 Subject: [undercloud][train]Fatal error during train undercloud deployment Message-ID: Hi all, I keep getting the same fatal error below during undercloud deployment. My environment is VMware VM Centos7 undercloud (openstack train) and refers to the official openstack guideline. TASK [tripleo_container_manage : Print failing containers] ********************* Monday 02 November 2020 19:55:07 +0800 (0:00:00.182) 0:17:50.677 ******* fatal: [undercloud]: FAILED! => changed=false msg: 'Container(s) with bad ExitCode: [u''container-puppet-ironic'', u''container-puppet-zaqar''], check logs in /var/log/containers/stdouts/' I even tried with clean different VM and all give me same error. Appreciate some help please. Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at ya.ru Mon Nov 2 12:18:49 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Mon, 02 Nov 2020 14:18:49 +0200 Subject: [openstack-ansible] Wallaby PTG results Message-ID: <2577631604317234@mail.yandex.ru> Hi, Bellow you may find summarization of descisions that were taken during PTG discussion. Victoria retrospective ---------------------------- 1. We're pretty far from dropping resource creation code out of the tempest, as we should respect TripleO needs here. So leaving as is during this cycle but adding option to be possible to omit resource creation. 2. We decided to postpone prometheus integration (as well as libvirtd exporter) as we don't have enough resource to implement and moreover support it in the future. 3. Decided to leave ELK stack in ops repo as for now. 4. Adjusting bootstrap-ansible to pick up gerrit patches idea was also postponed as has super low prio and we have a lot of more useful things on our hands 5. 
We didn't have much progress in tempest coverage of existing roles; we should proceed with it during W. The same applies to the overall roles speedup topic.

Decisions taken for Wallaby
--------------------------------------
1. Remove nspawn from docs, and set it to unmaintained state in V. Remove the code in W.
2. Document how upgrade paths for distro installs are difficult, and may/may not align with OSA releases.
3. Check through the logs which roles we are covering with tempest, and which not.
4. Extract deprecation messages from journal files to a separate log.
5. Continue working on speeding up the OSA runtime:
   - Fight with skipped tasks (i.e. by moving them to separate files that would be included) - most valid for the systemd_service, systemd_networkd and python_venv_build roles
   - Try to split up variables by group_vars
   - Try to use include instead of imports again
6. Set TRANSFORM_INVALID_GROUP_CHARS to ignore in V and replace invalid chars in groups later. For octavia we can temporarily add 2 groups (old style and new style) for the migration process.
7. Try to speed up the zuul required-projects clone process and talk to the zuul folks.
8. Finally do an integrated repo deploy for Zun and sync the deployment process with the service docs.
9. Installation of ARA for deployer needs:
   - ARA deploy on localhost, configured by variables
   - Option for remote ARA
   - Option for a UI server locally
10. Create the root CA on the deploy host and refactor the overall SSL approach. Further discussion in the spec: https://review.opendev.org/#/c/758805/ Original etherpad for reference: https://etherpad.opendev.org/p/osa-certificates-refactor
11. For ansible 2.11 (which requires py3.8 on the deploy host) we can use pyenv for the ansible venv to work around this, and leave the hosts themselves on their native python.
12. Move neutron-server to a standalone group in env.d.
13. Add Senlin tempest tests:
    - Add plugins (source only) to os_tempest https://review.opendev.org/754044
    - Upstream senlin endpoints fix https://review.opendev.org/74987
    - Enable senlin tempest plugin in integrated repo https://review.opendev.org/#/c/754105/
    - Test patch for senlin/tempest https://review.opendev.org/754045
14. Document in os_nova defaults the way to manage ratios via the API (i.e. set cpu_allocation_ratio to 0.0) (regarding https://review.opendev.org/#/c/758029/).
15. Add support for zookeeper deployment for services coordination (low priority).
16. Check where we don't use the uwsgi role and see if we can use it there now (like designate, neutron-api) (low priority).

--
Kind Regards,
Dmitriy Rabotyagov

From Aija.Jaunteva at dell.com Mon Nov 2 12:30:25 2020 From: Aija.Jaunteva at dell.com (Jaunteva, Aija) Date: Mon, 2 Nov 2020 12:30:25 +0000 Subject: [ironic] Configuration mold follow-up Message-ID:
Hi, to follow up on the Ironic PTG session about Configuration molds [1], I'm scheduling a call to discuss the remaining items (mainly storage of the molds). Anyone interested, please add your availability in Doodle [2]. Once a time slot is decided, I will share the call details. Regards, Aija [1] https://etherpad.opendev.org/p/ironic-wallaby-ptg line 216 [2] https://doodle.com/poll/dry4x5tbmhi6x6p3 -------------- next part -------------- An HTML attachment was scrubbed...
URL: From balazs.gibizer at est.tech Mon Nov 2 13:07:15 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 02 Nov 2020 14:07:15 +0100 Subject: [nova] python3.9 support Message-ID: <3476JQ.0LX9S5E3FLUD2@est.tech> Hi, I run unit and functional tests (where applicable) for nova, python-novaclient, os-vif, placement, osc-placement, os-traits, os-resource-classes with python 3.9 and everything is green. I've proposed some tox.ini targets where it was needed [1][2][3] Cheers, gibi [1]https://review.opendev.org/#/c/760884/ [2]https://review.opendev.org/#/c/760890/ [3]https://review.opendev.org/#/c/760912/ From aschultz at redhat.com Mon Nov 2 14:16:54 2020 From: aschultz at redhat.com (Alex Schultz) Date: Mon, 2 Nov 2020 07:16:54 -0700 Subject: [undercloud][train]Fatal error during train undercloud deployment In-Reply-To: References: Message-ID: On Mon, Nov 2, 2020 at 5:11 AM dangerzone ar wrote: > > Hi all, I keep getting the same fatal error below during undercloud deployment. My environment is VMware VM Centos7 undercloud (openstack train) and refers to the official openstack guideline. > > TASK [tripleo_container_manage : Print failing containers] ********************* > Monday 02 November 2020 19:55:07 +0800 (0:00:00.182) 0:17:50.677 ******* > fatal: [undercloud]: FAILED! => changed=false > msg: 'Container(s) with bad ExitCode: [u''container-puppet-ironic'', u''container-puppet-zaqar''], check logs in /var/log/containers/stdouts/' > > I even tried with clean different VM and all give me same error. Appreciate some help please. Thank you. Check containers, they are likely centos8 based containers which fail when trying to deploy on centos7. At this point it's highly recommended to use centos8 for train as it's readily available from RDO now. From ionut at fleio.com Mon Nov 2 14:23:19 2020 From: ionut at fleio.com (Ionut Biru) Date: Mon, 2 Nov 2020 16:23:19 +0200 Subject: [magnum] hyperkube deprecation situation In-Reply-To: <601fe2f9-52fb-c55f-12c7-8da36c2d9484@catalyst.net.nz> References: <601fe2f9-52fb-c55f-12c7-8da36c2d9484@catalyst.net.nz> Message-ID: Hi Feilong, Thanks for the reply. My issue is that currently there is no way I can overwrite hyperkube and use a 3rd party image because once I set up container_infra_prefix': ' docker.io/rancher/', magnum tries to use this registry for all images. On Sat, Oct 24, 2020 at 10:03 PM feilong wrote: > Hi Ionut, > > We have already made the decision actually, we would like to support both > binary and container for kubelet. Spyros has started the work with > https://review.opendev.org/#/c/748141/ and we will backport it to > Victoria. At this stage, if you need to use latest k8s version, the simple > way is building your own hyperkube image which is not difficult. You can > just copy the folder https://github.com/kubernetes/kubernetes/tree/v1.18.8/cluster/images/hyperkube > to the version you want to build images and follow the process. Or, you > can use the hyperkube image built by 3rd parties, e.g. rancher. Please let > me know if you need more information. > > > On 23/10/20 8:05 pm, Ionut Biru wrote: > > Hi guys, > > I was hoping that in Victoria there will be a decision regarding this > topic. > Currently, if I use vanilla magnum, I'm stuck on 1.18.6 being the latest > version, even so 1.18.10 was released. > > We are kinda stuck on a version that maybe in the near future is pruned to > security issues. > > How do other operators are using magnum (vanilla) and keeping their > kubernetes up to date? 
> > Are operators using a forked version of > https://review.opendev.org/#/c/752254/ ? > > > -- > Ionut Biru - https://fleio.com > > -- > Cheers & Best regards, > Feilong Wang (王飞龙) > ------------------------------------------------------ > Senior Cloud Software Engineer > Tel: +64-48032246 > Email: flwang at catalyst.net.nz > Catalyst IT Limited > Level 6, Catalyst House, 150 Willis Street, Wellington > ------------------------------------------------------ > > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Mon Nov 2 08:47:23 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Mon, 02 Nov 2020 09:47:23 +0100 Subject: [nova] PTG Message-ID: Hi, Thank you all for the PTG participation! All our agreements and decisions are marked in the etherpad[1] with AGREED word. To avoid loosing this information I made a dump of the etherpad on Friday at the end of the PTG and attached to this mail Cheers, gibi [1] https://etherpad.opendev.org/p/nova-wallaby-ptg -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangerzonen at gmail.com Mon Nov 2 12:17:17 2020 From: dangerzonen at gmail.com (dangerzone ar) Date: Mon, 2 Nov 2020 20:17:17 +0800 Subject: [undercloud][train]Fatal error during train undercloud deployment In-Reply-To: References: Message-ID: I'm attaching a snippet of the problem for your reference. Thank you. On Mon, Nov 2, 2020 at 8:04 PM dangerzone ar wrote: > Hi all, I keep getting the same fatal error below during undercloud > deployment. My environment is VMware VM Centos7 undercloud (openstack > train) and refers to the official openstack guideline. > > TASK [tripleo_container_manage : Print failing containers] > ********************* > Monday 02 November 2020 19:55:07 +0800 (0:00:00.182) 0:17:50.677 > ******* > fatal: [undercloud]: FAILED! => changed=false > msg: 'Container(s) with bad ExitCode: [u''container-puppet-ironic'', > u''container-puppet-zaqar''], check logs in /var/log/containers/stdouts/' > > I even tried with clean different VM and all give me same error. > Appreciate some help please. Thank you. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: undercloud_error2.jpg Type: image/jpeg Size: 109223 bytes Desc: not available URL: From sean.mcginnis at gmx.com Mon Nov 2 14:53:44 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 2 Nov 2020 08:53:44 -0600 Subject: [all] X Release Naming Message-ID: <20201102145344.GA1505236@sm-workstation> Hello everyone, This is to notify everyone that we are now kicking off the release naming process for the 'X' OpenStack release. Starting with the last W naming, we had altered our traditional way of choosing a release name to allow any name that meets the basic criteria - basically that the name begins with the chosen letter, it is made up if the ISO basic Latin alphabet, and that it is 10 characters or less in length. Community proposed names are collected on the wiki page for the X Release Naming. This page also includes more details about the process and timing: https://wiki.openstack.org/wiki/Release_Naming/X_Proposals We welcome any names from the community. This should be an intersting release to see what everyone comes up with! Please add any ideas to the wiki page. 
-- Sean McGinnis From fungi at yuggoth.org Mon Nov 2 15:10:46 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 2 Nov 2020 15:10:46 +0000 Subject: Python 3.9 is here in Debian Sid In-Reply-To: <20201016135910.paltyva5ut2x5qcc@yuggoth.org> References: <12e2eefa-21c5-4b2e-67a9-662e04bd38c7@debian.org> <4fd89e88-d8ce-7faa-772f-0908a3387d5c@goirand.fr> <20201016135910.paltyva5ut2x5qcc@yuggoth.org> Message-ID: <20201102151046.jgkwy5sek3n3rmqx@yuggoth.org> On 2020-10-16 13:59:10 +0000 (+0000), Jeremy Stanley wrote: > On 2020-10-16 10:21:34 +0200 (+0200), Thomas Goirand wrote: > [...] > > I also think it'd be nice to have non-voting jobs detecting > > deprecated stuff. For example, a quick grep shows that a lot of > > projects are still using collections.Mapping instead of > > collections.abc.Mapping (which is to be removed in Python 3.10, > > according to the 3.9 release notes). Would there be a way to get > > our CI report these issues earlier? > > They're going to all explode, at least until PBR gets some more > changes merged and released to stop doing deprecated things. I've > been slowly working my way through testing simple PBR-using projects > with PYTHONWARNINGS=error (instead of =default::DeprecationWarning) > and fixing or noting the issues I encounter. Up until recently, a > number of its dependencies were also throwing deprecation warnings > under 3.9, but now I think we're down to just a couple of remaining > fixes pending. We didn't want to try to rush in a new PBR release > until Victoria was wrapped up, but now I think we can finish this > fairly soon. Just to follow up, at this point I think we've merged and released everything PBR needs for this, but now it's become apparent we're blocked on Setuptools behavior. In particular, any call into setuptools/command/build_py.py unconditionally tries to import setuptools.lib2to3_ex.Mixin2to3 even if the build is not going to use lib2to3, so this will automatically raise a SetuptoolsDeprecationWarning if run with PYTHONWARNINGS=error. I expect we're stuck until https://github.com/pypa/setuptools/issues/2086 gets resolved, so this is the next thing to work on if anyone has time (it may take me a while to get around to submitting a PR as I expect they're going to want a very thorough cleanup). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From zigo at debian.org Mon Nov 2 15:31:01 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 2 Nov 2020 16:31:01 +0100 Subject: [nova] python3.9 support In-Reply-To: <3476JQ.0LX9S5E3FLUD2@est.tech> References: <3476JQ.0LX9S5E3FLUD2@est.tech> Message-ID: <460f346c-5b40-227d-c705-5de9a7dc92ee@debian.org> On 11/2/20 2:07 PM, Balázs Gibizer wrote: > Hi, > > I run unit and functional tests (where applicable) for nova, > python-novaclient, os-vif, placement, osc-placement, os-traits, > os-resource-classes with python 3.9 and everything is green. > > I've proposed some tox.ini targets where it was needed [1][2][3] > > Cheers, > gibi > > [1]https://review.opendev.org/#/c/760884/ > [2]https://review.opendev.org/#/c/760890/ > [3]https://review.opendev.org/#/c/760912/ Hi Gibi, Thanks for this work, it's very much appreciated here! Do you know if there anything patch that would need backporting to Victoria that I may have missed? 
(by this, I mean backported in the Debian packages, it doesn't have to be in the stable branch of upstream OpenStack) Cheers, Thomas Goirand (zigo) From elod.illes at est.tech Mon Nov 2 15:36:48 2020 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Mon, 2 Nov 2020 16:36:48 +0100 Subject: [ptl][release][stable][EM] Extended Maintenance - Stein In-Reply-To: References: Message-ID: <9445d2df-4dd2-1b2e-de50-84db3bd141bd@est.tech> Hi, Reminder: the planned date for Stein transition to Extended Maintenance is next Wednesday. If you have planned to do a final release, then it is a good time to do now. Warning: Do not merge and release risky patches in a hurry! Remember that Stein will be open for further patches, the only thing is that there won't be further official, upstream release anymore. Thanks, Előd On 2020. 10. 15. 10:55, Előd Illés wrote: > Hi, > > As Victoria was released yesterday and we are in a less busy period, it > is a good opportunity to call your attention to the following: > > In less than a month Stein is planned to enter into Extended > Maintenance phase [1] (planned date: 2020-11-11). > > I have generated the list of *open* and *unreleased* changes in > *stable/stein* for the follows-policy tagged repositories [2]. These > lists could help the teams, who are planning to do a *final* release on > Stein before moving stable/stein branches to Extended Maintenance. Feel > free to edit and extend these lists to track your progress! > > * At the transition date the Release Team will tag the *latest* (Stein) >   releases of repositories with *stein-em* tag. > * After the transition stable/stein will be still open for bugfixes, >   but there won't be any official releases. > > NOTE: teams, please focus on wrapping up your libraries first if there > is any concern about the changes, in order to avoid broken releases! > > Thanks, > > Előd > > [1] https://releases.openstack.org > [2] https://etherpad.openstack.org/p/stein-final-release-before-em > > > From bcafarel at redhat.com Mon Nov 2 16:16:31 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 2 Nov 2020 17:16:31 +0100 Subject: [neutron][stable]Proposing Rodolfo Alonso Hernandez as a stable branches core reviewer In-Reply-To: <1766330.tdWV9SEqCh@p1> References: <1766330.tdWV9SEqCh@p1> Message-ID: On Thu, 29 Oct 2020 at 12:33, Slawek Kaplonski wrote: > Hi, > > I would like to propose Rodolfo Alonso Hernandez (ralonsoh) to be new > member > of the Neutron stable core team. > Rodolfo works in Neutron since very long time. He is core reviewer in > master > branch already. > During last few cycles he also proved his ability to help with stable > branches. He is proposing many backports from master to our stable > branches as > well as doing reviews of other backports. > He has knowledge about stable branches policies. > I think that he will be great addition to our (not too big) stable core > team. > > I will open this nomination open for a week to get feedback about it. > A big +1 from me too, Rodolfo is quite active in stable branches, and reviews all stable specific steps (change backportability, cherrry-pick ID, conflict lines, proper backport chain, potential related backports, …) -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ssbarnea at redhat.com Mon Nov 2 16:40:36 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Mon, 2 Nov 2020 16:40:36 +0000 Subject: [tripleo] please avoid creating ansible playbooks that call modules for trivial tasks In-Reply-To: References: <0bea7456-e413-1f05-ce91-8b1738c8777b@redhat.com> Message-ID: <5D235976-DD12-438C-8A64-4C4EE57BBA01@redhat.com> IMHO, roles (inside collections) do provide quite good abstraction layer as they allow us to change implementation without changing the API (the variables passed to the role). If the role uses an embedded module, external one or lots of tasks to achieve the same functionality it is an implementation details. At any point in time we can refactor the role internals and create new modules, swap them, .... without messing its consumption. I cannot say the same about playbooks, especially those that call modules directly. If they call roles, we should be fine. > On 2 Nov 2020, at 11:55, Jesse Pretorius wrote: > > On Thu, 2020-10-29 at 07:00 -0600, Alex Schultz wrote: >> On Thu, Oct 29, 2020 at 5:32 AM Bogdan Dobrelya < >> bdobreli at redhat.com >> > wrote: >>> >>> In today TripleO client development we invoke ansible playbooks instead >>> of Mistral workflows. I believe this may only be justified for >>> non-trivial things. Otherwise please just import python modules into the >>> client command actions without introducing additional moving parts in >>> the form of ansible modules (that import the same python modules as well). >>> > >>> >>> I see no point of adding a new playbook and a new module for that >>> trivial example. Those 4 packages could (and should) be as well >>> installed from the client caller code without any ansible involved in >>> the middle IMO. >> >> While I can agree to a certain extent, there's actually some good >> reasons to even move trivial bits into ansible. Personally I'm not >> certain the switch to using ansible under the covers from some cli >> actions is an improvement (questionable logging, error handling isn't >> great), there is a case for certain actions. As we discussed at the >> PTG, the overcloud image building process is one of those things that >> actually has to be executed on bare metal. If we wanted to continue to >> look at containerizing the cli, we need to be able to invoke this >> action from within the container but build on an external host. This >> is something that is trivial with the switch to an ansible playbook >> that isn't available when running under the pure python as it exists >> today. Container builds would be another example action that is >> required to run on a bare metal host. Additionally the movement of >> this invocation to an ansible module also allows the action to be >> moved into something like the undercloud installation as an optional >> action as part of the deployment itself. It's not exactly without >> merit in this case. >> >> I don't really care one way or another for this action, however I >> don't think it's as simple as saying "oh it's just a few lines of code >> so we shouldn't..." > > What it sounds like is that there's a need for documented guidelines. A lot of changes have been made as part of a learning process and we now know a lot more about what tasks are better suited to be done directly in the client vs via ansible roles vs via ansible modules. If we can document these best practises then we can guide any new changes according to them. > > It seems to me that we need to consider: > > 1. Requirements - what needs to get done > 2. 
Constraints - does the action need something special like access to devices or kernel API's > 3. Testability - something in python or an ansible module is unit testable, whereas an ansible role is more difficult to properly test > 4. Scalability - complex ansible tasks/vars scale far worse that ansible modules > 5. Maintainability - many factors are involved here, but sometimes efficiency should be sacrificed for simplicity > From marios at redhat.com Mon Nov 2 17:13:30 2020 From: marios at redhat.com (Marios Andreou) Date: Mon, 2 Nov 2020 19:13:30 +0200 Subject: [tripleo] stable/victoria for tripleo repos In-Reply-To: References: Message-ID: o/ tripleos update on stable/victoria for the tripleo things. You may have noticed that we have a stable/victoria branch in our repos after [1] merged. The ci squad discussed this again today and we think we are good to go on merging to stable/victoria. I see there are already a few patches posted - note that we need to merge all of the .gitreview patches [2] and each repo has one, before patches will appear on stable/victoria in gerrit (or rebase your stuff onto the relevant review). For CI there is still some ongoing work - for example victoria upgrade jobs [3] - as well as third party/ovb branchful jobs. You can see some tests at [4][5]. Please shout if you see something wrong with the stable/victoria check or gate jobs - as usual or is your first port of call in freenode #tripleo #oooq. The wallaby series is also now a thing in launchpad - as usual we have wallaby-1 wallaby-2 wallaby-3 and wallaby-rc1 [6] - I used [7] as reference for the milestone dates. The victoria series is marked "current stable release" so wallaby is now the active series. I'll run the scripts tomorrow (once I find them ;)) to move all the bugs currently targeted to victoria into wallaby-1. please reach out for any clarifications/questions/suggestions/corrections! regards, marios [1] https://review.opendev.org/#/c/760571/ [2] https://review.opendev.org/#/c/760598/ [3] https://review.opendev.org/760323 [4] https://review.opendev.org/#/c/760359/ [5] https://review.opendev.org/#/c/760360/ [6] https://launchpad.net/tripleo/wallaby [7] https://releases.openstack.org/wallaby/schedule.html On Wed, Oct 28, 2020 at 1:57 PM Marios Andreou wrote: > > o/ > > So https://review.opendev.org/#/c/759449/ merged and we have > stable/victoria for > https://github.com/openstack/python-tripleoclient/tree/stable/victoria > and https://github.com/openstack/tripleo-common/tree/stable/victoria. We > have tests running e.g. https://review.opendev.org/#/c/760106/ . > > REMINDER PLEASE AVOID MERGING to stable/victoria tripleoclient/common for > now until > we are confident that our jobs are doing the right thing! > > Per the plan outlined in the message I'm replying to, please let's avoid > merging anything to client/common victoria until we have branched > stable/victoria for all the repos (implying we are happy with ci). > > Thanks in advance for your patience - without adequate ci coverage it will > be very easy to unintentionally break another part of the code and there > are many parts in tripleo ;) > > > On Fri, Oct 23, 2020 at 6:44 PM Marios Andreou wrote: > >> Hello all, >> >> I got some pings about tripleo stable/victoria so here is an update. >> >> The tripleo-ci team is busy preparing and we are almost ready. Basically >> we've blocked branching until we have adequate CI coverage. 
This means >> check & gate jobs but also promotion jobs so that check & gate aren't >> running with stale content. >> >> tl;dr We are aiming to branch victoria tripleoclient & common today and >> everything else next week - with more details below but please note: >> >> PLEASE AVOID MERGING to the stable/victoria tripleoclient/common that >> will appear after [4] until we are confident that our ci coverage is >> working well. >> >> Status: >> >> Most of our 'core' (standalone/scenarios/undercloud etc) check/gate jobs >> match on branch like [1] so we expect those to run OK once there is a >> stable/victoria available. There is ongoing work for branchful, upgrade, >> 3rd party and ovb jobs but we will not block branching on those. >> >> The victoria integration pipeline [2] (which produces current-tripleo) is >> in good shape. It was blocked on a nit in the component pipeline but it is >> resolved with [3]. Once we have a victoria promotion then we should be all >> clear to branch stable/victoria. >> >> Actions: >> >> A). today: make stable/victoria for python-tripleoclient & >> tripleo-common. The change at [4] will do this once merged. We will use >> this to see our check/gate jobs run against victoria & fix issues. >> >> B). next week: once we are happy with A). AND we've had a promotion, we >> release and make stable/victoria for everything else. >> >> Hopefully B). will happen end of next week but that depends on how well >> the above goes (plus inevitable delays cos PTG), >> >> I'll repeat this cos it's important: >> >> PLEASE AVOID MERGING to stable/victoria tripleoclient/common for now >> until we are confident that our jobs are doing the right thing! >> >> As always grateful for a sanity check or any thoughts about any of the >> above - I'm pretty new to this (weshay has been a great help so far thank >> you weshay++), >> >> thanks, >> >> marios >> >> [1] >> https://opendev.org/openstack/tripleo-ci/src/commit/d5514028452f9d427949f5a8fac26b48bd0d7c03/zuul.d/standalone-jobs.yaml#L795 >> [2] >> https://review.rdoproject.org/zuul/builds?pipeline=openstack-periodic-integration-stable1 >> [3] https://review.rdoproject.org/r/#/c/30654/ >> [4] https://review.opendev.org/#/c/759449/ >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From melwittt at gmail.com Mon Nov 2 17:36:19 2020 From: melwittt at gmail.com (melanie witt) Date: Mon, 2 Nov 2020 09:36:19 -0800 Subject: [keystone][policy] user read-only role not working In-Reply-To: <17587cfe3a2.c5a6e78920408.1786542316064610506@zohocorp.com> References: <174c3a06897.bdfa9ca56831.6510718612076837121@zohocorp.com> <2b1b796a-9136-3066-0d1b-553a41065ca5@gmail.com> <17587cfe3a2.c5a6e78920408.1786542316064610506@zohocorp.com> Message-ID: <7a1ca866-318b-8d95-036b-5329b4207a2b@gmail.com> Adding back the mailing list +openstack-discuss@ On 11/1/20 23:15, its-openstack at zohocorp.com wrote: > Dear Openstack, > >       we are implementing this reader role through kolla-ansible. Need > help in understanding the policy file for adding custom role both in > nova and keystone. 
You can learn how to use the policy file directly by reading the docs I linked earlier: * https://docs.openstack.org/security-guide/identity/policies.html * https://docs.openstack.org/oslo.policy/train/admin/policy-json-file.html And then the APIs you can control access to in nova are shown in this sample file: * https://docs.openstack.org/nova/train/configuration/sample-policy.html APIs in keystone are shown in this sample file: * https://docs.openstack.org/keystone/train/configuration/samples/policy-yaml.html I'm afraid I don't know anything about how to adjust the policy file through kolla-ansible though. Cheers, -melanie > ---- On Fri, 02 Oct 2020 02:12:39 +0530 *melanie witt > * wrote ---- > > On 9/25/20 07:25, Ben Nemec wrote: > > I don't believe that the reader role was respected by most > projects in > > Train. Moving every project to support it is still a work in > progress. > > This is true and for nova, we have added support for the reader role > beginning in the Ussuri release as part of this spec work: > > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/policy-defaults-refresh.html > > > > Documentation: > > https://docs.openstack.org/nova/latest/configuration/policy-concepts.html > > > > To accomplish a read-only user in the Train release for nova, you can > DIY to a limited extent by creating custom roles and adjusting your > policy.json file [1][2] accordingly. There are separate policies for > GET/POST/PUT/DELETE in many cases so if you were to create a role > ReadWriteUser you could specify that for POST/PUT/DELETE APIs and > create > another role ReadOnlyUser and specify that for GET APIs. > > Hope this helps, > -melanie > > [1] > https://docs.openstack.org/nova/train/configuration/sample-policy.html > > > [2] https://docs.openstack.org/security-guide/identity/policies.html > > > > On 9/24/20 11:58 PM, its-openstack at zohocorp.com > wrote: > >> Dear Openstack, > >> > >> We have deployed openstack train branch. > >> > >> This mail is in regards to the default role in openstack. we are > >> trying to create a read-only user i.e, the said user can only > view in > >> the web portal(horizon)/using cli commands. > >> the user cannot create an instance or delete an instance , the same > >> with any resource. > >> > >> we created a user in a project test with reader role, but in > >> horizon/cli able to create and delete instance and similar to other > >> access also > >> if you so kindly help us fix this issue would be grateful. > >> > >> the commands used for creation > >> > >> > >> > >> $ openstack user create --domain default --password-prompt > >> test-reader at test.com > > > >> $ openstack role add --project test --user test-reader at test.com > > >> > reader > >> > >> > >> > >> Thanks and Regards > >> sysadmin > >> > >> > >> > >> > >> > > > > > From jesse at odyssey4.me Mon Nov 2 18:24:39 2020 From: jesse at odyssey4.me (Jesse Pretorius) Date: Mon, 2 Nov 2020 18:24:39 +0000 Subject: [tripleo] please avoid creating ansible playbooks that call modules for trivial tasks In-Reply-To: <5D235976-DD12-438C-8A64-4C4EE57BBA01@redhat.com> References: <0bea7456-e413-1f05-ce91-8b1738c8777b@redhat.com> <5D235976-DD12-438C-8A64-4C4EE57BBA01@redhat.com> Message-ID: <32567c8c6d46e8e6e5b8860243e15e405da39301.camel@odyssey4.me> On Mon, 2020-11-02 at 16:40 +0000, Sorin Sbarnea wrote: IMHO, roles (inside collections) do provide quite good abstraction layer as they allow us to change implementation without changing the API (the variables passed to the role). 
If the role uses an embedded module, external one or lots of tasks to achieve the same functionality it is an implementation details. At any point in time we can refactor the role internals and create new modules, swap them, .... without messing its consumption. I cannot say the same about playbooks, especially those that call modules directly. If they call roles, we should be fine. Good point. That reminds me - to really make roles work more like an API, we should also probably implement tests which determine whether the API is changing in an acceptable way. We'd have to agree what 'acceptable' looks like - at the very least, I would imagine that we should be enforcing the same inputs & results with changes being additive unless there's sufficient deprecation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ssbarnea at redhat.com Mon Nov 2 18:29:18 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Mon, 2 Nov 2020 18:29:18 +0000 Subject: [tripleo] please avoid creating ansible playbooks that call modules for trivial tasks In-Reply-To: <32567c8c6d46e8e6e5b8860243e15e405da39301.camel@odyssey4.me> References: <0bea7456-e413-1f05-ce91-8b1738c8777b@redhat.com> <5D235976-DD12-438C-8A64-4C4EE57BBA01@redhat.com> <32567c8c6d46e8e6e5b8860243e15e405da39301.camel@odyssey4.me> Message-ID: I would give zuul-jobs roles a good example of big repository with well defined policies about how to write roles, including requirements for variables names and ways to document arguments. There is already work done towards adding support in ansible for declaring in/out variables for roles. They will likely look similar to how module arguments are now documented and reside inside "meta" folder. Still, until then use of README inside each role looks like a decent solution. > On 2 Nov 2020, at 18:24, Jesse Pretorius wrote: > > t reminds me - to really make roles work more like an API, we should also probably implement tests which determine whether the API is changing in an acceptable way. We'd have to agree what 'acceptable' looks like - at the very least, I would imagine that we should be enforcing the From peter.matulis at canonical.com Mon Nov 2 19:05:02 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 2 Nov 2020 14:05:02 -0500 Subject: [charms] OpenStack Charms 20.10 release is now available Message-ID: The 20.10 release of the OpenStack Charms is now available. This release brings several new features to the existing OpenStack Charms deployments for Queens, Rocky, Stein, Train, Ussuri, Victoria and many stable combinations of Ubuntu + OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2010.html == Highlights == * OpenStack Victoria OpenStack Victoria is now supported on Ubuntu 20.10 (natively) and Ubuntu 20.04 LTS (via UCA). * Gnocchi and external root CA for S3 The gnocchi charm now supports the configuration of an external root CA in the gnocchi units. This is useful when an encrypted S3 (storage backend) endpoint uses certificates that are not managed by Vault. * Arista - Supported releases The neutron-api-plugin-arista charm now supports all OpenStack releases starting with Queens. * Neutron Gateway and additional data on ports and bridges The neutron-gateway charm now marks the openvswitch ports and bridges it creates as managed by the charm. This allows for less riskier cleanup mechanisms to be implemented. 
* cinder charm: Allow specifying the default volume type The cinder charm can now set the default volume type. This is to support scenarios where multiple storage backends are being used with Cinder. * Action based migration from Neutron ML2+OVS to ML2+OVN New actions have been introduced to enable migration of a Neutron ML2+OVS based deployment to ML2+OVN. * New charm: ceph-iscsi The ceph-iscsi charm has been promoted to supported status. This charm implements the Ceph iSCSI gateway, which provides iSCSI targets backed by a Ceph cluster. == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on Freenode. == Thank you == Lots of thanks to the below 30 charm contributors who squashed 45 bugs, enabled support for a new release of OpenStack, improved documentation, and added exciting new functionality! Alex Kavanagh Andrea Ieri Andreas Jaeger Aurelien Lourot Chris MacNaughton Corey Bryant Dan Ardelean David Ames Dmitrii Shcherbakov Edin Sarajlic Edward Hope-Morley Felipe Reyes Frode Nordahl Gabriel Adrian Hemanth Nakkina James Page Jorge Niedbalski Liam Young Martin Kalcok Martin Kopec Marton Kiss Michael Quiniola Nobuto Murata Pedro Guimaraes Peter Matulis Ponnuvel Palaniyappan Rodrigo Barbieri Xav Paice camille.rodriguez zhhuabj -- OpenStack Charms Team -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Nov 2 19:19:06 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 2 Nov 2020 19:19:06 +0000 Subject: [tripleo] please avoid creating ansible playbooks that call modules for trivial tasks In-Reply-To: References: <0bea7456-e413-1f05-ce91-8b1738c8777b@redhat.com> <5D235976-DD12-438C-8A64-4C4EE57BBA01@redhat.com> <32567c8c6d46e8e6e5b8860243e15e405da39301.camel@odyssey4.me> Message-ID: <20201102191906.zeys7mjelns52pvs@yuggoth.org> On 2020-11-02 18:29:18 +0000 (+0000), Sorin Sbarnea wrote: [...] > I would give zuul-jobs roles a good example of big repository with > well defined policies about how to write roles, including > requirements for variables names and ways to document arguments. [...] > use of README inside each role looks like a decent solution. [...] Also, with the help of a Sphinx extension, it allows us to generate fancy role documentation like this: https://zuul-ci.org/docs/zuul-jobs/container-roles.html I'd imagine once standardized metadata comes along, we'll support building documentation from that too/instead. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From helena at openstack.org Mon Nov 2 21:42:07 2020 From: helena at openstack.org (helena at openstack.org) Date: Mon, 2 Nov 2020 16:42:07 -0500 (EST) Subject: [ptl] Victoria Release Community Meeting In-Reply-To: <20201021082822.5cocsx4zhhxs7n3q@p1.internet.domowy> References: <1602712314.02751333@apps.rackspace.com> <1603213221.77557521@apps.rackspace.com> <20201021082822.5cocsx4zhhxs7n3q@p1.internet.domowy> Message-ID: <1604353327.602612275@apps.rackspace.com> An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Project Updates Template (1).pptx
Type: application/octet-stream
Size: 791794 bytes
Desc: not available
URL: 

From skaplons at redhat.com  Mon Nov 2 21:56:59 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Mon, 02 Nov 2020 22:56:59 +0100
Subject: [Neutron] PTG summary
Message-ID: <6315331.olFIrCdlcp@p1>

Hi,

Below is my summary of the Neutron team sessions which we had during the virtual PTG last week. The etherpad with notes from the discussions can be found at [1].

## Retrospective of the Victoria cycle

Among the good things in the _Victoria_ cycle the team pointed out:
* completed 8 blueprints, including _Metadata over IPv6_ [2],
* improved feature parity in the OVN driver,
* good review velocity.

Among the not so good things we mentioned:
* CI instability - the average number of rechecks needed to merge a patch in the last year can be found at [3],
* too much "Red Hat" in the Neutron team - the percentage of reviews (and patches) done by people from Red Hat has been constantly increasing over the last few cycles. As the main reason for that we pointed out that it is hard to change companies' way of thinking that dedicating a developer to the upstream project means losing a developer downstream,
* the migration of our CI to Zuul v3 took us a lot of time and effort - but the good thing is that we accomplished it in the Victoria cycle :)

During that session we also agreed on some action items for the Wallaby cycle:
* keep talking about OVN details - Miguel Lavalle and I will work on some way to deliver talks about the Neutron OVN backend and OVN internals to the community during the next months. The idea is to maybe propose a talk every 6 weeks or so. This may make OVN driver development more diverse and let operators think about migration to the OVN backend.

## Review of the existing meetings

We reviewed the list of our existing upstream meetings and discussed ideas on how to increase the number of attendees at the meetings. We decided to:
* drop the neutron-qos meeting as it's not needed anymore,
* advertise the meetings and their agendas more on the OpenStack mailing list - I will send reminders with links to the agenda before every meeting,
* together with Lajos Katona, we will give some introduction to the debugging of CI issues in Neutron.

## Support for old versions

Bernard started a discussion about support for the old releases in Neutron and the Neutron stadium projects.
For Neutron we decided to mark the __Ocata__ branch as unmaintained already, as its gate is already broken.
The __Pike__ and newer branches we will keep in the __EM__ phase, as there is still some community interest in keeping those branches open.
For the stadium projects we decided to do it similarly to what we did while looking for new maintainers for the projects. We will send a "call for maintainers" email for such stable branches. If there are no volunteers to step in, fix gate issues and keep those branches healthy, we will mark them as __unmaintained__ and later as __End of Life__ (EOL).
CI is currently broken in these projects:
* networking-sfc,
* networking-bgpvpn/bagpipe,
* neutron-fwaas

And those are candidates to be marked as unmaintained if there are no volunteers to fix them.
Bernard Cafarelli volunteered to work on that in the next days/weeks.

## Healthcheck API endpoint

We discussed how our healthcheck API should work.
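Just to illustrate the kind of check being discussed here - a minimal sketch only, assuming SQLAlchemy for the DB side and Kombu for RabbitMQ; this is not an agreed design nor actual Neutron code:

    # Illustrative only: report healthy when the worker can reach the DB and MQ.
    import sqlalchemy
    from kombu import Connection

    def is_healthy(db_url, mq_url):
        try:
            # Cheap DB round-trip; any failure means the worker is not usable.
            engine = sqlalchemy.create_engine(db_url)
            with engine.connect() as conn:
                conn.execute(sqlalchemy.text("SELECT 1"))
            # Verify that an AMQP (RabbitMQ) connection can be established.
            with Connection(mq_url) as mq:
                mq.ensure_connection(max_retries=1)
            return True
        except Exception:
            return False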
During the discussion we decided that:
* the healthcheck result should __NOT__ rely on the agents' status; it should rely on the worker's ability to connect to the DB and MQ (RabbitMQ),
* Lajos will ask the community (API experts) for guidance on how it should work at the whole-OpenStack level,
* as a reference implementation we can check e.g. Octavia [4] and Keystone [5], which have already implemented it.

## Squash of the DB migration script

Rodolfo explained to us the benefits of squashing the DB migration scripts from the old versions:
* deployment is faster: we don't need to create/delete tables or create+update other ones - the win is small, possibly in the magnitude of 5s per job,
* the DB definition is centralized in one place, not in the original definition plus further migrations - that is the most important reason why we really should do it,
* UTs are faster: removal of some older checks.

The problem is that we need to do this carefully and be really verbose about it, as such changes may break stadium projects or 3rd party projects which do DB migrations too.
To minimize potential breakage, we will announce such changes on the OpenStack discuss mailing list.
Rodolfo volunteered to propose the squash up to the Liberty release in the W cycle.
Together with this squash we will also document the process, so in the next cycles we should be able to do squashes for newer releases in an easier way.
Lajos volunteered to help with fixing the Neutron stadium projects if that is needed.

## Switch to the new engine facade

We were discussing how to move on and finally finish the old blueprint [6]. We decided that together with Rodolfo we will try out how the new engine facade works without using transaction guards in the code. Hopefully that will let us move on with this. If not, we will try to reach out to some DB experts for help.

## Change from rootwrap to privsep

This is now a community goal for the Wallaby cycle, so we need to focus on it to finally accomplish that transition. The transition may speed up our code and make it a bit more secure.
Rodolfo explained to us multiple possible migration strategies:
* move to native code, e.g.:
  * replace ps with Python psutil, not using rootwrap or privsep,
  * replace ip commands with pyroute2, under a privsep context (elevated permissions needed),
* directly translate rootwrap to privsep, executing the same shell command but under a privsep context.

To move on with this I will create a list of the pieces of code which need to be transitioned in the Neutron repo and in the stadium projects.
The current todo items can be found on the storyboard [7].

## Migration to the NFtables

During this session we were discussing potential strategies for migrating from the old iptables to the new nftables. We need to start planning that work, as major Linux distributions (e.g. RHEL) are planning to deprecate iptables in their next releases.
It seems that all major distros (Ubuntu, CentOS, OpenSuSE) already support nftables.
We decided that in the Wallaby cycle we will propose a new _Manager_ class and add a config option which will allow people to test the new solution.
In the next cycles we will continue working on it to make it stable and to make the upgrade and migration path for users as easy as possible.
There is already a blueprint created to track progress on that topic [8].
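To make the rootwrap -> privsep strategies above a bit more concrete (and the nftables work will need elevated privileges in exactly the same way), here is a minimal, illustrative sketch of the "pyroute2 under a privsep context" pattern - the names are made up and this is not the actual Neutron code:

    # Rough sketch only: one privsep context plus one pyroute2 call under it.
    from oslo_privsep import capabilities as caps
    from oslo_privsep import priv_context
    from pyroute2 import IPRoute

    # The privileged helper process gets only the capability it needs.
    default = priv_context.PrivContext(
        "neutron_sketch",
        cfg_section="privsep",
        pypath=__name__ + ".default",
        capabilities=[caps.CAP_NET_ADMIN],
    )

    @default.entrypoint
    def set_link_up(ifname):
        # Runs in the privileged daemon; no "ip link set ... up" subprocess.
        with IPRoute() as ipr:
            idx = ipr.link_lookup(ifname=ifname)[0]
            ipr.link("set", index=idx, state="up")

The same pattern would apply to whatever nftables backend we end up with; only the body of the privileged function changes.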
We need to migrate:
* the Linuxbridge firewall and the iptables OVS hybrid firewall,
* L3 code (legacy router, DVR router, conntrack, port forwarding),
* iptables metering,
* the metadata proxy,
* the DHCP agent, for when it does metadata for isolated networks and namespace creation,
* neutron-vpnaas - the ipsec code,
* and maybe something else which we haven't found yet.

## Nova-Neutron cross project session

We had a very interesting discussion with the Nova team. We were discussing topics like:
* NUMA affinity in the neutron port,
* vhost-vdpa support,
* default __vnic_type__/__port flavour__.

Notes from that discussion are available in the nova's etherpad [9].

## Neutron scaling issues

At this session we were discussing issues mentioned by operators at the Forum sessions a week before the PTG. There were a couple of issues mentioned there:
* problems with retries of DB operations - we should migrate all our code to the oslo.db retries mechanism - a new blueprint [10] was created to track progress on that one (a small illustration of that mechanism is included near the end of this summary),
* problems with maintenance of the agents, like e.g. the DHCP or L3 agents - many of those issues are caused by how our agents are designed, and to really fix them we would need very deep and huge changes. But many of those issues can also be solved by the __ovn__ backend - **and that is the strategic direction in which neutron wants to go in the next cycles**,
* Miguel Lavalle and I volunteered to do some profiling of the agents to see where we are losing most of the time - maybe we will be able to find some _low hanging fruits_ which can be fixed to improve the situation at least a bit,
* there is a similar problem with the neutron-ovs-agent, especially for security groups which use _remote group id_ as a reference - here we also need some volunteers who will try to optimize that.

## CI (in)stability

On Thursday we were discussing how to improve our very poor CI. Finally we decided to:
* not recheck patches without giving the reason for the recheck in the comment - there should be an already reported bug which is linked in the _recheck_ comment, or the user should open a new one and link to it. In case the problem was e.g. related to infra, a simple comment like _infra issue_ will also be enough,
* lower the number of existing jobs with some changes like:
  * move the *-neutron-lib-master and *-ovs-master jobs to the experimental and periodic queues so they don't run on every patch,
  * I will switch the _neutron-tempest-plugin-api_ job to be deployed with uwsgi so we can drop the _neutron-tempest-with-uwsgi_ job,
  * consolidate the _neutron-tempest-plugin-scenario-linuxbridge_ and _neutron-tempest-linuxbridge_ jobs,
  * consolidate the _neutron-tempest-plugin-scenario-iptables_hybrid_ and _neutron-tempest-iptables_hybrid_ jobs.

Later we also discussed how to run or skip tests which can only be run when some specific feature is available in the cloud (e.g. _Metadata over IPv6_). After some discussion we decided to add a new config option with the list of enabled features. It will be very similar to the existing _api_extensions_ option. Lajos volunteered to work on that.

As the last CI related topic we discussed testing DVR in our CI. Oleg Bondarev volunteered to check and try to fix the broken _neutron-tempest-plugin-dvr-multinode-scenario_ job.

## Flow based DHCP

Liu Yulong raised a topic about a new way of providing a fully distributed DHCP service, instead of using the _DHCP agent_ on the nodes - the RFE is proposed at [11]. His proposal of doing Open Flow based DHCP (similar to what e.g. ovn-controller is doing) is described in [12].
It could be implemented as an L2 agent extension and enabled by operators in the config when they need it.
As a next step Liu will propose a spec with the details of this solution and we will continue the discussion about it in the spec's review.

## Routed provider networks limited to one host

As a last topic on Thursday we briefly talked about the old RFE [13]. Miguel Lavalle told us that his company, Verizon Media, is interested in working on this RFE in the next cycles. This also involves some work on Nova's side, which was already started by Sylvain Bauza. Miguel will sync with Sylvain on that RFE.

## L3 feature improvements

On Friday we were discussing some potential improvements in the L3 area. Lajos and Bence showed us some features which their company is interested in and on which they plan to work. Those are things like:
* support for Bidirectional Forwarding Detection,
* some additional API to set additional router parameters like:
  * ECMP max path,
  * ECMP hash algorithm,
* a --provider-allocation-pool parameter in the subnets - in some specific cases it may help to use IPs from such a _special_ pool for some infrastructure needs; more details about that will come in the RFE in the future.

For now all the improvements described above are in a very early planning phase, but Bence will sync with Liu and Liu will dedicate some time to discuss progress on them during the __L3 subteam meetings__.

## Leveraging routing-on-the-host in Neutron in our next-gen clusters

As a last topic on Friday we were discussing potential solutions for _L3 on the host_ in Neutron. The idea here is very similar to what e.g. the __Calico plugin__ is doing currently.
More details about the potential solutions are described in the etherpad [14].
During the discussion Dawid Deja from OVH told us that OVH is also using a very similar, downstream-only solution.
The conclusion of that discussion was that we may have most of the needed code already in Neutron and some stadium projects, so as a first step people who are interested in that topic, like Jan Gutter, Miguel and Dawid, will work on some deployment guide for such a use case.

## Team photo

During the PTG we also made team photos which You can find at [15].
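And, as promised in the scaling notes above, here is a tiny, purely illustrative sketch of the oslo.db retry decorator we want to converge on - the function name and its arguments are made up for the example:

    # Sketch: let oslo.db retry transient deadlocks/disconnects instead of
    # hand-rolled retry loops scattered around the code.
    from oslo_db import api as oslo_db_api

    @oslo_db_api.wrap_db_retry(max_retries=5,
                               retry_on_deadlock=True,
                               retry_on_disconnect=True,
                               inc_retry_interval=True)
    def update_port_status(context, port_id, status):
        # The actual DB write would go here; deadlock/disconnect errors raised
        # from it are retried with an increasing interval.
        pass

The point is to rely on oslo.db's classification of transient DB errors rather than on ad-hoc retry code in each driver.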
[1] https://etherpad.opendev.org/p/neutron-wallaby-ptg [2] https://blueprints.launchpad.net/neutron/+spec/metadata-over-ipv6 [3] https://ibb.co/12sB9N9 [4] https://opendev.org/openstack/octavia/src/branch/master/octavia/api/ healthcheck/healthcheck_plugins.py [5] https://docs.openstack.org/keystone/victoria/admin/health-check-middleware.html [6] https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch [7] https://storyboard.openstack.org/#!/story/2007686 [8] https://blueprints.launchpad.net/neutron/+spec/nftables-migration [9] https://etherpad.opendev.org/p/nova-wallaby-ptg [10] https://blueprints.launchpad.net/neutron/+spec/oslo-db-retries [11] https://bugs.launchpad.net/neutron/+bug/1900934 [12] https://github.com/gotostack/shanghai_ptg/blob/master/ shanghai_neutron_ptg_topics_liuyulong.pdf [13] https://bugs.launchpad.net/neutron/+bug/1764738 [14] https://etherpad.opendev.org/p/neutron-routing-on-the-host [15] http://kaplonski.pl/files/Neutron_virtual_PTG_October_2020.tar.gz -- Slawek Kaplonski Principal Software Engineer Red Hat From zigo at debian.org Mon Nov 2 22:59:58 2020 From: zigo at debian.org (Thomas Goirand) Date: Mon, 2 Nov 2020 23:59:58 +0100 Subject: [Neutron] PTG summary In-Reply-To: <6315331.olFIrCdlcp@p1> References: <6315331.olFIrCdlcp@p1> Message-ID: <16c52015-e038-317b-27b7-4831987efa1b@debian.org> Hi Slawek, Thanks a lot for the summary, that's very useful. On 11/2/20 10:56 PM, Slawek Kaplonski wrote: > * replace ip commands with pyroute2, under a privsep context (elevated > permissions needed) Please, please, please, do this, and give it some high priority. Spawning thousands of times the ip command simply doesn't scale. > ## Migration to the NFtables > During this session we were discussing potential strategies on how to migrate > from the old iptables to the new nftables. We need to start planning that work > as it major Linux distributions (e.g. RHEL) are planning to deprecate iptables > in next releases. Did you know that Debian uses nftables by default since Buster, and that one must set iptables-legacy as alternative, otherwise Neutron becomes mad and fails applying firewall rules? I'm not sure about Bullseye, but maybe there, iptables-legacy will even be gone?!? > ## Leveraging routing-on-the-host in Neutron in our next-gen clusters > > As a last topic on Friday we were discussing potential solutions of the _L3 on > the host_ in the Neutron. The idea here is very similar to what e.g. __Calico > plugin__ is doing currently. > More details about potential solutions are described in the etherpad [14]. > During the discussion Dawid Deja from OVH told us that OVH is also using very > similar, downstream only solution. > Conclusion of that discussion was that we may have most of the needed code > already in Neutron and some stadium projects so as a first step people who are > interested in that topic, like Jan Gutter, Miguel and Dawid will work on some > deployment guide for such use case. It'd be great if people were sharing code for this. I've seen at least 3 or 4 companies doing it, none sharing any bits... :/ How well is the Calico plugin working for this? Do we know? Has anyone tried it in production? Does it scale well? Cheers, Thomas Goirand (zigo) From changzhi at cn.ibm.com Tue Nov 3 05:51:06 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Tue, 3 Nov 2020 05:51:06 +0000 Subject: [nova] Why nova needs password-less SSH to do live migraiton? Message-ID: An HTML attachment was scrubbed... 
URL: From zigo at debian.org Tue Nov 3 07:48:39 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 3 Nov 2020 08:48:39 +0100 Subject: [nova] Why nova needs password-less SSH to do live migraiton? In-Reply-To: References: Message-ID: <2462c573-0a7a-8a27-b637-aadaba911580@debian.org> On 11/3/20 6:51 AM, Zhi CZ Chang wrote: > Hi, all >   > In the nova live migration doc[1], there is some description of libvirt > configuration: > " > Enable password-less SSH so that root on one compute host can log on to > any other compute host without providing a password. > The |libvirtd| daemon, which runs as root, uses the SSH protocol to copy > the instance to the destination and can’t know the passwords of all > compute hosts. > " > According to the description, I understand that the libvirtd daemon runs > as the root user for remote copy the instance to the destination. >   > My question is, why make the libvirtd daemon runs as the "root" user for > copy instance rather than other users, like the "nova" user? >   >   > Thanks > Zhi Chang Hi, What's needed is password-less (ie: key authentication) under the nova user, not root. What I did was having the ssh host keys signed, so that nodes can authenticate with each other in a secure way. I strongly recommend doing that, instead of blindly trusting ssh keys, which could potentially mean someone could be in the middle. Cheers, Thomas Goirand (zigo) From changzhi at cn.ibm.com Tue Nov 3 08:18:14 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Tue, 3 Nov 2020 08:18:14 +0000 Subject: [nova] Why nova needs password-less SSH to do live migraiton? In-Reply-To: <2462c573-0a7a-8a27-b637-aadaba911580@debian.org> References: <2462c573-0a7a-8a27-b637-aadaba911580@debian.org>, Message-ID: An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Tue Nov 3 10:23:14 2020 From: bence.romsics at gmail.com (Bence Romsics) Date: Tue, 3 Nov 2020 11:23:14 +0100 Subject: [neutron] bug deputy report for week of 2020-10-26 Message-ID: Hi Team, We had the following new bugs on the week of the PTG. Sorry for sending the report a bit late. Critical: * https://bugs.launchpad.net/neutron/+bug/1901534 Neutron fail to create networks because of MTU value changes proposed: https://review.opendev.org/#/q/topic:bug/1901534 Medium: * https://bugs.launchpad.net/neutron/+bug/1901527 Agent API gets broken if OVN DB is upgraded to the one introducing chassis_private change series proposed: https://review.opendev.org/#/q/topic:bug/1901527 * https://bugs.launchpad.net/neutron/+bug/1901992 Original default route in DVR SNAT namespace is not recovered after creating and deleting /0 extraroute The first fix proposed won't work since it would have been an API change. No fix proposed for the re-stated problem (bug comments #4 and #7) at the moment. * https://bugs.launchpad.net/neutron/+bug/1902512 neutron-ovn-tripleo-ci-centos-8-containers-multinode fails on private networ creation (mtu size) unassigned Low: * https://bugs.launchpad.net/neutron/+bug/1901936 [OVN Octavia Provider] OVN provider loadbalancer failover should fail as unsupported fix proposed: https://review.opendev.org/760209 Incomplete: * https://bugs.launchpad.net/neutron/+bug/1902211 Router State standby on all l3 agent when create Looks vpnaas related. The reporter is asking for help in debugging a lock that's never released. Not reproducible at will at the moment. 
* https://bugs.launchpad.net/neutron/+bug/1901707 race condition on port binding vs instance being resumed for live-migrations Broken out of https://bugs.launchpad.net/neutron/+bug/1815989. * https://bugs.launchpad.net/neutron/+bug/1463631 60_nova/resources.sh:106:ping_check_public fails intermittently Sporadic re-appearance of an ages old bug, with hardly enough information to reproduce at will. Cheers, Bence From zigo at debian.org Tue Nov 3 10:26:27 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 3 Nov 2020 11:26:27 +0100 Subject: [nova] Why nova needs password-less SSH to do live migraiton? In-Reply-To: References: <2462c573-0a7a-8a27-b637-aadaba911580@debian.org> Message-ID: <9fb653f8-026d-d1de-8be6-6e7e39fca209@debian.org> On 11/3/20 9:18 AM, Zhi CZ Chang wrote: > Hi, Thomas >   > Thanks for your reply. >   > In your environment, you use the "root" user for authenticating with > each other compute node, rather than the "nova" user, right? > If so, why use the "root" user rather than the "nova" user then > privilege the root permission to the "nova" user? >   > Thanks > Zhi Chang Hi, No, the username is "nova", not "root". Thomas Goirand (zigo) P.S: Please don't CC me, I'm registered to the list. From changzhi at cn.ibm.com Tue Nov 3 11:15:45 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Tue, 3 Nov 2020 11:15:45 +0000 Subject: [nova] Why nova needs password-less SSH to do live migraiton? In-Reply-To: <9fb653f8-026d-d1de-8be6-6e7e39fca209@debian.org> References: <9fb653f8-026d-d1de-8be6-6e7e39fca209@debian.org>, <2462c573-0a7a-8a27-b637-aadaba911580@debian.org> Message-ID: An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Nov 3 11:24:13 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 03 Nov 2020 12:24:13 +0100 Subject: [neutron][core team] (Re)adding Oleg Bondarev to the Neutron core team Message-ID: <3552015.USbqv0nN58@p1> Hi, I'm glad to announce that I just fast-tracked back Oleg to the Neutron core team. Oleg was Neutron core in the past and now, since few months he is again one of the most active Neutron reviewers [1]. Welcome back in the core team Oleg :) [1] https://www.stackalytics.com/report/contribution/neutron-group/90 -- Slawek Kaplonski Principal Software Engineer Red Hat From ralonsoh at redhat.com Tue Nov 3 11:32:24 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 3 Nov 2020 12:32:24 +0100 Subject: [neutron][core team] (Re)adding Oleg Bondarev to the Neutron core team In-Reply-To: <3552015.USbqv0nN58@p1> References: <3552015.USbqv0nN58@p1> Message-ID: Welcome back Oleg! On Tue, Nov 3, 2020 at 12:28 PM Slawek Kaplonski wrote: > Hi, > > I'm glad to announce that I just fast-tracked back Oleg to the Neutron > core > team. > Oleg was Neutron core in the past and now, since few months he is again > one of > the most active Neutron reviewers [1]. > Welcome back in the core team Oleg :) > > [1] https://www.stackalytics.com/report/contribution/neutron-group/90 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yumeng_bao at yahoo.com Tue Nov 3 11:36:43 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Tue, 3 Nov 2020 19:36:43 +0800 Subject: [cyborg] PTG Summary and Wallaby Release Schedule References: Message-ID: Hi all, Thank you all for the PTG participation! All our agreements and decisions are marked in the etherpad[1] with AGREED word. 
And I have proposed the Wallaby_Release_Schedule[2] on Cyborg wiki page to track features with milestone. [1]https://etherpad.opendev.org/p/cyborg-wallaby-goals [2]https://wiki.openstack.org/wiki/Cyborg/Wallaby_Release_Schedule Regards, Yumeng From katonalala at gmail.com Tue Nov 3 12:02:19 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 3 Nov 2020 13:02:19 +0100 Subject: [neutron][core team] (Re)adding Oleg Bondarev to the Neutron core team In-Reply-To: <3552015.USbqv0nN58@p1> References: <3552015.USbqv0nN58@p1> Message-ID: Well earned Slawek Kaplonski ezt írta (időpont: 2020. nov. 3., K, 12:25): > Hi, > > I'm glad to announce that I just fast-tracked back Oleg to the Neutron > core > team. > Oleg was Neutron core in the past and now, since few months he is again > one of > the most active Neutron reviewers [1]. > Welcome back in the core team Oleg :) > > [1] https://www.stackalytics.com/report/contribution/neutron-group/90 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Tue Nov 3 12:08:54 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Tue, 03 Nov 2020 13:08:54 +0100 Subject: [placement][nova][cinder][neutron][blazar][tc][zun] Placement governance switch(back) In-Reply-To: <56f987f0-20a5-7163-062e-9e02a0f99057@openstack.org> References: <51776398bc77d6ec24a230ab4c2a5913@sslemail.net> <3e09b1015433411e9752d3543d63d331@inspur.com> <56f987f0-20a5-7163-062e-9e02a0f99057@openstack.org> Message-ID: Hi, Sorry for the top post but I want to give a summary where we are with the Placement governance question. During the PTG both the TC and the Nova teams discussed the situation. We decided if there is no volunteers for the rest of the needed liaisons until the end of the PTG then we move the Placement project under Nova governance. As we hit that deadline I've proposed the governance patch[1]. Cheers, gibi [1] https://review.opendev.org/#/c/760917 On Tue, Oct 13, 2020 at 11:34, Thierry Carrez wrote: > Brin Zhang(张百林) wrote: >> Hi, gibi, I am a contributor for nova and placement, and do many >> feature from Stein release, I would like to take on the role of >> *Release liaison*[1] in Placement to help more people. > > Thanks Brin Zhang for volunteering! > > We still need (at least) one volunteer for security liaison and one > volunteer for TaCT SIG (infra) liaison. > > Anyone else interested in helping? > > -- > Thierry Carrez (ttx) > From skaplons at redhat.com Tue Nov 3 12:25:27 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 03 Nov 2020 13:25:27 +0100 Subject: [neutron] Team meeting Message-ID: <10236937.s4mitKBmyh@p1> Hi, Just a quick reminder. Today at 14:00 UTC we will have neutron team meeting in the #openstack-meeting-3 room @freenode. Meeting's agenda is available in [1]. If You have any topics which You want to discuss, please add them to the "On demand" section. 
See You all at the meeting :) [1] https://wiki.openstack.org/wiki/Network/Meetings -- Slawek Kaplonski Principal Software Engineer Red Hat From bdobreli at redhat.com Tue Nov 3 13:11:26 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Tue, 3 Nov 2020 14:11:26 +0100 Subject: [tripleo][ci] Monday Nov 2 In-Reply-To: References: Message-ID: <0b8e1c52-640d-9370-981d-c6db3814626c@redhat.com> On 10/30/20 6:31 PM, Wesley Hayutin wrote: > Greetings, > > The tripleo ci team has identified a handful of patches that we'd like > to land prior to Nov 2, the day docker.io goes away. >  We've hit some new bugs and have also tuned a few things to try and > make sure we can get patches to merge. > > Our current focus is across master, victoria, ussuri and centos-8 train, > and queens while reducing coverage in rocky and stein. > > A list of the prioritized gerrit reviews can be found here: > https://hackmd.io/MlbZ_izSTEuZsCWTJvu_Kg?view > > The entire topic can be found here: > https://review.opendev.org/#/q/topic:new-ci-job In that list there are patches to puppet modules and openstack services what run a single standalone tripleo CI job. I don't think creating an extra provider job to run a single consumer job sounds reasonable. > > Thanks all. -- Best regards, Bogdan Dobrelya, Irc #bogdando From tpb at dyncloud.net Tue Nov 3 15:10:29 2020 From: tpb at dyncloud.net (Tom Barron) Date: Tue, 3 Nov 2020 10:10:29 -0500 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> References: <16c93e9a-765e-f254-a915-8e92fb6a7dcb@me.com> Message-ID: <20201103151029.dyixpfoput5rgyae@barron.net> On 01/11/20 08:55 +0100, Oliver Weinmann wrote: >Hi, > >I'm still in the process of preparing a OpenStack POC. I'm 100% sure >that I want to use CEPH and so I purchased the book Mastering CEPH 2nd >edition. First of all, It's a really good book. It basically explains >the various methods how ceph can be deployed and also the components >that CEPH is build of. So I played around a lot with ceph-ansible and >rook in my virtualbox environment. I also started to play with tripleo >ceph deployment, although I haven't had the time yet to sucessfully >deploy a openstack cluster with CEPH. Now I'm wondering, which of >these 3 methods should I use? > >rook > >ceph-ansible > >tripleo > >I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of >VMs running under vSphere that currently consume storage from a ZFS >storage via CIFS and NFS. I don't know if rook can be used for this. I >have the feeling that it is purely meant to be used for kubernetes? >And If I would like to have CIFS and NFS maybe tripleo is not capable >of enabling these features in CEPH? So I would be left with >ceph-ansible? > >Any suggestions are very welcome. > >Best Regards, > >Oliver > > If your core use case is OpenStack (OpenStack POC with CEPH) then of the three tools mentioned only tripleo will deploy OpenStack. It can deploy a Ceph cluster for use by OpenStack as well, or you can deploy Ceph externally and it will "point" to it from OpenStack. For an OpenStack POC (and maybe for production too) I'd just have TripleO deploy the Ceph cluster as well. TripleO will use a Ceph deployment tool -- today, ceph-ansible; in the future, cephadm -- to do the actual Ceph cluster deployment but it figures out how to run the Ceph deployment for you. I don't think any of these tools will install a Samba/CIFs gateway to CephFS. It's reportedly on the road map for the new cephadm tool. 
You may be able to manually retrofit it to your Ceph cluster by consulting upstream guidance such as [1]. NFS shares provisioned in OpenStack (Manila file service) *could* be shared out-of-cloud to VSphere VMs if networking is set up to make them accessible but the shares would remain OpenStack managed. To use the same Ceph cluster for OpenStack and vSphere but have separate storage pools behind them, and independent management, you'd need to deploy the Ceph cluster independently of OpenStack and vSphere. TripleO could "point" to this cluster as mentioned previously. I agree with your assessment that rook is intended to provide PVs for Kubernetes consumers. You didn't mention Kubernetes as a use case, but as John Fulton has mentioned it is possible to use rook on Kubernetes in a mode where it "points" to an external ceph cluster rather than provisioning its own as well. Or if you run k8s clusters on OpenStack, they can just consume the OpenStack storage, which will be Ceph storage when that is used to back OpenStack Cinder and Manila. -- Tom Barron [1] https://www.youtube.com/watch?v=5v8L7FhIyOw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=19&t=0s&app=desktop From rosmaita.fossdev at gmail.com Tue Nov 3 15:21:53 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 3 Nov 2020 10:21:53 -0500 Subject: [cinder] reminder: weekly meeting wednesday 4 nov at 1400utc Message-ID: <2ac365c4-3e07-a6f1-0b54-819478782717@gmail.com> After two weeks of not having meetings due to summit/forum/ptg, the cinder team will be holding its weekly meeting at the usual time (1400 UTC) and location (#openstack-meeting-alt) tomorrow (wednesday 4 november). People who had been following daylight saving (or summer) time until recently, keep in mind that although the UTC time of the meeting has not changed, it may be an hour earlier for you in your local time. cheers, brian From whayutin at redhat.com Tue Nov 3 15:30:00 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 3 Nov 2020 08:30:00 -0700 Subject: [tripleo][ci] Monday Nov 2 In-Reply-To: <0b8e1c52-640d-9370-981d-c6db3814626c@redhat.com> References: <0b8e1c52-640d-9370-981d-c6db3814626c@redhat.com> Message-ID: On Tue, Nov 3, 2020 at 6:20 AM Bogdan Dobrelya wrote: > On 10/30/20 6:31 PM, Wesley Hayutin wrote: > > Greetings, > > > > The tripleo ci team has identified a handful of patches that we'd like > > to land prior to Nov 2, the day docker.io goes > away. > > We've hit some new bugs and have also tuned a few things to try and > > make sure we can get patches to merge. > > > > Our current focus is across master, victoria, ussuri and centos-8 train, > > and queens while reducing coverage in rocky and stein. > > > > A list of the prioritized gerrit reviews can be found here: > > https://hackmd.io/MlbZ_izSTEuZsCWTJvu_Kg?view > > > > The entire topic can be found here: > > https://review.opendev.org/#/q/topic:new-ci-job > > In that list there are patches to puppet modules and openstack services > what run a single standalone tripleo CI job. I don't think creating an > extra provider job to run a single consumer job sounds reasonable. > > > > > Thanks all. > So our first pass there I think should be a content-provider. However we could potentially drop the content-provider and override docker.io -> quay.io as well. We are not certain yet how well quay.io will perform so we're being cautious atm. > > > -- > Best regards, > Bogdan Dobrelya, > Irc #bogdando > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From knikolla at bu.edu Tue Nov 3 15:31:12 2020 From: knikolla at bu.edu (Nikolla, Kristi) Date: Tue, 3 Nov 2020 15:31:12 +0000 Subject: [keystone] No keystone team meeting Nov 3 Message-ID: <703CF7AD-7036-4DFD-AACD-9F51F06F4F00@bu.edu> Hi all, There will be no keystone team meeting on IRC today, on November 3rd. Best, Kristi From gagehugo at gmail.com Tue Nov 3 15:33:02 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Tue, 3 Nov 2020 09:33:02 -0600 Subject: [openstack-helm] No OSH meeting today Message-ID: Hello, the meeting for today has been cancelled, we will meet next week. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Tue Nov 3 15:43:12 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 3 Nov 2020 17:43:12 +0200 Subject: [tripleo][ci] Monday Nov 2 In-Reply-To: References: <0b8e1c52-640d-9370-981d-c6db3814626c@redhat.com> Message-ID: On Tue, Nov 3, 2020 at 5:31 PM Wesley Hayutin wrote: > > > On Tue, Nov 3, 2020 at 6:20 AM Bogdan Dobrelya > wrote: > >> On 10/30/20 6:31 PM, Wesley Hayutin wrote: >> > Greetings, >> > >> > The tripleo ci team has identified a handful of patches that we'd like >> > to land prior to Nov 2, the day docker.io goes >> away. >> > We've hit some new bugs and have also tuned a few things to try and >> > make sure we can get patches to merge. >> > >> > Our current focus is across master, victoria, ussuri and centos-8 >> train, >> > and queens while reducing coverage in rocky and stein. >> > >> > A list of the prioritized gerrit reviews can be found here: >> > https://hackmd.io/MlbZ_izSTEuZsCWTJvu_Kg?view >> > >> > The entire topic can be found here: >> > https://review.opendev.org/#/q/topic:new-ci-job >> >> In that list there are patches to puppet modules and openstack services >> what run a single standalone tripleo CI job. I don't think creating an >> extra provider job to run a single consumer job sounds reasonable. >> >> > >> > Thanks all. >> > > So our first pass there I think should be a content-provider. However we > could potentially drop the content-provider and override docker.io -> > quay.io as well. We are not certain yet how well quay.io will perform so > we're being cautious atm. > > or as currently discussing with sshnaidm on irc the job itself can build the containers instead of having a content provider do that 17:38 < sshnaidm|rover> maybe in puppet repos we will just build containers? 17:38 < sshnaidm|rover> it's one standalone job there only, irrc 17:38 < sshnaidm|rover> I think bogdan is right 17:39 < marios> sshnaidm|rover: but is it worth it to special case that? in the end it is still just 'build one set of containers' does it matter if it happens in a content provider or in the job itself? I guess it depends how stable are the cotnent providers and the answer is 'not always' ... :/ 17:40 < sshnaidm|rover> marios, it will remain one job there, not two 17:40 < sshnaidm|rover> and no need to change layouts, just adding one variable 17:40 < sshnaidm|rover> these repos anyway don't expect to run anything else from tripleo 17:41 < sshnaidm|rover> ~10 repos * N branches, will save us a little work.. 17:41 < marios> sshnaidm|rover: ack ... if it is easy enough to have that as special case then OK. and yes having one job instead of 2 (one content provider) brings its own benefits > > > >> >> >> -- >> Best regards, >> Bogdan Dobrelya, >> Irc #bogdando >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Tue Nov 3 16:11:55 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 3 Nov 2020 17:11:55 +0100 Subject: [masakari] Wallaby PTG Summary Message-ID: Hello Everyone! Masakari had 3 2-hour sessions and we managed to discuss a broad range of proposals and assign tasks to interested parties. (And this was my first time chairing a PTG as PTL). Thank you, everyone who participated! Here is a summary of the PTG discussions. The whole discussion has been written down in the etherpad. [1] We acknowledged that the project has lost its momentum during the last cycle (Victoria) and we hope to restore it now. To this end, we are reviving interesting old blueprints and specs. Some of these are already at the implementation stage, ready for reviews. Some of the discussed proposals fall into the 'low code impact, high (positive) usability impact' category, the most prominent being ways to have more flexibility in disabling/enabling the HA scope as well as generally more friendly approach towards user errors (not evacuating 'pinned' instances). On the other hand, some proposals involve deeper architectural changes to Masakari, such as the shift from reliance on Pacemaker/Corosync stack to Consul and/or etcd (possibly also for coordination), as well as making Masakari inspect the host status via both inbound and outbound means (including power management for checks and fencing). To this end, we plan to investigate a possible integration with Ironic as our gate to the baremetal world. Another proposal worth noting is the Masakari 'restore previous state' workflow, meant to help operators move instances back to their original location once the local problem is solved. Yet another proposal is increasing the resilience under the current scope (Pacemaker/Corosync) by retrying host checks (several sampled observations rather than one) and implementing adaptive rules to consider large-scale failures. We also discussed general maintenance tasks. We decided to give some love to the documentation and make sure it's not self-contradictory and also in agreement with the actual code, as well as include general recommendation on how Masakari is supposed to be deployed to have it work optimally. Finally, we discussed a few miscellaneous proposals such as making Masakari a better container world citizen by removing the reliance on systemd. [1] https://etherpad.opendev.org/p/masakari-wallaby-vptg Kind regards, -yoctozepto From dtantsur at redhat.com Tue Nov 3 16:41:13 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 3 Nov 2020 17:41:13 +0100 Subject: [ironic] Configuration mold follow-up In-Reply-To: References: Message-ID: Hi Aija, On Mon, Nov 2, 2020 at 1:55 PM Jaunteva, Aija wrote: > Hi, > > > > to follow up on Ironic PTG session about Configuration molds [1] I’m > scheduling a call to discuss remaining items (mainly storage of the molds). > Honestly, I feel like it's a bit premature to discuss the potential storage of the molds until we get the first version out and tested in the wild. Dmitry > > > Anyone interested please add your availability in Doodle [2]. > > When time slot decided, will share call details. 
> > > > Regards, > > Aija > > > > [1] https://etherpad.opendev.org/p/ironic-wallaby-ptg line 216 > > [2] https://doodle.com/poll/dry4x5tbmhi6x6p3 > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Nov 3 19:25:23 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 03 Nov 2020 13:25:23 -0600 Subject: [qa][policy] Testing strategy discussion for new RBAC (new scope and default role combinations) Message-ID: <1758f922fe5.b2f4514a220341.5965440098876721164@ghanshyammann.com> Hello Everyone, To continue the discussion on how projects should test the new RBAC (scope as well as the new defaults combination), we will host a call on bluejeans on Tuesday 10th Nov 16.00 UTC. Lance has set up the room for that: https://bluejeans.com/749897818 Feel free to join if you are interested in that or would like to help with this effort. -gmann From oliver.weinmann at me.com Tue Nov 3 20:05:01 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Tue, 3 Nov 2020 21:05:01 +0100 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: <20201103151029.dyixpfoput5rgyae@barron.net> References: <20201103151029.dyixpfoput5rgyae@barron.net> Message-ID: <3BF6161C-B8D5-46E1-ACA3-3516BE901D1F@me.com> Hi Tom, Thanks a lot for your input. Highly appreciated. John really convinced me to deploy a Standalone cluster with cephadm and so I started to deploy It. I stumbled upon a few obstacles, but mostly because commands didn’t behave as expected or myself missing some important steps like adding 3 dashes in a yml file. Cephadm looks really promising. I hope that by tomorrow I will have a simple 3 node cluster. I have to look deeper into it as of how to configure separate networks for the cluster and the Frontend but it shouldn’t be too hard. Once the cluster is functioning I will re-deploy my tripleo POC pointing to the standalone Ceph cluster. Currently I have a zfs nfs Storage configured that I would like to keep. I have to figure out how to configure multiple backends in tripleo. Cheers, Oliver Von meinem iPhone gesendet > Am 03.11.2020 um 16:20 schrieb Tom Barron : > > On 01/11/20 08:55 +0100, Oliver Weinmann wrote: >> Hi, >> >> I'm still in the process of preparing a OpenStack POC. I'm 100% sure that I want to use CEPH and so I purchased the book Mastering CEPH 2nd edition. First of all, It's a really good book. It basically explains the various methods how ceph can be deployed and also the components that CEPH is build of. So I played around a lot with ceph-ansible and rook in my virtualbox environment. I also started to play with tripleo ceph deployment, although I haven't had the time yet to sucessfully deploy a openstack cluster with CEPH. Now I'm wondering, which of these 3 methods should I use? >> >> rook >> >> ceph-ansible >> >> tripleo >> >> I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of VMs running under vSphere that currently consume storage from a ZFS storage via CIFS and NFS. I don't know if rook can be used for this. I have the feeling that it is purely meant to be used for kubernetes? And If I would like to have CIFS and NFS maybe tripleo is not capable of enabling these features in CEPH? So I would be left with ceph-ansible? >> >> Any suggestions are very welcome. 
>> >> Best Regards, >> >> Oliver >> >> > > If your core use case is OpenStack (OpenStack POC with CEPH) then of the three tools mentioned only tripleo will deploy OpenStack. It can deploy a Ceph cluster for use by OpenStack as well, or you can deploy Ceph externally and it will "point" to it from OpenStack. For an OpenStack POC (and maybe for production too) I'd just have TripleO deploy the Ceph cluster as well. TripleO will use a Ceph deployment tool -- today, ceph-ansible; in the future, cephadm -- to do the actual Ceph cluster deployment but it figures out how to run the Ceph deployment for you. > > I don't think any of these tools will install a Samba/CIFs gateway to CephFS. It's reportedly on the road map for the new cephadm tool. You may be able to manually retrofit it to your Ceph cluster by consulting upstream guidance such as [1]. > > NFS shares provisioned in OpenStack (Manila file service) *could* be shared out-of-cloud to VSphere VMs if networking is set up to make them accessible but the shares would remain OpenStack managed. To use the same Ceph cluster for OpenStack and vSphere but have separate storage pools behind them, and independent management, you'd need to deploy the Ceph cluster independently of OpenStack and vSphere. TripleO could "point" to this cluster as mentioned previously. > > I agree with your assessment that rook is intended to provide PVs for Kubernetes consumers. You didn't mention Kubernetes as a use case, but as John Fulton has mentioned it is possible to use rook on Kubernetes in a mode where it "points" to an external ceph cluster rather than provisioning its own as well. Or if you run k8s clusters on OpenStack, they can just consume the OpenStack storage, which will be Ceph storage when that is used to back OpenStack Cinder and Manila. > > -- Tom Barron > > [1] https://www.youtube.com/watch?v=5v8L7FhIyOw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=19&t=0s&app=desktop > From dmendiza at redhat.com Tue Nov 3 21:35:44 2020 From: dmendiza at redhat.com (Douglas Mendizabal) Date: Tue, 3 Nov 2020 15:35:44 -0600 Subject: [barbican] Stable Branch Liaison Update Message-ID: <5ecc1230-b3e1-bf23-3a37-2c05f0b50f8d@redhat.com> Hi openstack-discuss, I've updated the Cross Project Liaison Wiki [1] to revert our Stable Branch liaison to myself as current Barbican PTL. I am wondering what the process is to get myself added to the barbican-stable-maint group in gerrit? [2] Unfortunately, our only team member on that team has not been very active the last few months and we have a growing list of patches to the stable branches that we have been unable to merge. Any pointers would be greatly appreciated. Thanks - Douglas Mendizábal [1] https://wiki.openstack.org/wiki/CrossProjectLiaisons#Stable_Branch [2] https://review.opendev.org/#/admin/groups/1097 -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_0xF2294C99E916A3FB.asc Type: application/pgp-keys Size: 36788 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From sean.mcginnis at gmx.com Tue Nov 3 22:03:08 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 3 Nov 2020 16:03:08 -0600 Subject: [barbican] Stable Branch Liaison Update In-Reply-To: <5ecc1230-b3e1-bf23-3a37-2c05f0b50f8d@redhat.com> References: <5ecc1230-b3e1-bf23-3a37-2c05f0b50f8d@redhat.com> Message-ID: > I am wondering what the process is to get myself added to the > barbican-stable-maint group in gerrit? [2] > > Unfortunately, our only team member on that team has not been very > active the last few months and we have a growing list of patches to the > stable branches that we have been unable to merge. > > Any pointers would be greatly appreciated. > > Thanks > - Douglas Mendizábal We usually just want to see that you have been doing stable reviews and show an understanding of the differences with things to look for when reviewing those rather than reviews on master. I took a look at some and they looked fine to me. Obligatory reminder though to read through the stable branch policies, particularly the review guidelines: https://docs.openstack.org/project-team-guide/stable-branches.html#review-guidelines I have added you to the barbican-stable-maint group. Please raise any questions if you have any doubts about any proposed changes. Thanks! Sean From rosmaita.fossdev at gmail.com Wed Nov 4 01:08:47 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 3 Nov 2020 20:08:47 -0500 Subject: [cinder] Wallaby PTG summary available Message-ID: I've posted a summary of the discussions we had last week: https://wiki.openstack.org/wiki/CinderWallabyPTGSummary Feel free to make any corrections or clarifications. Also, look through to remind yourself what actions you volunteered to perform (you can search for your IRC nick or the text ACTION). cheers, brian From satish.txt at gmail.com Wed Nov 4 02:51:10 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 3 Nov 2020 21:51:10 -0500 Subject: Intel 82599 dpdk support with centos8 Message-ID: Folks, I am trying to get dpdk working with my OVS on CentOS-8 and I am stuck here so I need advice from folks who can help me here. I have an HP c7000 chassis with ProLiant BL460c Gen9 blade. I am running CentOS-8 with following version of ovs-dpdk [root at compute-2 usertools]# ovs-vswitchd --version ovs-vswitchd (Open vSwitch) 2.12.0 DPDK 18.11.2 I am using openstack-ansible to set up openvswitch with dpdk, and so far everything went well but I am stuck in the end. [root at compute-2 usertools]# ovs-vsctl add-port br-provider dpdk-0 -- set Interface dpdk-0 type=dpdk options:dpdk-devargs=0000:06:00.0 ovs-vsctl: Error detected while setting up 'dpdk-0': Error attaching device '0000:06:00.0' to DPDK. See ovs-vswitchd log for details. ovs-vsctl: The default log directory is "/var/log/openvswitch". Here is the error message. 
[root at compute-2 usertools]# tail -f /var/log/openvswitch/ovs-vswitchd.log 2020-11-04T02:46:31.751Z|00303|dpdk|INFO|EAL: PCI device 0000:06:00.0 on NUMA socket 0 2020-11-04T02:46:31.751Z|00304|dpdk|INFO|EAL: probe driver: 8086:10f8 net_ixgbe 2020-11-04T02:46:31.751Z|00305|dpdk|ERR|EAL: Driver cannot attach the device (0000:06:00.0) 2020-11-04T02:46:31.751Z|00306|dpdk|ERR|EAL: Failed to attach device on primary process 2020-11-04T02:46:31.751Z|00307|netdev_dpdk|WARN|Error attaching device '0000:06:00.0' to DPDK 2020-11-04T02:46:31.751Z|00308|netdev|WARN|dpdk-0: could not set configuration (Invalid argument) 2020-11-04T02:46:31.751Z|00309|dpdk|ERR|Invalid port_id=32 This is what my NIC firmware/driver [root at compute-2 usertools]# ethtool -i eno50 driver: ixgbe version: 5.1.0-k-rh8.2.0 firmware-version: 0x800008fb, 1.2028.0 expansion-rom-version: bus-info: 0000:06:00.1 [root at compute-2 usertools]# python3 ./dpdk-devbind.py --status Network devices using DPDK-compatible driver ============================================ 0000:06:00.0 '82599 10 Gigabit Dual Port Backplane Connection 10f8' drv=vfio-pci unused=ixgbe Network devices using kernel driver =================================== 0000:06:00.1 '82599 10 Gigabit Dual Port Backplane Connection 10f8' if=eno50 drv=ixgbe unused=vfio-pci I did enabled IOMMU & huge mem in grub iommu=pt intel_iommu=on hugepagesz=2M hugepages=30000 transparent_hugepage=never Not sure what is wrong here and the big question is does this NIC support DPDK or not? From dsneddon at redhat.com Wed Nov 4 06:13:40 2020 From: dsneddon at redhat.com (Dan Sneddon) Date: Tue, 3 Nov 2020 22:13:40 -0800 Subject: [rhos-dev] [Neutron] PTG summary In-Reply-To: <6315331.olFIrCdlcp@p1> References: <6315331.olFIrCdlcp@p1> Message-ID: <9a4115a0-5fec-7342-daf6-82478b65799e@redhat.com> On 11/2/20 1:56 PM, Slawek Kaplonski wrote: > Hi, > > Below is my summary of the Neutron team sessions which we had during the > virtual PTG last week. > Etherpad with notes from the discussions can be found at [1]. > > ## Retrospective of the Victoria cycle >>From the good things during _Victoria_ cycle team pointed: > * Complete 8 blueprints including the _Metadata over IPv6_ [2], > * Improved feature parity in the OVN driver, > * Good review velocity > >>From the not so good things we mentioned: > * CI instability - average number of rechecks needed to merge patch in last > year can be found at [3], > * Too much "Red Hat" in the Neutron team - and percentage of the reviews (and > patches) done by people from Red Hat is constantly increasing over last few > cycles. As a main reason of that we pointed that it is hard to change > companies way of thinking that dedicating developer to the upstream project > means that they lose developer in downstream. > * Migration of our CI and Zuul v3 took us a lot of time and effort - but good > thing is that we accomplished that in the Victoria cycle :) > > During that session we also agreed on some action items for the Wallaby cycle: > * keep talking about OVN details - Miguel Lavalle and me will work on some way > to deliver talks about Neutron OVN backend and OVN internals to the community > during next months. The idea is to maybe propose some talks ever 6 weeks or > so. This may make ovn driver development more diverse and let operators to > thing about migration to the ovn backend. > > ## Review of the existing meetings > We reviewed list of our existing upstream meetings and discussed about ideas > on how to increase number of attendees on the meetings. 
> We decided to: > * drop neutron-qos meeting as it's not needed anymore > * advertise more meetings and meetings' agenda on the OpenStack mailing list - > I will send reminders with links to the agenda before every meeting > * Together with Lajos Katona we will give some introduction to the debugging > of CI issues in Neutron* > > ## Support for old versions > Bernard started discussion about support for the old releases in Neutron and > Neutron stadium projects. > For Neutron we decided to mark __Ocata__ branch as unmaintained already as its > gate is already broken. > For the __Pike__ and never branches we will keep them in the __EM__ phase as > there is still some community interest to keep those branches open. > For the stadium projects we decided to do it similary to what we did while > looking for new maintainers for the projects. We will send email "call for > maintainers" for such stable branches. If there will be no voluneers to step > in, fix gate issues and keep those branches healthy, we will mark them as > __unmaintained__ and later as __End of Life__ (EOL). > Currently broken CI is in projects: > * networking-sfc, > * networking-bgpvpn/bagpipe, > * neutron-fwaas > > And those are candidates to be marked as unmaintained if there will be no > volunteers to fix them. > Bernard Cafarelli volunteered to work on that in next days/weeks. > > > ## Healtcheck API endpoint > We discussed as our healtcheck API should works. During the discussion we > decided that: > * healtcheck result should __NOT__ rely on the agents status, it should rely > on worker's ability to connect to the DB and MQ (rabbitmq) > * Lajos will ask community (API experts) about some guidance how it should > works on the whole OpenStack level, > * As for reference implementation we can check e.g. Octavia [4] and Keystone > [5] which already implemented it. > > ## Squash of the DB migration script > Rodolfo explained us what are benefits of doing such squash of the db migration > scripts from the old versions: > * Deployment is faster: we don't need to create/delete tables or create+update > other ones - the win is small possibly in the magnitude of 5s per job, > * DB definition is centralized in one place, not in original definition plus > further migrations - that is most important reason why we really should do > that, > * UTs faster: removal of some older checks. > > The problem with this may be that we need to do that carefully and be really > verbose about with such changes we may break stadium projects or 3rd party > projects which are doing db migration too. > To minimalize potential breakage, we will announce such changes on the > OpenStack discuss mailing list. > Rodolfo volunteered to take propose squash up to Liberty release in W cycle. > Together with this squash we will also document that process so in next cycles > we should be able to do squashes for newer releases in easier way. > Lajos volunteered to help with fixing Neutron stadium projects if that will be > needed. > > ## Switch to the new engine facade > We were discussing how to move on and finally finish old Blueprint [6]. We > decided that together with Rodolfo we will try how this new engine facade will > work without using transaction guards in the code. Hopefully that will let us > move on with this. If not, we will try to reach out to some DB experts for > some help with this. > > ## Change from rootwrap to the privsep > This is now community goal during the Wallaby cycle so we need to focus on it > to accomplish that transition finally. 
> This transition may speed up and make our code a bit more secure. > Rodolfo explained us multiple possible strategies of migration: > * move to native, e.g. > * replace ps with python psutils, not using rootwrap or privsep > * replace ip commands with pyroute2, under a privsep context (elevated > permissions needed) > * directly translate rootwrap to privsep, executing the same shell command but > under a privsep context > > To move on with this I will create list of the pieces of code which needs to > be transitioned in the Neutron repo and in the stadium projects. > Current todo items can be found on the storyboard [7]. > > ## Migration to the NFtables > During this session we were discussing potential strategies on how to migrate > from the old iptables to the new nftables. We need to start planning that work > as it major Linux distributions (e.g. RHEL) are planning to deprecate iptables > in next releases. > It seems that currently all major distros (Ubuntu, Centos, OpenSuSE) supports > nftables already. > We decided that in Wallaby cycle we will propose new _Manager_ class and we > will add some config option which will allow people to test new solution. > In next cycles we will continue work on it to make it stable and to make > upgrade and migration path for users as easy as possible. > There is already created blueprint to track progress on that topic [8]. > We need to migrate: > * Linuxbridge firewall, iptables OVS hybrid firewall, > * L3 code (legacy router, DVR router, conntrack, port forwarding), > * iptables metering, > * metadata proxy, > * dhcp agent for when it does metadata for isolated networks and namespace > creation, > * neutron-vpnaas - ipsec code, > * and maybe something else what we didn't found yet. > > ## Nova-Neutron cross project session > We had very interesting discussion with Nova team. We were discussing topics > like: > * NUMA affinity in the neutron port > * vhost-vdpa support > * default __vnic_type__/__port flavour__ > > Notes from that discussion are available in the nova's etherpad [9]. > > ## Neutron scalling issues > At this session we were discussing issues mentioned by operators on the Forum > sessions a week before the PTG. There was couple of issues mentioned there: > * problems with retries of the DB operations - we should migrate all our code > to the oslo.db retries mechanism - new blueprint [10] is created to track > progress on that one. > * problems with maintenance of the agents, like e.g. DHCP or L3 agents - many > of those issues are caused by how our agents are designed and to really fix > that we would need very deep and huge changes. But also many of those issues > can be solved by the __ovn__ backend - **and that is strategic direction in > which neutron wants to go in the next cycles**, > * Miguel Lavalle and I volunteered to do some profiling of the agents to see > where we are loosing most of the time - maybe we will be able to find some _low > hanging fruits_ which can be fixed and improve the situation at least a bit, > * Similar problem with neutron-ovs-agent and especially security groups which > are using _remove group id_ as a reference - here we also need some volunteers > who will try to optimize that. > > ## CI (in)stablility > On Thursday we were discussing how to improve our very poor CI. 
Finally we > decided to: > * not recheck patches without giving reason of recheck in the comment - there > should be already reported bug which should be linked in the _recheck_ > comment, or user should open new one and link to it also. IN case if the > problem was e.g. related to infra some simple comment like _infra issue_ will > also be enough there, > * To lower number of existing jobs we will do some changes like: > * move *-neutron-lib-master and *-ovs-master jobs to the experimental and > periodic queues to not run them on every patch, > * I will switch _neutron-tempest-plugin-api_ job to be deployed with uwsgi > so we can drop _neutron-tempest-with-uwsgi_ job, > * Consolidate _neutron-tempest-plugin-scenario-linuxbridge_ and _neutron- > tempest-linuxbridge_ jobs, > * Consolidate _neutron-tempest-plugin-scenario-iptables_hybrid and _neutron- > tempest-iptables_hybrid jobs, > > Later we also discussed about the way how to run or skip tests which can be > only run when some specific feature is available in the cloud (e.g. _Metadata > over IPv6_). After some discussion we decided to add new config option with > list of enabled features. It will be very similar to the existing option > _api_extensions_. Lajos volunteered to work on that. > > As last CI related topic we discussed about testing DVR in our CI. Oleg > Bondarev volunteered to check and try to fix broken _neutron-tempest-plugin- > dvr-multinode-scenario_ job. > > ## Flow based DHCP > > Liu Yulong raised topic about new way of doing fully distributed DHCP service, > instead of using _DHCP agent_ on the nodes - RFE is proposed at [11]. His > proposal of doing Open Flow based DHCP (similar to what e.g. ovn-controller is > doing) is described in [12]. It could be implemented as an L2 agent extension > and enabled by operators in the config when they would need it. > As a next step Liu will now propose spec with details about this solution and > we will continue discussion about it in the spec's review. When retiring the DHCP agent was discussed in Shanghai it was assumed that the flow-based DHCP server would not be compatible with Ironic. Currently the OVN native implementation is not compatible and DHCP agent is required, but OVN is planning on implementing support for native DHCP for Ironic soon (IIUC). Was there any discussion about what it might take to extend the flow-based DHCP server to support direct connection to VLAN/flat networks and the DHCP options required for PXE/iPXE for Ironic? Is that a possibility in the future, or would we need to continue to maintain the DHCP agent even if OVN no longer requires it? > > ## Routed provider networks limited to one host > > As a lost topic on Thursday we briefly talked about old RFE [13]. Miguel > Lavalle told us that his company, Verizon Media, is interested in working on > this RFE in next cycles. This also involves some work on Nova's side which was > started by Sylvain Bauza already. Miguel will sync with Sylvain on that RFE. > > ## L3 feature improvements > > On Friday we were discussing some potential improvements in the L3 area. Lajos > and Bence shown us some features which their company is interested in and on > which they plan to work. 
Those are things like: > * support for Bidirectional Forwarding Detection > * some additional API to set additional router parameters like: > * ECMP max path, > * ECMP hash algorith > * --provider-allocation-pool parameter in the subnets - in some specific cases > it may help to use IPs from such _special_ pool for some infrastructure needs, > more details about that will come in the RFE in future, > For now all those described above improvements are in very early planning > phase but Bence will sync with Liu and Liu will dedicate some time to discuss > progress on them during the __L3 subteam meetings__. I submitted a spec for installing FRRouting (FRR) via TripleO: https://review.opendev.org/#/c/758249/ This could be used for ECMP, as well as for routing traffic to the HAProxy load balancers fronting the control plane, and advertising routes to Neutron IPs on dynamically routed networks (VM IPs and/or floating IPs). The goal is to have a very simple implementation where IP addresses would be added to a default or alternate namespace (depending on the use case) as loopback addresses with a /32 (v4) or /128 (v6) CIDR. In the case of FRR the Zebra daemon receives updates via Netlink when these IP addresses are created locally and redistributes them to BGP peers. In theory this may allow a different BGP daemon such as Bird or perhaps ExaBGP to be easily swapped for FRR. I will look forward to seeing more on the --provider-allocation-pool parameter. > > ## Leveraging routing-on-the-host in Neutron in our next-gen clusters > > As a last topic on Friday we were discussing potential solutions of the _L3 on > the host_ in the Neutron. The idea here is very similar to what e.g. __Calico > plugin__ is doing currently. > More details about potential solutions are described in the etherpad [14]. > During the discussion Dawid Deja from OVH told us that OVH is also using very > similar, downstream only solution. > Conclusion of that discussion was that we may have most of the needed code > already in Neutron and some stadium projects so as a first step people who are > interested in that topic, like Jan Gutter, Miguel and Dawid will work on some > deployment guide for such use case. Is there any public info on the OVH approach available? > > ## Team photo > During the PTG we also made team photos which You can find at [15]. 
> > [1] https://etherpad.opendev.org/p/neutron-wallaby-ptg > [2] https://blueprints.launchpad.net/neutron/+spec/metadata-over-ipv6 > [3] https://ibb.co/12sB9N9 > [4] https://opendev.org/openstack/octavia/src/branch/master/octavia/api/ > healthcheck/healthcheck_plugins.py > [5] https://docs.openstack.org/keystone/victoria/admin/health-check-middleware.html > [6] https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch > [7] https://storyboard.openstack.org/#!/story/2007686 > [8] https://blueprints.launchpad.net/neutron/+spec/nftables-migration > [9] https://etherpad.opendev.org/p/nova-wallaby-ptg > [10] https://blueprints.launchpad.net/neutron/+spec/oslo-db-retries > [11] https://bugs.launchpad.net/neutron/+bug/1900934 > [12] https://github.com/gotostack/shanghai_ptg/blob/master/ > shanghai_neutron_ptg_topics_liuyulong.pdf > [13] https://bugs.launchpad.net/neutron/+bug/1764738 > [14] https://etherpad.opendev.org/p/neutron-routing-on-the-host > [15] http://kaplonski.pl/files/Neutron_virtual_PTG_October_2020.tar.gz > -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter From skaplons at redhat.com Wed Nov 4 08:35:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 04 Nov 2020 09:35:57 +0100 Subject: [rhos-dev] [Neutron] PTG summary In-Reply-To: <9a4115a0-5fec-7342-daf6-82478b65799e@redhat.com> References: <6315331.olFIrCdlcp@p1> <9a4115a0-5fec-7342-daf6-82478b65799e@redhat.com> Message-ID: <1955767.2ncHDK62ns@p1> Hi, Dnia środa, 4 listopada 2020 07:13:40 CET Dan Sneddon pisze: > On 11/2/20 1:56 PM, Slawek Kaplonski wrote: > > Hi, > > > > Below is my summary of the Neutron team sessions which we had during the > > virtual PTG last week. > > Etherpad with notes from the discussions can be found at [1]. > > > > ## Retrospective of the Victoria cycle > > > >>From the good things during _Victoria_ cycle team pointed: > > * Complete 8 blueprints including the _Metadata over IPv6_ [2], > > * Improved feature parity in the OVN driver, > > * Good review velocity > > > >>From the not so good things we mentioned: > > * CI instability - average number of rechecks needed to merge patch in > > last > > year can be found at [3], > > * Too much "Red Hat" in the Neutron team - and percentage of the reviews > > (and patches) done by people from Red Hat is constantly increasing over > > last few cycles. As a main reason of that we pointed that it is hard to > > change companies way of thinking that dedicating developer to the > > upstream project means that they lose developer in downstream. > > * Migration of our CI and Zuul v3 took us a lot of time and effort - but > > good thing is that we accomplished that in the Victoria cycle :) > > > > During that session we also agreed on some action items for the Wallaby > > cycle: * keep talking about OVN details - Miguel Lavalle and me will work > > on some way to deliver talks about Neutron OVN backend and OVN internals > > to the community during next months. The idea is to maybe propose some > > talks ever 6 weeks or so. This may make ovn driver development more > > diverse and let operators to thing about migration to the ovn backend. > > > > ## Review of the existing meetings > > We reviewed list of our existing upstream meetings and discussed about > > ideas on how to increase number of attendees on the meetings. 
> > We decided to: > > * drop neutron-qos meeting as it's not needed anymore > > * advertise more meetings and meetings' agenda on the OpenStack mailing > > list - I will send reminders with links to the agenda before every > > meeting * Together with Lajos Katona we will give some introduction to > > the debugging of CI issues in Neutron* > > > > ## Support for old versions > > Bernard started discussion about support for the old releases in Neutron > > and Neutron stadium projects. > > For Neutron we decided to mark __Ocata__ branch as unmaintained already as > > its gate is already broken. > > For the __Pike__ and never branches we will keep them in the __EM__ phase > > as there is still some community interest to keep those branches open. > > For the stadium projects we decided to do it similary to what we did > > while looking for new maintainers for the projects. We will send email > > "call for maintainers" for such stable branches. If there will be no > > voluneers to step in, fix gate issues and keep those branches healthy, we > > will mark them as __unmaintained__ and later as __End of Life__ (EOL). > > Currently broken CI is in projects: > > * networking-sfc, > > * networking-bgpvpn/bagpipe, > > * neutron-fwaas > > > > And those are candidates to be marked as unmaintained if there will be no > > volunteers to fix them. > > Bernard Cafarelli volunteered to work on that in next days/weeks. > > > > > > ## Healtcheck API endpoint > > We discussed as our healtcheck API should works. During the discussion we > > decided that: > > * healtcheck result should __NOT__ rely on the agents status, it should > > rely on worker's ability to connect to the DB and MQ (rabbitmq) > > * Lajos will ask community (API experts) about some guidance how it should > > works on the whole OpenStack level, > > * As for reference implementation we can check e.g. Octavia [4] and > > Keystone [5] which already implemented it. > > > > ## Squash of the DB migration script > > Rodolfo explained us what are benefits of doing such squash of the db > > migration scripts from the old versions: > > * Deployment is faster: we don't need to create/delete tables or > > create+update other ones - the win is small possibly in the magnitude of > > 5s per job, * DB definition is centralized in one place, not in original > > definition plus further migrations - that is most important reason why we > > really should do that, > > * UTs faster: removal of some older checks. > > > > The problem with this may be that we need to do that carefully and be > > really verbose about with such changes we may break stadium projects or > > 3rd party projects which are doing db migration too. > > To minimalize potential breakage, we will announce such changes on the > > OpenStack discuss mailing list. > > Rodolfo volunteered to take propose squash up to Liberty release in W > > cycle. Together with this squash we will also document that process so in > > next cycles we should be able to do squashes for newer releases in easier > > way. Lajos volunteered to help with fixing Neutron stadium projects if > > that will be needed. > > > > ## Switch to the new engine facade > > We were discussing how to move on and finally finish old Blueprint [6]. We > > decided that together with Rodolfo we will try how this new engine facade > > will work without using transaction guards in the code. Hopefully that > > will let us move on with this. If not, we will try to reach out to some > > DB experts for some help with this. 
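For context on the section above: the "new engine facade" is oslo.db's enginefacade API, where DB access declares itself as a reader or a writer and the session is attached to the request context, rather than being passed around inside explicit transaction guards. A rough, hedged sketch follows; the Port model and the assumption that the service's RequestContext is enginefacade-enabled (decorated with enginefacade.transaction_context_provider) are inventions of the example, not Neutron code.

from oslo_db.sqlalchemy import enginefacade
from sqlalchemy import Column, Integer, String
from sqlalchemy.ext.declarative import declarative_base

Base = declarative_base()

class Port(Base):
    # Stand-in model for the example; any declarative model works the same.
    __tablename__ = 'example_ports'
    id = Column(Integer, primary_key=True)
    name = Column(String(64))

@enginefacade.reader
def get_port(context, port_id):
    # The decorator opens a read-only transaction and attaches the session
    # to the (enginefacade-enabled) request context for this call.
    return context.session.query(Port).filter_by(id=port_id).one()

@enginefacade.writer
def update_port_name(context, port_id, name):
    # Writer variant: same pattern, with a writable transaction that is
    # committed automatically when the function returns.
    context.session.query(Port).filter_by(id=port_id).update({'name': name})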
> > > > ## Change from rootwrap to the privsep > > This is now community goal during the Wallaby cycle so we need to focus on > > it to accomplish that transition finally. > > This transition may speed up and make our code a bit more secure. > > Rodolfo explained us multiple possible strategies of migration: > > * move to native, e.g. > > > > * replace ps with python psutils, not using rootwrap or privsep > > * replace ip commands with pyroute2, under a privsep context (elevated > > > > permissions needed) > > * directly translate rootwrap to privsep, executing the same shell command > > but under a privsep context > > > > To move on with this I will create list of the pieces of code which needs > > to be transitioned in the Neutron repo and in the stadium projects. > > Current todo items can be found on the storyboard [7]. > > > > ## Migration to the NFtables > > During this session we were discussing potential strategies on how to > > migrate from the old iptables to the new nftables. We need to start > > planning that work as it major Linux distributions (e.g. RHEL) are > > planning to deprecate iptables in next releases. > > It seems that currently all major distros (Ubuntu, Centos, OpenSuSE) > > supports nftables already. > > We decided that in Wallaby cycle we will propose new _Manager_ class and > > we > > will add some config option which will allow people to test new solution. > > In next cycles we will continue work on it to make it stable and to make > > upgrade and migration path for users as easy as possible. > > There is already created blueprint to track progress on that topic [8]. > > We need to migrate: > > * Linuxbridge firewall, iptables OVS hybrid firewall, > > * L3 code (legacy router, DVR router, conntrack, port forwarding), > > * iptables metering, > > * metadata proxy, > > * dhcp agent for when it does metadata for isolated networks and namespace > > creation, > > * neutron-vpnaas - ipsec code, > > * and maybe something else what we didn't found yet. > > > > ## Nova-Neutron cross project session > > We had very interesting discussion with Nova team. We were discussing > > topics like: > > * NUMA affinity in the neutron port > > * vhost-vdpa support > > * default __vnic_type__/__port flavour__ > > > > Notes from that discussion are available in the nova's etherpad [9]. > > > > ## Neutron scalling issues > > At this session we were discussing issues mentioned by operators on the > > Forum sessions a week before the PTG. There was couple of issues > > mentioned there: * problems with retries of the DB operations - we should > > migrate all our code to the oslo.db retries mechanism - new blueprint > > [10] is created to track progress on that one. > > * problems with maintenance of the agents, like e.g. DHCP or L3 agents - > > many of those issues are caused by how our agents are designed and to > > really fix that we would need very deep and huge changes. But also many > > of those issues can be solved by the __ovn__ backend - **and that is > > strategic direction in which neutron wants to go in the next cycles**, > > * Miguel Lavalle and I volunteered to do some profiling of the agents to > > see where we are loosing most of the time - maybe we will be able to find > > some _low hanging fruits_ which can be fixed and improve the situation at > > least a bit, * Similar problem with neutron-ovs-agent and especially > > security groups which are using _remove group id_ as a reference - here > > we also need some volunteers who will try to optimize that. 
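On the DB-retry point above, the oslo.db mechanism referred to is essentially a decorator, so migrating code to it is mostly a matter of wrapping the DB-facing functions instead of hand-rolling retry loops. A minimal sketch, where the function name and the retry settings are illustrative only:

from oslo_db import api as oslo_db_api

@oslo_db_api.wrap_db_retry(max_retries=5,
                           retry_on_deadlock=True,
                           retry_interval=0.5,
                           inc_retry_interval=True)
def create_port_binding(context, port_id, host):
    # Retriable DB errors (e.g. deadlocks) raised in here are retried with
    # an increasing interval before the exception is finally propagated.
    ...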
> > > > ## CI (in)stablility > > On Thursday we were discussing how to improve our very poor CI. Finally we > > decided to: > > * not recheck patches without giving reason of recheck in the comment - > > there should be already reported bug which should be linked in the > > _recheck_ comment, or user should open new one and link to it also. IN > > case if the problem was e.g. related to infra some simple comment like > > _infra issue_ will also be enough there, > > > > * To lower number of existing jobs we will do some changes like: > > * move *-neutron-lib-master and *-ovs-master jobs to the experimental > > and > > > > periodic queues to not run them on every patch, > > > > * I will switch _neutron-tempest-plugin-api_ job to be deployed with > > uwsgi > > > > so we can drop _neutron-tempest-with-uwsgi_ job, > > > > * Consolidate _neutron-tempest-plugin-scenario-linuxbridge_ and > > _neutron- > > > > tempest-linuxbridge_ jobs, > > > > * Consolidate _neutron-tempest-plugin-scenario-iptables_hybrid and > > _neutron-> > > tempest-iptables_hybrid jobs, > > > > Later we also discussed about the way how to run or skip tests which can > > be > > only run when some specific feature is available in the cloud (e.g. > > _Metadata over IPv6_). After some discussion we decided to add new config > > option with list of enabled features. It will be very similar to the > > existing option _api_extensions_. Lajos volunteered to work on that. > > > > As last CI related topic we discussed about testing DVR in our CI. Oleg > > Bondarev volunteered to check and try to fix broken > > _neutron-tempest-plugin- dvr-multinode-scenario_ job. > > > > ## Flow based DHCP > > > > Liu Yulong raised topic about new way of doing fully distributed DHCP > > service, instead of using _DHCP agent_ on the nodes - RFE is proposed at > > [11]. His proposal of doing Open Flow based DHCP (similar to what e.g. > > ovn-controller is doing) is described in [12]. It could be implemented as > > an L2 agent extension and enabled by operators in the config when they > > would need it. > > As a next step Liu will now propose spec with details about this solution > > and we will continue discussion about it in the spec's review. > > When retiring the DHCP agent was discussed in Shanghai it was assumed > that the flow-based DHCP server would not be compatible with Ironic. > Currently the OVN native implementation is not compatible and DHCP agent > is required, but OVN is planning on implementing support for native DHCP > for Ironic soon (IIUC). > > Was there any discussion about what it might take to extend the > flow-based DHCP server to support direct connection to VLAN/flat > networks and the DHCP options required for PXE/iPXE for Ironic? Is that > a possibility in the future, or would we need to continue to maintain > the DHCP agent even if OVN no longer requires it? For now we didn't discuss that. And we also don't have plans to drop support for DHCP agent. This new proposal is for sure not good for every usecase and will be (at least for now) proposed as an alternative solution which can be used if features like e.g. dns name resolutions are not needed. In the future we can of course thing about extending this new solution to something more complete. > > > ## Routed provider networks limited to one host > > > > As a lost topic on Thursday we briefly talked about old RFE [13]. Miguel > > Lavalle told us that his company, Verizon Media, is interested in working > > on this RFE in next cycles. 
This also involves some work on Nova's side > > which was started by Sylvain Bauza already. Miguel will sync with Sylvain > > on that RFE. > > > > ## L3 feature improvements > > > > On Friday we were discussing some potential improvements in the L3 area. > > Lajos and Bence shown us some features which their company is interested > > in and on which they plan to work. Those are things like: > > * support for Bidirectional Forwarding Detection > > > > * some additional API to set additional router parameters like: > > * ECMP max path, > > * ECMP hash algorith > > > > * --provider-allocation-pool parameter in the subnets - in some specific > > cases it may help to use IPs from such _special_ pool for some > > infrastructure needs, more details about that will come in the RFE in > > future, > > For now all those described above improvements are in very early planning > > phase but Bence will sync with Liu and Liu will dedicate some time to > > discuss progress on them during the __L3 subteam meetings__. > > I submitted a spec for installing FRRouting (FRR) via TripleO: > > https://review.opendev.org/#/c/758249/ > > This could be used for ECMP, as well as for routing traffic to the > HAProxy load balancers fronting the control plane, and advertising > routes to Neutron IPs on dynamically routed networks (VM IPs and/or > floating IPs). > > The goal is to have a very simple implementation where IP addresses > would be added to a default or alternate namespace (depending on the use > case) as loopback addresses with a /32 (v4) or /128 (v6) CIDR. In the > case of FRR the Zebra daemon receives updates via Netlink when these IP > addresses are created locally and redistributes them to BGP peers. In > theory this may allow a different BGP daemon such as Bird or perhaps > ExaBGP to be easily swapped for FRR. Thx for the info on that. I sent it to Miguel Lavalle and Jan Gutter who were mostly interested in that work in upstream. > > I will look forward to seeing more on the --provider-allocation-pool > parameter. > > > ## Leveraging routing-on-the-host in Neutron in our next-gen clusters > > > > As a last topic on Friday we were discussing potential solutions of the > > _L3 on the host_ in the Neutron. The idea here is very similar to what > > e.g. __Calico plugin__ is doing currently. > > More details about potential solutions are described in the etherpad [14]. > > During the discussion Dawid Deja from OVH told us that OVH is also using > > very similar, downstream only solution. > > Conclusion of that discussion was that we may have most of the needed code > > already in Neutron and some stadium projects so as a first step people who > > are interested in that topic, like Jan Gutter, Miguel and Dawid will work > > on some deployment guide for such use case. > > Is there any public info on the OVH approach available? I don't think so but I can try to explain it to You more or less if You want as I was original co-author of that solution many years ago. Please ping me off the list if You are interested :) > > > ## Team photo > > During the PTG we also made team photos which You can find at [15]. 
> > > > [1] https://etherpad.opendev.org/p/neutron-wallaby-ptg > > [2] https://blueprints.launchpad.net/neutron/+spec/metadata-over-ipv6 > > [3] https://ibb.co/12sB9N9 > > [4] https://opendev.org/openstack/octavia/src/branch/master/octavia/api/ > > healthcheck/healthcheck_plugins.py > > [5] > > https://docs.openstack.org/keystone/victoria/admin/health-check-middlewar > > e.html [6] > > https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch [7] > > https://storyboard.openstack.org/#!/story/2007686 > > [8] https://blueprints.launchpad.net/neutron/+spec/nftables-migration > > [9] https://etherpad.opendev.org/p/nova-wallaby-ptg > > [10] https://blueprints.launchpad.net/neutron/+spec/oslo-db-retries > > [11] https://bugs.launchpad.net/neutron/+bug/1900934 > > [12] https://github.com/gotostack/shanghai_ptg/blob/master/ > > shanghai_neutron_ptg_topics_liuyulong.pdf > > [13] https://bugs.launchpad.net/neutron/+bug/1764738 > > [14] https://etherpad.opendev.org/p/neutron-routing-on-the-host > > [15] http://kaplonski.pl/files/Neutron_virtual_PTG_October_2020.tar.gz > > -- > Dan Sneddon | Senior Principal Software Engineer > dsneddon at redhat.com | redhat.com/cloud > dsneddon:irc | @dxs:twitter -- Slawek Kaplonski Principal Software Engineer Red Hat From skaplons at redhat.com Wed Nov 4 08:46:42 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 04 Nov 2020 09:46:42 +0100 Subject: [Neutron] PTG summary In-Reply-To: <16c52015-e038-317b-27b7-4831987efa1b@debian.org> References: <6315331.olFIrCdlcp@p1> <16c52015-e038-317b-27b7-4831987efa1b@debian.org> Message-ID: <1946063.3LDORFbjrG@p1> Hi, Dnia poniedziałek, 2 listopada 2020 23:59:58 CET Thomas Goirand pisze: > Hi Slawek, > > Thanks a lot for the summary, that's very useful. > > On 11/2/20 10:56 PM, Slawek Kaplonski wrote: > > * replace ip commands with pyroute2, under a privsep context (elevated > > > > permissions needed) > > Please, please, please, do this, and give it some high priority. > Spawning thousands of times the ip command simply doesn't scale. Yes, we know that :) And it's one of our priorities in this cycle. > > > ## Migration to the NFtables > > During this session we were discussing potential strategies on how to > > migrate from the old iptables to the new nftables. We need to start > > planning that work as it major Linux distributions (e.g. RHEL) are > > planning to deprecate iptables in next releases. > > Did you know that Debian uses nftables by default since Buster, and that > one must set iptables-legacy as alternative, otherwise Neutron becomes > mad and fails applying firewall rules? Yes, that work already has been started - see https://review.opendev.org/#/c/ 759874/ But it's a lot of work to do so it may not be very fast and help is welcome in that area :) > > I'm not sure about Bullseye, but maybe there, iptables-legacy will even > be gone?!? > > > ## Leveraging routing-on-the-host in Neutron in our next-gen clusters > > > > As a last topic on Friday we were discussing potential solutions of the > > _L3 on the host_ in the Neutron. The idea here is very similar to what > > e.g. __Calico plugin__ is doing currently. > > More details about potential solutions are described in the etherpad [14]. > > During the discussion Dawid Deja from OVH told us that OVH is also using > > very similar, downstream only solution. 
> > Conclusion of that discussion was that we may have most of the needed code > > already in Neutron and some stadium projects so as a first step people who > > are interested in that topic, like Jan Gutter, Miguel and Dawid will work > > on some deployment guide for such use case. > > It'd be great if people were sharing code for this. I've seen at least 3 > or 4 companies doing it, none sharing any bits... :/ Yes, I think that OVH may consider that. And also there should be now some collaboration betweem Jan, Miguel and maybe others on that topic. > > How well is the Calico plugin working for this? Do we know? Has anyone > tried it in production? Does it scale well? > > Cheers, > > Thomas Goirand (zigo) -- Slawek Kaplonski Principal Software Engineer Red Hat From Aija.Jaunteva at dell.com Wed Nov 4 09:35:19 2020 From: Aija.Jaunteva at dell.com (Jaunteva, Aija) Date: Wed, 4 Nov 2020 09:35:19 +0000 Subject: [ironic] Configuration mold follow-up In-Reply-To: References: Message-ID: Hi Dmitry, I don't see this to be the case. They need to be stored somewhere at the very beginning. This has been proposed to be a Wallaby priority and this meeting is necessary to move forward. Let's discuss this and other questions in the meeting. Regards, Aija From: Dmitry Tantsur Sent: Tuesday, November 3, 2020 18:41 To: openstack-discuss at lists.openstack.org Cc: Jaunteva, Aija Subject: Re: [ironic] Configuration mold follow-up [EXTERNAL EMAIL] Hi Aija, On Mon, Nov 2, 2020 at 1:55 PM Jaunteva, Aija > wrote: Hi, to follow up on Ironic PTG session about Configuration molds [1] I’m scheduling a call to discuss remaining items (mainly storage of the molds). Honestly, I feel like it's a bit premature to discuss the potential storage of the molds until we get the first version out and tested in the wild. Dmitry Anyone interested please add your availability in Doodle [2]. When time slot decided, will share call details. Regards, Aija [1] https://etherpad.opendev.org/p/ironic-wallaby-ptg line 216 [2] https://doodle.com/poll/dry4x5tbmhi6x6p3 -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Wed Nov 4 10:11:44 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Wed, 4 Nov 2020 11:11:44 +0100 Subject: [ironic] Configuration mold follow-up In-Reply-To: References: Message-ID: Hi, On Wed, Nov 4, 2020 at 10:35 AM Jaunteva, Aija wrote: > Hi Dmitry, > > > > I don't see this to be the case. They need to be stored somewhere at the > very beginning. > Just pass the URLs in and out, no need to get fancy at this point. > This has been proposed to be a Wallaby priority and this meeting is > necessary to move forward. > > > > Let's discuss this and other questions in the meeting. > I'm not sure I'll have energy for yet another meeting, so please record my objection to adding an artefact storage API to ironic. 
Dmitry > > > Regards, > > Aija > > > > *From:* Dmitry Tantsur > *Sent:* Tuesday, November 3, 2020 18:41 > *To:* openstack-discuss at lists.openstack.org > *Cc:* Jaunteva, Aija > *Subject:* Re: [ironic] Configuration mold follow-up > > > > [EXTERNAL EMAIL] > > Hi Aija, > > > > On Mon, Nov 2, 2020 at 1:55 PM Jaunteva, Aija > wrote: > > Hi, > > > > to follow up on Ironic PTG session about Configuration molds [1] I’m > scheduling a call to discuss remaining items (mainly storage of the molds). > > > > Honestly, I feel like it's a bit premature to discuss the potential > storage of the molds until we get the first version out and tested in the > wild. > > > > Dmitry > > > > > > Anyone interested please add your availability in Doodle [2]. > > When time slot decided, will share call details. > > > > Regards, > > Aija > > > > [1] https://etherpad.opendev.org/p/ironic-wallaby-ptg line 216 > > [2] https://doodle.com/poll/dry4x5tbmhi6x6p3 > > > > -- > > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael > O'Neill > -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From themasch at gmx.net Wed Nov 4 10:36:09 2020 From: themasch at gmx.net (MaSch) Date: Wed, 4 Nov 2020 11:36:09 +0100 Subject: multiple nfs shares with cinder-backup Message-ID: Hello all! I'm currently using openstack queens with cinder 12.0.10. I would like to a backend I'm using a NFS-share. Now i would like to spit my backups up to two nfs-shares. I have seen that the cinder.volume.driver can handle multiple nfs shares. But it seems that cinder.backup.driver can not. Is there a way to use two nfs shares for backups? Or is it maybe possible with a later release of Cinder? regards, MaSch From thierry at openstack.org Wed Nov 4 11:31:42 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 4 Nov 2020 12:31:42 +0100 Subject: [largescale-sig] Future meeting time Message-ID: <580916f8-dd2e-fbca-56e4-b3ae0dfca85d@openstack.org> Hi everyone, During the PTG we discussed our meeting time. Currently we are rotating every two weeks between a APAC-EU friendly time (8utc) and a EU-US friendly time (16utc). However, rotating the meeting between two timezones resulted in fragmenting the already-small group, with only Europeans being regular attendees. After discussing this at the PTG with Gene Kuo, we think it's easier to have a single meeting time, if we can find one that works for all active members. Please enter your preferences into this date poll for Wednesday times: https://framadate.org/ue2eOBTT3QXoFhKg Feel free to add comments pointing to better times for you, like other days that would work better for you than Wednesdays for example. -- Thierry Carrez (ttx) From smooney at redhat.com Wed Nov 4 13:10:59 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 04 Nov 2020 13:10:59 +0000 Subject: [nova] Why nova needs password-less SSH to do live migraiton? 
In-Reply-To: 
References: <9fb653f8-026d-d1de-8be6-6e7e39fca209@debian.org>
	 , <2462c573-0a7a-8a27-b637-aadaba911580@debian.org>
Message-ID: <8613e4169fce1123acf4faf5b687d696fa868505.camel@redhat.com>

On Tue, 2020-11-03 at 11:15 +0000, Zhi CZ Chang wrote:
> Alright, do you mean that the libvirtd daemon is started by the nova
> user? And the nova user has the same privilege as the root user?

Nova needs SSH on live migration to do a few things.

First is to test whether the storage is shared: nova creates a temp dir on
the source node, then SSHs to the dest node and checks if it is visible.
This is needed to determine whether you mounted the instance state dir on
NFS, for example.

The second reason is to copy some files that won't be copied by libvirt,
like vTPM data; in the past I think it also copied the config drive or
console log.

The third and most important use case is establishing the connection over
which the qemu data is transferred. Before libvirt/qemu supported native
TLS encryption of the transferred data, SSH was the primary way to transfer
the VM data in an encrypted form: the SSH tunnel was used to pipe the data
from one qemu to another instead of using plain-text TCP.

In all 3 of these cases you only need to use the nova user, not root. The
nova user needs to be part of the libvirt/qemu/kvm group (depending on what
OS you are on) to manage VMs, but that also provides it with the required
permissions to live migrate the VM and update the instance state dir. Root
should not be needed, and the nova user does not need full root permissions
for live migration.

>   
> Thanks
> Zhi Chang
>   
> > ----- Original message -----
> > From: Thomas Goirand 
> > To: "openstack-discuss at lists.openstack.org" 
> > Cc:
> > Subject: [EXTERNAL] Re: [nova] Why nova needs password-less SSH to do live migraiton?
> > Date: Tue, Nov 3, 2020 18:27
> >   
> > On 11/3/20 9:18 AM, Zhi CZ Chang wrote:
> > > Hi, Thomas
> > >   
> > > Thanks for your reply.
> > >   
> > > In your environment, you use the "root" user for authenticating with
> > > each other compute node, rather than the "nova" user, right?
> > > If so, why use the "root" user rather than the "nova" user then
> > > privilege the root permission to the "nova" user?
> > >   
> > > Thanks
> > > Zhi Chang
> > 
> > Hi,
> > 
> > No, the username is "nova", not "root".
> > 
> > Thomas Goirand (zigo)
> > 
> > P.S: Please don't CC me, I'm registered to the list.
> >   
>   
> 

From ykarel at redhat.com Wed Nov 4 13:19:18 2020
From: ykarel at redhat.com (Yatin Karel)
Date: Wed, 4 Nov 2020 18:49:18 +0530
Subject: [tripleo][ci] Monday Nov 2
In-Reply-To: 
References: <0b8e1c52-640d-9370-981d-c6db3814626c@redhat.com>
Message-ID: 

Hi,

On Tue, Nov 3, 2020 at 9:19 PM Marios Andreou wrote:
>
>
>
> On Tue, Nov 3, 2020 at 5:31 PM Wesley Hayutin wrote:
>>
>>
>>
>> On Tue, Nov 3, 2020 at 6:20 AM Bogdan Dobrelya wrote:
>>>
>>> On 10/30/20 6:31 PM, Wesley Hayutin wrote:
>>> > Greetings,
>>> >
>>> > The tripleo ci team has identified a handful of patches that we'd like
>>> > to land prior to Nov 2, the day docker.io goes away.
>>> > We've hit some new bugs and have also tuned a few things to try and
>>> > make sure we can get patches to merge.
>>> >
>>> > Our current focus is across master, victoria, ussuri and centos-8 train,
>>> > and queens while reducing coverage in rocky and stein.
>>> > >>> > A list of the prioritized gerrit reviews can be found here: >>> > https://hackmd.io/MlbZ_izSTEuZsCWTJvu_Kg?view >>> > >>> > The entire topic can be found here: >>> > https://review.opendev.org/#/q/topic:new-ci-job >>> >>> In that list there are patches to puppet modules and openstack services >>> what run a single standalone tripleo CI job. I don't think creating an >>> extra provider job to run a single consumer job sounds reasonable. >>> >>> > >>> > Thanks all. >> >> >> So our first pass there I think should be a content-provider. However we could potentially drop the content-provider and override docker.io -> quay.io as well. We are not certain yet how well quay.io will perform so we're being cautious atm. >> > > or as currently discussing with sshnaidm on irc the job itself can build the containers instead of having a content provider do that > > > 17:38 < sshnaidm|rover> maybe in puppet repos we will just build containers? > 17:38 < sshnaidm|rover> it's one standalone job there only, irrc > 17:38 < sshnaidm|rover> I think bogdan is right > 17:39 < marios> sshnaidm|rover: but is it worth it to special case that? in the end it is still just 'build one > set of containers' does it matter if it happens in a content provider or in the job itself? I > guess it depends how stable are the cotnent providers and the answer is 'not always' ... :/ > 17:40 < sshnaidm|rover> marios, it will remain one job there, not two > 17:40 < sshnaidm|rover> and no need to change layouts, just adding one variable > 17:40 < sshnaidm|rover> these repos anyway don't expect to run anything else from tripleo > 17:41 < sshnaidm|rover> ~10 repos * N branches, will save us a little work.. > 17:41 < marios> sshnaidm|rover: ack ... if it is easy enough to have that as special case then OK. and yes > having one job instead of 2 (one content provider) brings its own benefits > I raised it in https://review.opendev.org/#/c/760420 couple of days ago, just a note for standalone it should work fine but for other job like undercloud ones which also runs in some projects would need to add support for container-builds via build_container_images: true for those non provider jobs. >> >> >> >>> >>> >>> >>> -- >>> Best regards, >>> Bogdan Dobrelya, >>> Irc #bogdando >>> >>> Thanks and Regards Yatin Karel From hberaud at redhat.com Wed Nov 4 13:31:44 2020 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 4 Nov 2020 14:31:44 +0100 Subject: [oslo][release] Oslo's Transition To Independent Message-ID: Greetings, Is it time for us to move some parts of Oslo to the independent release model [1]? I think we could consider Oslo world is mostly stable enough to answer yes to the previous question. However, the goal of this email is to trigger the debat and see which deliverables could be transitioned to the independent model. Do we need to expect that major changes will happen within Oslo and who could warrant to continue to follow cycle-with-intermediary model [2]? The following etherpad [3] is designed to help us to consider which deliverables could be moved. Concerning the roots of this topic, it was raised during the PTG of the Release Management Team [4]. [1] https://releases.openstack.org/reference/release_models.html#independent [2] https://releases.openstack.org/reference/release_models.html#cycle-with-intermediary [3] https://etherpad.opendev.org/p/oslo-transition-to-independent [4] https://etherpad.opendev.org/p/wallaby-ptg-os-relmgt /me takes off Oslo team hat and puts on release team hat. 
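To give a feel for the mechanics (and not as a proposal for any particular library): moving a deliverable to the independent model is mainly a change in the openstack/releases repository, where its deliverable file moves under deliverables/_independent/ and its release-model field changes. Roughly, and assuming the current layout of that repository, with the version and hash below being pure placeholders:

# deliverables/_independent/oslo.i18n.yaml -- illustrative sketch only
launchpad: oslo.i18n
release-model: independent
team: oslo
type: library
repository-settings:
  openstack/oslo.i18n: {}
releases:
  - version: 5.0.1
    projects:
      - repo: openstack/oslo.i18n
        hash: 0000000000000000000000000000000000000000

The main visible effect is that new versions can then be released at any time rather than being tracked against a particular development series.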
This kind of transition could be an option for all stable libraries even outside the Oslo scope, don't hesitate to consider this as an option for your projects. Thanks for reading, -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From deepa.kr at fingent.com Wed Nov 4 13:33:19 2020 From: deepa.kr at fingent.com (Deepa KR) Date: Wed, 4 Nov 2020 19:03:19 +0530 Subject: Random 503 error while running using api/command line Message-ID: Hi All Good Day We have a Train Openstack setup using juju maas configured in HA with 3 controllers. When using the openstack api getting a timeout error randomly. Also sometimes getting "Unknown Error (HTTP 503)" or Remote end closed connection without response while using the command line . openstack server list --all-projects *Unknown Error (HTTP 503)* Kindly suggest to look in the right direction. Regards, Deepa K R -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnaser at vexxhost.com Wed Nov 4 14:04:31 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Wed, 4 Nov 2020 09:04:31 -0500 Subject: [tc] weeklu update Message-ID: Hi everyone, Here's an update for what happened in the OpenStack TC this week. You can get more information by checking for changes in openstack/governance repository. 
# Patches ## Open Reviews - Adding reno and warnings example for JSON->YAML goal doc https://review.opendev.org/760956 - Select JSON to YAML goal for Wallaby cycle https://review.opendev.org/760957 - Add goal for migrate RBAC policy format from JSON to YAML https://review.opendev.org/759881 - Add diablo_rojo liaison preferences https://review.opendev.org/759718 - Add danms liaison preferences https://review.opendev.org/758822 - Add ricolin liaison preferences https://review.opendev.org/759704 - Add gmann liaison preference https://review.opendev.org/758602 - Resetting projects' TC liaisons empty https://review.opendev.org/758600 - Move Placement under Nova's governance https://review.opendev.org/760917 - Clarify the requirements for supports-api-interoperability https://review.opendev.org/760562 - Add Resolution of TC stance on the OpenStackClient https://review.opendev.org/759904 ## General Changes - Implement distributed leadership in tools and schema https://review.opendev.org/757966 - Adding missing IRC nick https://review.opendev.org/759724 - Update dansmith IRC nick https://review.opendev.org/760561 - Update projects with Telemetry PTL election result https://review.opendev.org/760206 - Appoint XueFeng Liu as Senlin PTL https://review.opendev.org/758122 - Begin the 'X' release naming process https://review.opendev.org/759888 - Appoint Adam Harwell as Octavia PTL https://review.opendev.org/758121 - Add nomination for TC chair https://review.opendev.org/758181 - Appoint Rafael Weingartner as CloudKitty PTL https://review.opendev.org/758120 - Appoint Frode Nordahl as OpenStack Charms PTL https://review.opendev.org/758119 - Select Privsep as the Wallaby Goal https://review.opendev.org/755590 ## Project Updates - Adopting the DPL governance model for oslo https://review.opendev.org/757906 Thanks for reading! Mohammed -- Mohammed Naser VEXXHOST, Inc. From openstack at nemebean.com Wed Nov 4 14:49:57 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 4 Nov 2020 08:49:57 -0600 Subject: [oslo][release] Oslo's Transition To Independent In-Reply-To: References: Message-ID: <4a9dc3ba-ef77-fdfd-2729-b7cb6357e3ed@nemebean.com> On 11/4/20 7:31 AM, Herve Beraud wrote: > Greetings, > > Is it time for us to move some parts of Oslo to the independent release > model [1]? > > I think we could consider Oslo world is mostly stable enough to answer > yes to the previous question. > > However, the goal of this email is to trigger the debat and see which > deliverables could be transitioned to the independent model. > > Do we need to expect that major changes will happen within Oslo and who > could warrant to continue to follow cycle-with-intermediary model [2]? I would hesitate to try to predict the future on what will see major changes and what won't. I would prefer to look at this more from the perspective of what Oslo libraries are tied to the OpenStack version. For example, I don't think oslo.messaging should be moved to independent. It's important that RPC has a sane version to version upgrade path, and the easiest way to ensure that is to keep it on the regular cycle release schedule. The same goes for other libraries too: oslo.versionedobjects, oslo.policy, oslo.service, oslo.db, possibly also things like oslo.config and oslo.context (I suspect contexts need to be release-specific, but maybe someone from a consuming project can weigh in). Even oslo.serialization could have upgrade impacts if it is being used to serialize internal data in a service. 
That said, many of the others can probably be moved. oslo.i18n and oslo.upgradecheck are both pretty stable at this point and not really tied to a specific release. As long as we're responsible with any future changes to them it should be pretty safe to make them independent. This does raise a question though: Are the benefits of going independent with the release model sufficient to justify splitting the release models of the Oslo projects? I assume the motivation is to avoid having to do as many backports of bug fixes, but if we're mostly doing this with low-volume, stable projects does it gain us that much? I guess I don't have a strong opinion one way or another on this yet, and would defer to our release liaisons if they want to go one way or other other. Hopefully this provides some things to think about though. -Ben From rosmaita.fossdev at gmail.com Wed Nov 4 15:48:49 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 4 Nov 2020 10:48:49 -0500 Subject: [cinder] Block Storage API v2 removal Message-ID: <8c96d38f-c14e-e246-d119-c83326555f45@gmail.com> The Block Storage API v2 was deprecated in the Pike release by Change-Id: I913c44799cddc37c3342729ec0ef34068db5b2d4 The Cinder team will be removing it during the Wallaby development cycle. At the Wallaby PTG, the question came up about Block Storage API v2 support in the python-cinderclient. We looked at some proposals [0] at today's Cinder meeting and plan to make a decision at next week's meeting [1]. If you have strong feelings about this issue, please make them known on the etherpad [0], the ML, or by attending the next Cinder meeting (1400 UTC in #openstack-meeting-alt on Wednesday 11 November). [0] https://etherpad.opendev.org/p/python-cinderclient-v2-support-removal [1] https://etherpad.opendev.org/p/cinder-wallaby-meetings From radoslaw.piliszek at gmail.com Wed Nov 4 15:54:48 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 4 Nov 2020 16:54:48 +0100 Subject: [ptl] Victoria Release Community Meeting In-Reply-To: <1604353327.602612275@apps.rackspace.com> References: <1602712314.02751333@apps.rackspace.com> <1603213221.77557521@apps.rackspace.com> <20201021082822.5cocsx4zhhxs7n3q@p1.internet.domowy> <1604353327.602612275@apps.rackspace.com> Message-ID: Hi Helena, I will deliver the slides and the video for Masakari. Kind regards, -yoctozepto On Mon, Nov 2, 2020 at 11:00 PM helena at openstack.org wrote: > > Hello PTLs, > > > > The community meeting for the Victoria release will be November 12th at 16:00 UTC. Please let me know if there is a conflict with this time/date. > > > > I have attached a template for the slides that you may use if you wish. The video should be around 10 minutes. Please send in your video and slide for the community meeting EOD Friday, November 6. > > > > The community meeting will consist of the pre-recorded videos followed by a live Q&A session. > > > > Right now, we only have 4 PTLs signed up and we would welcome more folks. Please let me know if you are interested in participating. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > > -----Original Message----- > From: "Slawek Kaplonski" > Sent: Wednesday, October 21, 2020 4:28am > To: "helena at openstack.org" > Cc: "Goutham Pacha Ravi" , "Kendall Nelson" , "OpenStack Discuss" > Subject: Re: [ptl] Victoria Release Community Meeting > > Hi, > > Count me in for the Neutron project. 
> > On Tue, Oct 20, 2020 at 01:00:21PM -0400, helena at openstack.org wrote: > > Hi Goutham and other interested PTLs, > > > > > > > > A date has not yet been set for the community meeting, but we will confirm one > > after the PTG. A week or two after PTG is when it would be a good time to have > > slides back to us; I have attached a template for the slides that you may use > > if you wish. The video should be around 10 minutes. > > > > > > > > Let me know if you have any other questions! > > > > > > > > Thank you for participating, > > > > Helena > > > > > > > > > > > > -----Original Message----- > > From: "Goutham Pacha Ravi" > > Sent: Wednesday, October 14, 2020 7:08pm > > To: "Kendall Nelson" > > Cc: "helena at openstack.org" , "OpenStack Discuss" > > > > Subject: Re: [ptl] Victoria Release Community Meeting > > > > > > Helena / Kendall, > > Interested! > > A couple of follow up questions: Should we adhere to a time limit / slide > > format? Is a date set for the community meeting? When are these recordings due? > > Thanks, > > Goutham > > > > > > On Wed, Oct 14, 2020 at 3:08 PM Kendall Nelson wrote: > > > > Hello! > > You can think of these pre-recorded snippets as Project Updates since we > > aren't doing a summit for each release anymore. We hope to get them higher > > visibility by having them recorded and posted in the project navigator. > > Hope this helps :) > > -Kendall Nelson (diablo_rojo) > > > > On Wed, Oct 14, 2020 at 2:52 PM helena at openstack.org > > wrote: > > > > > > Hi Everyone, > > > > > > > > We are looking to do a community meeting following The Open > > Infrastructure Summit and PTG to discuss the Victoria Release. If > > you’re a PTL, please let me know if you’re interested in doing a > > prerecorded explanation of your project’s key features for the release. > > We will show a compilation of these recordings at the community meeting > > and follow it with a live Q&A session. Post community meeting we will > > have this recording live in the project navigator. > > > > > > > > Cheers, > > > > Helena > > > > > > > > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > From anusuya at nidrive.jp Wed Nov 4 01:41:18 2020 From: anusuya at nidrive.jp (ANUSUYA MANI) Date: Wed, 4 Nov 2020 10:41:18 +0900 Subject: Openstack - File system changing to read-only mode Message-ID: Openstack - File system changing to read-only mode after 1 or 2 days of creating the instance. Not able to access any files after that. I am using Ubuntu 18.04 instances. I have installed openstack on my server which is also ubuntu 18.04. I have enough disk space. After this issues, when i try to remount , i am not able to do. what is the cause of this kind of issue and how to resolve this. Kindly help me in resolving this. Attaching logs here. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: dmesg-log.JPG Type: image/jpeg Size: 200399 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log1.JPG Type: image/jpeg Size: 317010 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: log2.JPG Type: image/jpeg Size: 278315 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: syslog-error&failed.JPG Type: image/jpeg Size: 462523 bytes Desc: not available URL: From anusuya at nidrive.jp Wed Nov 4 01:43:36 2020 From: anusuya at nidrive.jp (ANUSUYA MANI) Date: Wed, 4 Nov 2020 10:43:36 +0900 Subject: In Openstack (devstack) i am able to create maximum of 3 volumes only Message-ID: I have 3 volumes of 80GB each and when i try to create a volume snapshot or a new volume i am not able to do that. i get the error 'schedule allocate volume:Could not find any available weighted backend'. The volume quotas are set properly yet i am not able to create more than 3 volumes. i have attached screenshots of the volume quotas and error message. Kindly requesting to help me solve this issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: volumequota.JPG Type: image/jpeg Size: 94630 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: volumeerror.JPG Type: image/jpeg Size: 80627 bytes Desc: not available URL: From marios at redhat.com Wed Nov 4 16:47:34 2020 From: marios at redhat.com (Marios Andreou) Date: Wed, 4 Nov 2020 18:47:34 +0200 Subject: [tripleo] next meeting Tuesday Nov 10 @ 1400 UTC in #tripleo Message-ID: Hello tripleo as discussed at PTG - we voted [1] to have IRC meetings every 2 weeks and next week will be 2 weeks since PTG :) Keeping the same day and time our next meeting will be ** Tuesday 10th November at 1400 UTC in #tripleo. ** Our regular agenda is at https://wiki.openstack.org/wiki/Meetings/TripleO Please add items you want to raise at https://etherpad.opendev.org/p/tripleo-meeting-items . There were concerns voiced at PTG about low attendance in these meetings but at the same time quite a few folks voted to have it. So, let's try again and as with all the things we will re-evaluate based on all our feedback, Can I get a o/ if you plan on attending these meetings? Or even if you can't but plan to read the meetbot logs ;) Obviously now would be a good time to speak up if you hate tuesdays at 1400 UTC and want to propose some alternative for us to consider, thanks, marios [1] https://etherpad.opendev.org/p/tripleo-ptg-wallaby-community-process -------------- next part -------------- An HTML attachment was scrubbed... URL: From sethp at gurukuli.co.uk Wed Nov 4 16:54:21 2020 From: sethp at gurukuli.co.uk (Seth Tunstall) Date: Wed, 4 Nov 2020 16:54:21 +0000 Subject: [placement] Train upgrade warning In-Reply-To: <92da9dd6-d84f-efa5-ab8e-fc8124548b89@gmail.com> References: <54217759-eed4-5330-8b55-735ab622074c@gmail.com> <1603620657403.92138@univ-lyon1.fr> <272037d7-48f7-2f10-d7fb-cd1cc7b71e87@gmail.com> <79e6ffd0-83d1-f1ab-b0fa-6a4e8fc9a93c@gmail.com> <2d6ee04a-cff7-c3ad-4a1f-221c03dc0ef3@gurukuli.co.uk> <92da9dd6-d84f-efa5-ab8e-fc8124548b89@gmail.com> Message-ID: <26c491ef-2473-00d1-8b4a-7386758ea7ec@gurukuli.co.uk> Hello, In case it helps anyone else searching for this in future: Melanie's suggestion to clean out the orphaned consumers worked perfectly in my situation. The last two I had were apparently left over from the original build of this environment. 
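(A note for anyone repeating this: before deleting anything, it may be worth previewing which consumer records are actually orphaned. A minimal check, assuming the same nova_api schema used in the statements that follow:)

-- Preview only: consumers that no longer have any allocations.
SELECT uuid
FROM nova_api.consumers
WHERE uuid NOT IN (SELECT consumer_id FROM nova_api.allocations);

If that query returns nothing, the DELETE statements below will not touch any rows.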
I brute-force cleaned them out of the DB manually: DELETE FROM nova_cell0.block_device_mapping WHERE nova_cell0.block_device_mapping.instance_uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); DELETE FROM nova_cell0.instance_faults WHERE nova_cell0.instance_faults.instance_uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); DELETE FROM nova_cell0.instance_extra WHERE nova_cell0.instance_extra.instance_uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); DELETE FROM nova_cell0.instance_info_caches WHERE nova_cell0.instance_info_caches.instance_uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); DELETE FROM nova_cell0.instance_system_metadata WHERE nova_cell0.instance_system_metadata.instance_uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); DELETE FROM nova_cell0.instances WHERE nova_cell0.instances.uuid IN (SELECT uuid FROM nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT nova_api.allocations.consumer_id FROM nova_api.allocations)); Caveat: I am not intimately familiar with how the ORM handles these DB tables, I may have done something stupid here. I tried to run: nova-manage db archive_deleted_rows --verbose --until-complete --all-cells but nova-db-manage complained that it didn't recognise --no-cells Thanks very much for your help, Melanie Seth On 30/10/2020 16:50, melanie witt wrote: > On 10/30/20 01:37, Seth Tunstall wrote: >> Hello, >> >> On 10/28/20 12:01, melanie witt wrote: >>  >> The main idea of the row deletions is to delete "orphan" records >> which are records tied to an instance's lifecycle when that instance >> no longer exists. Going forward, nova will delete these records itself >> at instance deletion time but did not in the past because of bugs, and >> any records generated before a bug was fixed will become orphaned once >> the associated instance is deleted. >> >> I've done the following in this order: >> >> nova-manage api_db sync >> >> nova-manage db sync >> >> (to bring the DBs up to the version I'm upgrading to (Train) >> >> nova-manage db archive_deleted_rows --verbose --until-complete > > The thing I notice here ^ is that you didn't (but should) use > --all-cells to also clean up based on the nova_cell0 database (where > instances that failed scheduling go). If you've ever had an instance go > into ERROR state for failing the scheduling step and you deleted it, its > nova_api.instance_mappings record would be a candidate for being > archived (removed). > > > >> # placement-status upgrade check >> +-----------------------------------------------------------------------+ >> | Upgrade Check Results | >> +-----------------------------------------------------------------------+ >> | Check: Missing Root Provider IDs | >> | Result: Success | >> | Details: None | >> +-----------------------------------------------------------------------+ >> | Check: Incomplete Consumers | >> | Result: Warning | >> | Details: There are -2 incomplete consumers table records for existing | >> | allocations. Run the "placement-manage db | >> | online_data_migrations" command. 
| >> +-----------------------------------------------------------------------+ >> >> argh! again a negative number! But at least it's only 2, which is well >> within the realm of manual fixes. > > The only theory I have for how this occurred is you have 2 consumers > that are orphaned due to missing the nova_cell0 during database > archiving ... Like if you have a couple of deleted instances in > nova_cell0 and thus still have nova_api.instance_mappings and without > --all-cells those instance_mappings didn't get removed and so affected > the manual cleanup query you ran (presence of instance_mappings > prevented deletion of 2 orphaned consumers). > > If that's not it, then I'm afraid I don't have any other ideas at the > moment. > > -melanie
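A side note for anyone who finds this thread later: before running destructive DELETEs like the ones above, it can help to count the orphaned consumer rows first with a read-only query built on the same subquery the DELETE statements use. This is only a sketch run against the nova_api database (adjust credentials and connection options for your deployment), not an official tool, and as noted below 'nova-manage db archive_deleted_rows' remains the supported cleanup path:

mysql nova_api -e "SELECT COUNT(*) FROM consumers WHERE uuid NOT IN (SELECT consumer_id FROM allocations);"

If the count matches the number of incomplete consumers reported by 'placement-status upgrade check', the manual cleanup is at least targeting the right rows.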
From mark at stackhpc.com Wed Nov 4 17:06:30 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 4 Nov 2020 17:06:30 +0000 Subject: [kolla] Kolla klub meetings paused Message-ID: Hi, We agreed at the PTG to try rebooting the Kolla klub meetings at a different time to involve some new members. Let's pause the meetings until then.
Regards, Mark From melwittt at gmail.com Wed Nov 4 17:08:50 2020 From: melwittt at gmail.com (melanie witt) Date: Wed, 4 Nov 2020 09:08:50 -0800 Subject: [placement] Train upgrade warning In-Reply-To: References: <54217759-eed4-5330-8b55-735ab622074c@gmail.com> <1603620657403.92138@univ-lyon1.fr> <272037d7-48f7-2f10-d7fb-cd1cc7b71e87@gmail.com> <79e6ffd0-83d1-f1ab-b0fa-6a4e8fc9a93c@gmail.com> <2d6ee04a-cff7-c3ad-4a1f-221c03dc0ef3@gurukuli.co.uk> <92da9dd6-d84f-efa5-ab8e-fc8124548b89@gmail.com> Message-ID: <90557ebe-caec-bfa7-79f1-f909474235ff@gmail.com> On 11/4/20 08:54, Seth Tunstall wrote: > Hello, > > In case it helps anyone else searching for this in future: Melanie's > suggestion to clean out the orphaned consumers worked perfectly in my > situation. > > The last two I had were apparently left over from the original build of > this environment. I brute-force cleaned them out of the DB manually: > > DELETE FROM nova_cell0.block_device_mapping WHERE > nova_cell0.block_device_mapping.instance_uuid IN (SELECT uuid FROM > nova_api.consumers WHERE nova_api.consumers.uuid NOT IN (SELECT > nova_api.allocations.consumer_id FROM nova_api.allocations)); > Caveat: I am not intimately familiar with how the ORM handles these DB > tables, I may have done something stupid here. Hm, sorry, this isn't what I was suggesting you do ... I was making a guess that you might have instances with 'deleted' != 0 in your nova_cell0 database and that if so, they needed to be archived using 'nova-manage db archive_deleted_rows' and then that might take care of removing their corresponding nova_api.instance_mappings which would make the manual cleanup find more rows (the rows that were being complained about). What you did is "OK" (not harmful) if the nova_cell0.instances records associated with those records were 'deleted' column != 0. But there's likely more cruft rows left behind that will never be removed. nova-manage db archive_deleted_rows should be used whenever possible because it knows how to remove all the things. > I tried to run: > > nova-manage db archive_deleted_rows --verbose --until-complete --all-cells > > but nova-db-manage complained that it didn't recognise --no-cells This is with the train code? --all-cells was added in train [1]. If you are running with code prior to train, you have to pass a nova config file to the nova-manage command that has its [api_database]connection set to the nova_api database connection url and the [database]connection set to the nova_cell0 database. Example: nova-manage --config-file db archive_deleted_rows ... Cheers, -melanie [1] https://docs.openstack.org/nova/train/cli/nova-manage.html#nova-database From piotrmisiak1984 at gmail.com Wed Nov 4 17:45:01 2020 From: piotrmisiak1984 at gmail.com (Piotr Misiak) Date: Wed, 4 Nov 2020 18:45:01 +0100 Subject: [neutron][ovn] Neutron <-> OVN DB synchronization code Message-ID: Hi, Do you know why Neutron <-> OVN DB sync code synchronizes "DEVICE_OWNER_ROUTER_HA_INTF" router ports here in this line: https://github.com/openstack/neutron/blob/164f12349fd8be09b9fd4f23b8cf8d2f3eccd11b/neutron/plugins/ml2/drivers/ovn/mech_driver/ovsdb/ovn_db_sync.py#L460 ? As far as know there is no need for HA network ports for L3 virtual routers implementation in OVN, because OVN uses the internal BFD mechanism for Gateway Chassis reachability tests. Thanks, Piotr -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: pEpkey.asc Type: application/pgp-keys Size: 2464 bytes Desc: not available URL: From vladimir.blando at gmail.com Wed Nov 4 18:37:20 2020 From: vladimir.blando at gmail.com (vladimir franciz blando) Date: Wed, 4 Nov 2020 12:37:20 -0600 Subject: Editing libvirt xml file In-Reply-To: References: Message-ID: Already found a solution which entails hacking nova database, specifically the instance_system_metadata table Regards Vlad On Fri, Oct 30, 2020 at 4:57 PM vladimir franciz blando < vladimir.blando at gmail.com> wrote: > Yes, I am fully aware of support implications if you edit this. But > fortunately this is not in any way a vendor support OpenStack deployment > > On Fri, Oct 30, 2020 at 1:29 PM Sean Mooney wrote: > >> On Fri, 2020-10-30 at 17:23 +0000, Stephen Finucane wrote: >> > On Fri, 2020-10-30 at 11:56 -0500, vladimir franciz blando wrote: >> > > Hi, >> > > I edited the libvirt xml (virsh edit...) file of a running Windows >> > > server instance because I changed the video model type from cirrus to >> > > vmvga so it can support a higher resolution on a console (cirrus >> > > supports up to 1280 only). >> > > After editing, i "soft-rebooted" the instance and the new >> > > configuration sticks. That's awesome. >> > >> > Please don't do this :) You shouldn't change things about an instance >> > behind nova's back. It breaks resource tracking and can cause all sorts >> > of horribleness. >> >> if you have support form an openstack vendor like redhat editing the >> domain >> xml also make that vm unsupported until its restroed to the way nova >> generated. >> it might also make other issue caused as a resulted unsupported. >> so unless you are self supporting you cloud be very carful with doing >> this. >> > >> > > But when you do a "hard-reboot" it will revert back to the original >> > > config. >> > >> > Hard reboot builds configuration from scratch, dumping the existing >> > XML. This is built from the same things used when you first create the >> > instance (flavor + extra specs, image + metadata, host-level config, >> > ...). >> > >> > What you want is the 'hw_video_model' image metadata property. Set this >> > and rebuild your instance: >> > >> > openstack image set --property hw_video_model=vmvga $IMAGE >> this is the supported way to change this value ^ >> it will take effect for new instances. >> > openstack server rebuild $SERVER $IMAGE >> and this is the only supported api action that can also change this value. >> however i dont think rebuilding to the same image will update the image >> metadata. >> you currenlty need to rebuild to a new image. this is due to an >> optiomisation we >> have to avoid needing to go to the scheuler if we use the same image. >> >> the unsupported way to change this without rebuilding the instance is to >> add >> img_hw_video_model=vmvga to the system metadata table for that instance. >> >> if your cloud is supported by a vendor this will also invalidate your >> suport >> in all likely hood as db modificaiton generally are unsupported without >> an exception. >> if you can rebuild the guest that is the best approch but it wont work >> for boot form volume >> instances and it willl destory all data in the root disk of all other >> instances. >> >> > >> > Note that rebuild is a destructive operation so be sure you're aware of >> > what this is doing before you do it. >> > >> > Stephen >> > >> > > I also tried to do a "shut-down" then "start" again, but it still >> > > reads the original config. 
Not sure where it is reading the config >> > > from... >> > > >> > > Regards >> > > Vlad >> > >> > >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Nov 4 20:12:30 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 4 Nov 2020 21:12:30 +0100 Subject: [masakari] Teams cleanup Message-ID: Hello Everyone! To avoid the ghost town effect in Masakari teams, I propose to clean up the memberships. I've already had a chance to discuss it with Takashi Kajinami on IRC [0] as he wanted to remove himself. We have 3 teams (1 gerrit, 2 launchpad): - Gerrit masakari-core [1] - Launchpad masakari-drivers [2] - Launchpad masakari-bugs [3] I plan to set these to the following people: - myself [Radosław Piliszek] (the new PTL) - suzhengwei (new active contributor and core) - Jegor van Opdorp (new active contributor and core) - Sampath Priyankara (the previous PTL) - Tushar Patil (the previously most active core) I will enact the change in a week from now if there are no objections. I also see a few issues likely to discuss with the infra team: 1) Launchpad masakari-drivers - there is a pending invitation for the release team - I doubt it has much use but I would love to see it accepted/rejected/accepted&removed. 2) Gerrit masakari-core - there is an external CI in the core team - why would it need this level of privilege? It does not add its flags to changes, only replies with job status. [0] http://eavesdrop.openstack.org/irclogs/%23openstack-masakari/%23openstack-masakari.2020-10-21.log.html#t2020-10-21T12:42:14 [1] masakari-core: https://review.opendev.org/#/admin/groups/1448,members [2] https://launchpad.net/~masakari-drivers/+members [3] https://launchpad.net/~masakari-bugs/+members Kind regards, -yoctozepto From melwittt at gmail.com Wed Nov 4 23:30:20 2020 From: melwittt at gmail.com (melanie witt) Date: Wed, 4 Nov 2020 15:30:20 -0800 Subject: [gate][all] assorted tempest jobs fail when IPv4 default route is not available Message-ID: <772eb358-2eb0-3bfb-cfd5-a9c8df333b1e@gmail.com> Hi all, We have a gate bug [1] where if a CI VM does not have an IPv4 default route configured, the job will fail with one of the following errors: "[ERROR] /opt/stack/devstack/functions-common:230 Failure retrieving default route device" [2] or "[ERROR] /opt/stack/devstack/functions-common:104 Failure retrieving default IPv4 route devices" [3] because the die_if_not_set is being triggered. Most of the time, CI VMs come configured with an IPv4 default route but something changed recently (around Oct 23 or 24) to where we often get VMs that do not have an IPv4 default route configured and presumably only have an IPv6 default route. According to this logstash query [4] we have: 58 hits in the last 7 days, check and gate, all failures, all on the limestone-regionone node provider, various projects, various jobs Nate Johnson has proposed a fix here: https://review.opendev.org/761178 Reviews appreciated. 
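(For anyone triaging a similar failure on a held node: the symptom can be confirmed with plain iproute2, independent of the devstack functions. This is a rough check, not the exact logic that functions-common uses:

ip -4 route show default    # prints nothing when the node has no IPv4 default route
ip -6 route show default    # on the affected nodes this is presumably the only default route present

If the first command prints nothing while the second shows a route, the node matches the failure mode described above.)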
Cheers, -melanie [1] https://bugs.launchpad.net/devstack/+bug/1902002 [2] example: https://zuul.openstack.org/build/7543021732cc4886b608553da58e81f9/log/controller/logs/devstacklog.txt#134 [3] example: https://zuul.opendev.org/t/openstack/build/7fff839dd0bc4214ba6470ef40cf78fd/log/job-output.txt#1875 [4] http://logstash.openstack.org/#/dashboard/file/logstash.json?query=message:%5C%22failure%20retrieving%20default%5C%22%20AND%20tags:%5C%22devstacklog.txt%5C%22%20AND%20message:%5C%22ERROR%5C%22&from=7d From gouthampravi at gmail.com Wed Nov 4 23:54:57 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 4 Nov 2020 15:54:57 -0800 Subject: [manila][ptg] Manila Wallaby PTG Summary Message-ID: Hello Zorillas and all other interested Stackers! The OpenStack Manila Project Technical Gathering for the Wallaby development cycle was a successful meeting of project developers, users, operators and contributors to the rest of OpenStack, Kubernetes, Ceph and other communities - simply all zorillas and friends! I would like to thank the PTG organizing committee, (the Kendalls! :D) the infrastructure team, and everyone that contributed to the discussions. The following is my summary of the proceedings. I've linked associated etherpads, and resources to dig in further, and chat with us here, or on freenode's #openstack-manila! == Day 1 - Oct 26, 2020, Mon == === Manila Interop testing === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila-interop Discussion Items: - OSF Foundation Board and Interop Working Group members were apprised about Manila, its capabilities and problem space - Manila contributors answered questions regarding user workflows, tempest integration, vendor coverage and project maturity - Manila's addition to the Interop refstack suite will allow: * OpenStack clouds and distributions to certify their manila APIs for the "OpenStack Powered" trademark program, as well as, * OpenStack Manila third party integrations can claim the "OpenStack Compatible" label - A plan was charted to target an upcoming interop advisory with manila as an "Add-On" component - Interop team presented their idea for E2E testing for running container workloads on top of openstack Actions: - Vida Haririan (vhari), Liron Kuchlani (lkuchlan) and Goutham Pacha Ravi (gouthamr) will create the interop test plan for Manila - Test plan target is the upcoming 2020.10 advisory === Manila Retrospective === Topic Etherpad: https://etherpad.opendev.org/p/manila-victoria-retrospective Discussion Items: - Victoria Cycle Bug/Documentation Squash days were a big hit. 45% of the bug backlog was addressed with these squashes - thanks vhari/vkmc! 
- Team's doing a great job of farming low-hanging-fruit; but resolution is taking longer, we should timebox fixes for Medium prio low-hanging-fruit; high prio bugs cannot be tagged "low-hanging-fruit" - Team recognized the mentorship eagerness, and the opportunity providers in Victoria/Wallaby - Grace Hopper Celebration, Outreachy, Boston University Project and other venues - We couldn't complete the "migrate to focal fossa goal" - team was short handed at the end of the release, thanks to the goal champion, Ghanshyam Mann for ensuring we can make decent progress and target completion in wallaby - New TC tags (kubernetes-in-virt, assert:supports-accessible-upgrade, erstwhile tc:approved-release) were great, more to come in Wallaby (vulnerability-managed, supports-api-interoperability) - A GREAT DEAL of work was done on manila-tempest-plugin, and the team is thankful, and is sure users are appreciative of the focus we have on CI. - Great PTL leadership - but there's room to grow :D - Collaborative review sessions are a hit. Let's keep doing them! Actions: - We'll have a bug squash event closer to Wallaby-1 for bigger bugs that require API/DB changes as identified in the last Cycle (vkmc/vhari) - We'll have Bug/Documentation Squash after Wallaby-3 as well - Work on TC tags by Wallaby-1 - Contributors can actively seek Collaborative Review sessions for the work they're doing - CI flakiness needs to be audited and recurring test failures will be identified (dviroel) and discussed at an upcoming IRC meeting - Backend Feature Support Matrix will be improved and doc bugs will be filed for other issues identified (dviroel) === Cephadm, Triple-O and NFS-Ganesha === Topic Etherpad: https://etherpad.opendev.org/p/tripleo-ceph Discussion Items: - fultonj/gfidente/fpantano presented three specs to chart the future of ceph deployment with tripleo. The ceph community will no longer support ceph-ansible as a deployment tool. - particularly interesting to the manila team was the fate of nfs-ganesha ("ceph-nfs") which doesn't have adequate support in ceph's new deployment tooling (cephadm and ceph orch) to cover tripleo's requirements of HA, or allow deployment flexibility like ceph-ansible used to (ceph-ansible would allow deploying nfs-ganesha as a standalone service on tripleo hosts) - the proposal was to retain the portions of ceph-ansible that were used for nfs-ganesha in tripleo while replacing the use of ceph-ansible with cephadm/ceph orch. 
- tripleo contributors were concerned about the future supportability of taking over the portion of the setup that ceph ansible does for tripleo today; they would be more comfortable if cephadm can be fixed in time for wallaby, or the portions of the code taken from ceph-ansible be in an independent repo Actions: - pursue cephadm/orch support for nfs-ganesha with ceph-orchestration team and understand timeline and upgrade implications when substituting the work - advance the work via specs == Day 2 - Oct 27, 2020, Tue == === Sub teams === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila-subteams Discussion Items: - gouthamr/vkmc proposed that client repos (python-manilaclient, manila-ui) and tempest (manila-tempest-plugin) have their own core teams - manila-core can be part of the initial core teams of those repos - the idea is to create focussed subteams that can to some degree of autonomy, govern these projects with the domain knowledge they have, have independent meetings and bug triages if they wish and dictate a roadmap - this way, review bandwidth can be adequately shared - the sub teams will allow manila core team mentoring efforts and provide an easier on ramp for new maintainers - team agreed that this is a good way forward, and acknowledged that they would not want to create more maintenance burden or create silos. Actions: - get https://review.opendev.org/758868/ merged - to avoid siloing, weekly IRC meeting will call upon subteams to share updates - focussing on a sub project doesn't mean we don't pay attention to other repos - attempt to record mentoring sessions and post it publicly for posterity === Extending shares via the Manila scheduler === Topic Etherpad: https://etherpad.opendev.org/p/share-stuck-in-extend-discussion Discussion Items: - manila share extension currently doesn't go through the scheduler - they directly hit the share service/host responsible for the share ( https://bugs.launchpad.net/manila/+bug/1855391) - this needs to be fixed for two reasons - a) capacity based calculations are off if scheduler is ignored, b) service health checks are not accounted for - there was a concern this change would break administrators that rely on this wrong behavior as storage pools fill up - this was called out as a corner case, and instead of having a global way to opt-in/out of using the scheduler for share extensions, we can build a "force" flag for privileged users that will preserve existing behavior Actions: - haixin will propose changes to the patch based on this feedback === Virtio-FS === Topic Etherpad: https://etherpad.opendev.org/p/nova-manila-virtio-fs-support-vptg-october-2020 Discussion Items: - virtiofs support has debuted in newer linux kernels and the libvirt/qemu/kvm communities have adopted it - in OpenStack, Nova can be made to use virtio-fs and expose Manila shares to tenant VMs - this provides a user experience improvement for manila users where mount automation has been a concern for a while, along with the desire to have strong multi-tenant separation on the network path which only a subset of manila backends are able to provide directly - virtiofs support does not replace DHSS=True or use of gateways like NFS-Ganesha (e.g.: CephFS, GPFS, etc) since those are still very much valuable when the consumer isn't nova VMs but baremetals, containers or non-OpenStack VMs. 
- nova team helped understand the feature submission process and provided guidance on how to integrate this feature in stages - the user experience, data models and cross service RPCs for this can be modeled around block device mapping, however we don't need to share or copy the implementation - this is surely a large multi-cycle work - nova team suggested that we begin with only attach/detach workflows in Wallaby Actions: - tbarron will propose a nova spec to allow "nova attach fs" (and detach) workflows - we'll continue to design "boot with share" or "boot from share" APIs, including scheduler changes through this cycle === Share Server Updates === Topic Etherpad: https://etherpad.opendev.org/p/share-server-update Discussion Items: - objective is to be able to update security services and share network subnets after share servers have been provisioned - users that want to make these updates have no access to share servers - a security service update can impact multiple share networks (and hence multiple share servers) and in turn multiple share resources, so there was a concern with granularity of the update - operators don't want to be involved with the updates, want users to be able to make the updates on resources they can see (security services or share networks) - if we allow end users to make these changes, we'll still need to break down the operation into more granular bits, and provide APIs to perform validation and multi-step updates Actions: - dviroel will propose spec updates based on the feedback gathered here: https://review.opendev.org/729292 - need other DHSS=True driver maintainers to pay attention to these changes === Share Affinity Hints === Topic Etherpad: https://etherpad.opendev.org/p/manila-anti-affinity Discussion Items: - need a way to specify affinity or anti-affinity placement hints wrt existing resources (e.g: shares, share groups) - share groups provide a way to indicate affinity, but they may also encase shares into a construct and make individual operations on shares harder (you cannot add existing shares to a group or remove shares from a share group without deleting them, you can only create new members in a share group) - need a vehicle other than groups or share type extra specs to carry scheduling decisions that affect single shares - this mechanism shouldn't break cloud storage abstraction - cinder has had the ability for end users to add placement hints for affinity without breaking abstractions Actions: - carthaca will propose a specification for this work - we'll seek reviews from cinder contributors on the spec == Day 3 - Oct 28, 2020, Wed == === Responsibility for Code Reviews === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila Discussion Items: - a bulk of the day-2 operations that are being enhanced have significant impact on vendor driver implementations - we do have first party "reference" driver support for testing and designing manila APIs in a generic fashion but, not having vendor driver maintainer feedback is unfortunate - operator feedback is also very much appreciated, and we've made significant decisions because of it Actions: - proposers for new features that affect drivers must actively seek out vendor driver maintainers and operators and add them to review. 
The team will help identify these individuals - we'll track vendor/operator feedback and reviews as part of the review focus tracking we do through the cycle === Acting on Deprecation removals === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila Discussion Items: - tbarron proposed https://review.opendev.org/745206/ to remove old deprecated config options from manila - we should be doing this more actively each cycle, however, we held off from merging the code change above considering the impact to configuration and deployment tooling - driver module maintainers must adjust the deployment tooling/other usages accordingly asap Actions: - dviroel, carloss and gouthamr will review and merge https://review.opendev.org/745206/ === rootwrap-to-privsep migration === Topic Etherpad: https://etherpad.opendev.org/p/manila-migration-from-rootwrap-to-privsep Discussion Items: - this has been selected as a community goal for Wallaby - privsep has not been used in manila so far - most first party and reference drivers (lvm, zfsonlinux, generic, container, ceph) use rootwrap heavily - some third party drivers also use rootwrap - team was concerned with the completion criteria wrt third party drivers - a stretch goal would be to investigate if dropping privileges for operations by substituting mechanisms is possible Actions: - carloss will work on this goal, no specification is necessary - adequate testing, admin/operator documentation will be provided - carloss will reach out to the third party driver maintainers to act on rootwrap removals - carloss/team will attack de-escalation/removing use of root privs to perform an operation, opportunistically === Secure RBAC improvements === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila-policy-changes Discussion Items: - manila will need to support splitting of the existing admin, member roles to system, domain and project admins and members - operators can define personas with these default roles using keystone's "role scope" - an accompanying feature is to provide "reader" role permissions as well - manila accepts the use of "policy.yaml" to drive RBAC, over "policy.json", but, the default is still "policy.json" and this needs to change due to several challenges, including the ability to have comments in the files Actions: - gouthamr will propose a spec for policy refresh across manila APIs/resources using these new role personas - making policy.yaml the default for policy overrides is going to be a community goal. We're targeting this transition to wallaby-1 in manila - we'll attempt to complete this effort in wallaby. there are several parts, collaboration is welcome! === CephFS Updates === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila-cephfs-driver-update Discussion Items: - vkmc provided an update on the cephfs feature roadmap - we need changes in cephfs (preferably in ceph-nautilus) to perform authorization, subvolume cloning, capacity reporting etc. 
This work is being tracked through tracker tickets in Ceph's tracking system - we stopped using shaman builds for ceph and nfs-ganesha in victoria cycle - consuming nfs-ganesha from ppa's has introduced new scenario test failures, vkmc is currently triaging them Actions: - start testing ceph octopus in the wallaby cycle - keep momentum on the ceph trackers and switch over from ceph-volume-client to ceph-mgr and implement create share from snapshot in the cephfs drivers - engage with ceph and nfs-ganesha communities to create a support matrix across nfs-ganesha versions, manila and cephfs === Divisive Language Correction === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila Discussion Items: - spotz joined us, and recapped the diversity and inclusion wg's work so far - manila team is determined to help resolve any divisive language and references in code and documentation - the team realizes that unconscious use of negative language is affecting the community's inherent desire to be welcoming and inclusive - no manila architecture or code references currently *need* the use of the divisive words identified by the working group and the wider openstack community - making changes is important to us, but we recognize it's just part of the story Actions: - we'll begin with a documentation change to remove the words identified in https://etherpad.opendev.org/p/divisivelanguage - we'll wait to coordinate a change of name for the development/trunk branch, and other "upstream" changes to databases, git, opendev - team actively welcomes any feedback == Day 5 - Oct 30, 2020, Fri == === Share and Server Migration graduation === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila Discussion Items: - we've graduated one major feature through the last couple of cycles, share migration is next - team agreed to keep share server migration as experimental APIs since they haven't had sufficient soak time - no significant changes have been made to share migration migration APIs since Ocata, leading to believe the APIs are stable and can be improved via microversions where necessary - not a lot of vendor driver implementations for driver optimized migration, NetApp driver only allows moving within a backend at the moment - generic migration improvements have been deprioritized, and need help in terms of High Availability of the Data service as well as providing a jobs table to track, and coordinate ongoing migrations - tests are being skipped at the gate because of flakiness Actions: - team will get in touch with third party vendors and guage interest in implementing driver optimized migration (or testing/supporting generic migration with their drivers) (carloss). Feedback from the vendors will drive our decision to graduate the share migration APIs in Wallaby. 
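As background on what "experimental" means for the migration APIs discussed above: a client currently has to opt in explicitly with the experimental header on top of the microversion header, roughly like this (the endpoint, microversion and request body below are an illustrative sketch, not taken from the PTG notes):

curl -X POST "$MANILA_ENDPOINT/shares/$SHARE_ID/action" \
  -H "X-Auth-Token: $TOKEN" \
  -H "Content-Type: application/json" \
  -H "X-OpenStack-Manila-API-Version: 2.29" \
  -H "X-OpenStack-Manila-API-Experimental: True" \
  -d '{"migration_start": {"host": "host2@backend#pool", "writable": false, "nondisruptive": false, "preserve_metadata": false, "preserve_snapshots": false}}'

Graduating the API would, from a client's point of view, mostly mean dropping the need for the experimental header.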
=== Metadata for all share resources === Topic Etherpad: https://etherpad.opendev.org/p/wallaby-ptg-manila-metadata-controller Discussion Items: - the proposal is to introduce a metadata API controller that can be inherited by the resource controllers - all user facing resources will have metadata capabilities, with consistent API methods - this is expected to benefit automation use cases, manila-csi and the upcoming virtio-fs effort Actions: - gouthamr will propose the spec for team's review, including identifying the resources that will be targeted in the Wallaby release - share metadata APIs will be improved to adhere to the API SIG spec: https://specs.openstack.org/openstack/api-sig/guidelines/metadata.html === Client Project updates === Topic Etherpads: https://etherpad.opendev.org/p/manila-osc-wallaby-ptg and https://etherpad.opendev.org/p/manila-ui-wallaby-ptg Discussion Items: - the team met with Nicole Chen, Ashley Rodriguez, Mark Tony from Boston University who will be working on the OpenStackSDK integration for Manila APIs - maaritamm provided an update on the OSC plugin interface implementation - vkmc provided an update on manila-ui feature gaps and the upcoming Outreachy internship Actions: - for some OSC plugin command implementations, we'll benefit from integrating with OpenStackSDK rather than manilaclient SDK, so we'll coordinate that work - maaritamm/vkmc will send an email to openstack-discuss to prioritize the commands left to implement in the OSC plugin - vkmc will drive consistency across cinder/manila for the user messages dashboards that were added in Victoria - we'll continue to work on closing the feature gap in manila-ui === Manila CSI Wallaby roadmap === Topic Document: https://docs.google.com/document/u/1/d/1YJcv0EROWzYbV_8DM3DHVvDqdhlPhYRV8Npl7latttY/edit# Discussion Items: - users of manila-csi today deploy one driver per share protocol - future work on monitoring, common node plugin interactions requires us to rework the architecture to have one driver multiplex with multiple share protocol node plugins - upgrade impact as well as UI/UX impact was discussed and gman0 has documented them - share metadata is necessary to tag and identify shares Actions: - gman0 has started working on the code patches, and will benefit from reviews - mfedosin will take a look at the upgrade impact and share details for automation that can be done in the csi-operator - mfedosin will test gman0's latest patch to add share metadata via the storage class (as will folks from CERN) and provide feedback Thanks for reading thus far, there were a number of items that we couldn't cover, that you can see on the PTG Planning etherpad [1], if you own those topics, please add them to the weekly IRC meeting agenda [2] and we can go over them. The master meeting minutes document is available as well [3]. As usual, the whole PTG was recorded and posted on the OpenStack Manila Youtube channel [4] We had a great turn out and we heard from a diverse set of contributors, operators, interns and users from several affiliations and time zones. On behalf of the OpenStack Manila team, I deeply appreciate your time, and help in keeping the momentum on the project! [1] https://etherpad.opendev.org/p/wallaby-ptg-manila-planning [2] https://wiki.openstack.org/wiki/Manila/Meetings [3] https://etherpad.opendev.org/p/wallaby-ptg-manila [4] https://www.youtube.com/playlist?list=PLnpzT0InFrqATBbJ60FesKGYRejnTIoCW -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurentfdumont at gmail.com Thu Nov 5 01:38:54 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Wed, 4 Nov 2020 20:38:54 -0500 Subject: Openstack - File system changing to read-only mode In-Reply-To: References: Message-ID: What is the underlying storage system for the VM? Ceph? ISCSI? Local Storage? It looks like there could have been a storage/network blip and the OS FS might have been set RO. You should be able to run an fsck during the boot and see if that removes the RO flag. On Wed, Nov 4, 2020 at 12:02 PM ANUSUYA MANI wrote: > Openstack - File system changing to read-only mode after 1 or 2 days of > creating the instance. Not able to access any files after that. I am using > Ubuntu 18.04 instances. I have installed openstack on my server which is > also ubuntu 18.04. I have enough disk space. After this issues, when i try > to remount , i am not able to do. what is the cause of this kind of issue > and how to resolve this. Kindly help me in resolving this. Attaching logs > here. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From akekane at redhat.com Thu Nov 5 08:47:48 2020 From: akekane at redhat.com (Abhishek Kekane) Date: Thu, 5 Nov 2020 14:17:48 +0530 Subject: [glance] Single store support removal Message-ID: Hi All The Single store glance support(default) was deprecated in Rocky release by Change-Id: I50189862454ada978eb401ec24a46988517ea73b The glance team will start working on removing single store support from the wallaby cycle and intended it to finish the same in wallaby or during milestone 1 of the next cycle. Below is the roadmap for removing single store configuration support from glance and glance_store repository. 1. Default Glance to configure multiple stores - Wallaby milestone 1 - https://review.opendev.org/#/c/741802/ 2. Convert single store unit and functional tests to use multiple stores - Wallaby milestone 1 3. Remove use of single store from glance - Wallaby milestone 2 4. Remove single stores support from glance_store - Wallaby milestone 3 to X milestone 1 We would like to encourage our users to start moving to multiple stores of glance if you are still using traditional single stores of glance in your deployments. Please let us know if you have any queries or suggestions about the same. Thank you, Abhishek -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Nov 5 11:01:31 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 5 Nov 2020 12:01:31 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: Message-ID: zuul at openstack.org wrote: > - release-openstack-python https://zuul.opendev.org/t/openstack/build/652843f2641e47d781264a815a14894d : POST_FAILURE in 2m 54s During the release job for ansible-role-redhat-subscription 1.1.1, twine upload to PyPI failed with: HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/ The description failed to render in the default format of reStructuredText. See https://pypi.org/help/#description-content-type for more information. It's weird because it looks like this was addressed about a year ago, and nothing significant changed in that area in the months following 1.1.0 release. Current status: Tag was pushed OK No tarball upload No PyPI upload Once the issue is fixed, the tag reference should be reenqueued. 
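(For anyone who hits the same 400 from PyPI on another deliverable: twine can catch this locally before the release job does, and the usual remedy is to declare how the long description should be rendered, e.g. long_description_content_type = text/markdown in the package metadata, or to make sure the README is valid reStructuredText. A minimal pre-upload check, assuming the sdist/wheel have already been built under dist/:

twine check dist/*

It flags the same class of rendering problem that PyPI rejects, which makes it a handy local gate before pushing a tag.)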
-- Thierry Carrez (ttx) From marios at redhat.com Thu Nov 5 12:05:03 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 5 Nov 2020 14:05:03 +0200 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: Message-ID: On Thu, Nov 5, 2020 at 1:03 PM Thierry Carrez wrote: > zuul at openstack.org wrote: > > - release-openstack-python > https://zuul.opendev.org/t/openstack/build/652843f2641e47d781264a815a14894d > : POST_FAILURE in 2m 54s > > During the release job for ansible-role-redhat-subscription 1.1.1, twine > upload to PyPI failed with: > > HTTPError: 400 Bad Request from https://upload.pypi.org/legacy/ > The description failed to render in the default format of > reStructuredText. See https://pypi.org/help/#description-content-type > for more information. > > It's weird because it looks like this was addressed about a year ago, > and nothing significant changed in that area in the months following > 1.1.0 release. > > Hi Thierry, thanks for the update on that I would have missed it if you didn't send this mail. I am a bit confused though as you mentioned 1.1.0 here do you mean this is something to do with the ansible-role-redhat-subscription repo itself? > Current status: > Tag was pushed OK > No tarball upload > No PyPI upload > > Once the issue is fixed, the tag reference should be reenqueued. > > Indeed I see the tag is pushed OK https://opendev.org/openstack/ansible-role-redhat-subscription/src/tag/1.1.1 - so with respect to the pypi upload is there something for us to investigate or are we waiting on a rerun of the job? thanks, marios > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Thu Nov 5 13:23:02 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 5 Nov 2020 14:23:02 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: Message-ID: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Marios Andreou wrote: > thanks for the update on that I would have missed it if you didn't send > this mail. This is actually in reaction to a release job failure email that I received, so that we discuss the solution on the list. There was no prior email on the topic. > I am a bit confused though as you mentioned 1.1.0 here do you > mean this is something to do with the ansible-role-redhat-subscription > repo itself? The failure is linked to the content of the repository: basically, PyPI is rejecting how the README is specified. So it will likely require a patch to ansible-role-redhat-subscription itself, in which case we'd push a new tag (1.1.2). That said, it's unclear what the problem is, as the same problem was fixed[1] a year ago already and nothing really changed since the (successful) 1.1.0 publication. We'll explore more and let you know if there is anything you can help with :) [1] https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 -- Thierry Carrez (ttx) From satish.txt at gmail.com Thu Nov 5 13:58:39 2020 From: satish.txt at gmail.com (Satish Patel) Date: Thu, 5 Nov 2020 08:58:39 -0500 Subject: ovs-dpdk bonding question Message-ID: Folks, I have a compute node with only 2x10G port in that case, how do I configure ovs-dpdk with bonding support? 
I can understand if i have more than 2 nic ports then i can make one of nic port to management but in my case i have only two. Does that mean I can't utilize bonding features, is that true? How do other folks deal with this kind of scenario? (my deployment tool is openstack-ansible) ~S From whayutin at redhat.com Thu Nov 5 14:51:12 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 5 Nov 2020 07:51:12 -0700 Subject: [tripleo][ci] ubi-8:3 has broken all centos-8 tripleo jobs Message-ID: Greetings, Bug: https://bugs.launchpad.net/tripleo/+bug/1902846 Patch: https://review.opendev.org/#/c/761463 Status: RED Thanks to Sagi and Sandeep ( ruck / rovers ) the issue was caught yesterday. Few iterations on where and how to pin this back to a working ubi-8 image for containers builds. We are confident the current patch will set things back to working. Context: https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image CentOS / RHEL ecosystem are not quite in sync yet. -------------- next part -------------- An HTML attachment was scrubbed... URL: From marios at redhat.com Thu Nov 5 15:17:26 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 5 Nov 2020 17:17:26 +0200 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Message-ID: On Thu, Nov 5, 2020 at 3:24 PM Thierry Carrez wrote: > Marios Andreou wrote: > > thanks for the update on that I would have missed it if you didn't send > > this mail. > > This is actually in reaction to a release job failure email that I > received, so that we discuss the solution on the list. There was no > prior email on the topic. > > > I am a bit confused though as you mentioned 1.1.0 here do you > > mean this is something to do with the ansible-role-redhat-subscription > > repo itself? > > The failure is linked to the content of the repository: basically, PyPI > is rejecting how the README is specified. So it will likely require a > patch to ansible-role-redhat-subscription itself, in which case we'd > push a new tag (1.1.2). > > That said, it's unclear what the problem is, as the same problem was > fixed[1] a year ago already and nothing really changed since the > (successful) 1.1.0 publication. We'll explore more and let you know if > there is anything you can help with :) > > ack! Thank you much clearer for me now. If there is an update required then no problem we can address that and update with a newer tag > [1] > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > -- > Thierry Carrez (ttx) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Thu Nov 5 15:33:04 2020 From: hberaud at redhat.com (Herve Beraud) Date: Thu, 5 Nov 2020 16:33:04 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Message-ID: I confirm the markdown issue was fixed 1 year ago [1] by myself and since 3 new versions of ansible-role-redhat-subscription have been released [2]. 
The markdown fix is embedded in the three releases: $ git tag --contains fceb51c66e 1.0.4 1.1.0 1.1.1 2 releases was successfully uploaded to pypi: - https://pypi.org/project/ansible-role-redhat-subscription/1.1.0/ - https://pypi.org/project/ansible-role-redhat-subscription/1.0.4/ I saw a couple of changes in your README and they seem to mix markdown and restructuredText [3], maybe it could explain this issue but these changes are also in previous versions: $ git tag --contains 0949f34ffb 1.0.3 1.0.4 1.1.0 1.1.1 Maybe pypa introduced more strict checks on their side since our last release... I tried to generate a PKG-INFO locally and everything seems ok. Also I didn't see any new bugs that seem related to this issue on the pypa side [4]. [1] https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 [2] https://opendev.org/openstack/releases/commits/branch/master/deliverables/_independent/ansible-role-redhat-subscription.yaml [3] https://opendev.org/openstack/ansible-role-redhat-subscription/commit/0949f34ffb10e51787e824b00f0b5ae69592cdda [4] https://github.com/search?q=org%3Apypa+The+description+failed+to+render+in+the+default+format+of+reStructuredText&type=issues Le jeu. 5 nov. 2020 à 14:26, Thierry Carrez a écrit : > Marios Andreou wrote: > > thanks for the update on that I would have missed it if you didn't send > > this mail. > > This is actually in reaction to a release job failure email that I > received, so that we discuss the solution on the list. There was no > prior email on the topic. > > > I am a bit confused though as you mentioned 1.1.0 here do you > > mean this is something to do with the ansible-role-redhat-subscription > > repo itself? > > The failure is linked to the content of the repository: basically, PyPI > is rejecting how the README is specified. So it will likely require a > patch to ansible-role-redhat-subscription itself, in which case we'd > push a new tag (1.1.2). > > That said, it's unclear what the problem is, as the same problem was > fixed[1] a year ago already and nothing really changed since the > (successful) 1.1.0 publication. We'll explore more and let you know if > there is anything you can help with :) > > [1] > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- Le jeu. 5 nov. 2020 à 16:21, Marios Andreou a écrit : > > > On Thu, Nov 5, 2020 at 3:24 PM Thierry Carrez > wrote: > >> Marios Andreou wrote: >> > thanks for the update on that I would have missed it if you didn't send >> > this mail. 
>> >> This is actually in reaction to a release job failure email that I >> received, so that we discuss the solution on the list. There was no >> prior email on the topic. >> >> > I am a bit confused though as you mentioned 1.1.0 here do you >> > mean this is something to do with the ansible-role-redhat-subscription >> > repo itself? >> >> The failure is linked to the content of the repository: basically, PyPI >> is rejecting how the README is specified. So it will likely require a >> patch to ansible-role-redhat-subscription itself, in which case we'd >> push a new tag (1.1.2). >> >> That said, it's unclear what the problem is, as the same problem was >> fixed[1] a year ago already and nothing really changed since the >> (successful) 1.1.0 publication. We'll explore more and let you know if >> there is anything you can help with :) >> >> > ack! Thank you much clearer for me now. If there is an update required > then no problem we can address that and update with a newer tag > > > > >> [1] >> >> https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 >> >> -- >> Thierry Carrez (ttx) >> >> -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pub.virtualization at gmail.com Thu Nov 5 05:28:39 2020 From: pub.virtualization at gmail.com (Henry lol) Date: Thu, 5 Nov 2020 14:28:39 +0900 Subject: Question about the instance snapshot Message-ID: Hello, everyone I'm wondering whether the snapshot from the instance saves all disks attached to the instance or only the main(=first) disk, because I can't find any clear description for it. If the latter is true, should I detach all disks except for the main from the instance before taking a snapshot, and why doesn't it support all attached disks? Thanks Sincerely, -------------- next part -------------- An HTML attachment was scrubbed... URL: From ali74.ebrahimpour at gmail.com Thu Nov 5 09:09:39 2020 From: ali74.ebrahimpour at gmail.com (Ali Ebrahimpour) Date: Thu, 5 Nov 2020 12:39:39 +0330 Subject: freezer Message-ID: hi i recently install the freezer api and i have problem with using it if run this command: freezer-agent --debug --action backup --nova-inst-id a143d11d-fa27-4716-9795-e03b4e601df4 --storage local --container /tmp/ali --backup-name test --mode nova --engine nova --no-incremental true --log-file a.log --consistency-check and freezer api doesn't work current and error with: Engine error: a bytes-like object is required, not 'str' Engine error: Forced stop Adn attach the log please help me ! 
thank you for attention -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: a.log Type: application/octet-stream Size: 90423 bytes Desc: not available URL: From ali74.ebrahimpour at gmail.com Thu Nov 5 09:12:06 2020 From: ali74.ebrahimpour at gmail.com (Ali Ebrahimpour) Date: Thu, 5 Nov 2020 12:42:06 +0330 Subject: Fwd: freezer In-Reply-To: References: Message-ID: hi i recently install the freezer api and i have problem with using it if run this command: freezer-agent --debug --action backup --nova-inst-id a143d11d-fa27-4716-9795-e03b4e601df4 --storage local --container /tmp/ali --backup-name test --mode nova --engine nova --no-incremental true --log-file a.log --consistency-check and freezer api doesn't work current and error with: Engine error: a bytes-like object is required, not 'str' Engine error: Forced stop Adn attach the log please help me ! thank you for attention -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: a.log Type: application/octet-stream Size: 90423 bytes Desc: not available URL: From whayutin at redhat.com Thu Nov 5 19:31:35 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 5 Nov 2020 12:31:35 -0700 Subject: [tripleo][ci] ubi-8:3 has broken all centos-8 tripleo jobs In-Reply-To: References: Message-ID: On Thu, Nov 5, 2020 at 7:51 AM Wesley Hayutin wrote: > Greetings, > > Bug: > https://bugs.launchpad.net/tripleo/+bug/1902846 > > Patch: > https://review.opendev.org/#/c/761463 > > Status: RED > Thanks to Sagi and Sandeep ( ruck / rovers ) the issue was caught > yesterday. Few iterations on where and how to pin this back to a working > ubi-8 image for containers builds. We are confident the current patch will > set things back to working. > > Context: > https://www.redhat.com/en/blog/introducing-red-hat-universal-base-image > CentOS / RHEL ecosystem are not quite in sync yet. > OK.. the patch has merged. We'll be monitoring the gate and see if we see anything else come up. As usual look at #tripleo for status... going to yellow. DON'T recheck the world yet -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Nov 5 20:35:47 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 05 Nov 2020 21:35:47 +0100 Subject: [neutron]Drivers meeting 06.11.2020 - cancelled Message-ID: <3846730.JagzSZ0HJ7@p1> Hi, We don't have any new RFEs to discuss so lets cancel tomorrow's drivers meeting. There is one fresh RFE [1] but we discussed that durig the PTG so now lets wait for proposed spec with more details and we will get back to discussion about approval of it in the drivers meeting in next weeks. Have a great weekend. [1] https://bugs.launchpad.net/neutron/+bug/1900934 -- Slawek Kaplonski Principal Software Engineer Red Hat From rafaelweingartner at gmail.com Thu Nov 5 21:05:47 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Thu, 5 Nov 2020 18:05:47 -0300 Subject: [neutron]Drivers meeting 06.11.2020 - cancelled In-Reply-To: <3846730.JagzSZ0HJ7@p1> References: <3846730.JagzSZ0HJ7@p1> Message-ID: I had a topic for this meeting. I wanted to ask about the spec https://review.opendev.org/#/c/739549/. Do we need other meetings to discuss it? Or, is the spec and the discussions we had there enough? We wanted to see it going out in Wallaby release. 
On Thu, Nov 5, 2020 at 5:39 PM Slawek Kaplonski wrote: > Hi, > > We don't have any new RFEs to discuss so lets cancel tomorrow's drivers > meeting. > There is one fresh RFE [1] but we discussed that durig the PTG so now lets > wait for proposed spec with more details and we will get back to > discussion > about approval of it in the drivers meeting in next weeks. > Have a great weekend. > > [1] https://bugs.launchpad.net/neutron/+bug/1900934 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Nov 5 21:49:50 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 05 Nov 2020 21:49:50 +0000 Subject: Fwd: freezer In-Reply-To: References: Message-ID: <859b373263dd608c04fae208a89b038e1cb7efc9.camel@redhat.com> On Thu, 2020-11-05 at 12:42 +0330, Ali Ebrahimpour wrote: > hi i recently install the freezer api and i have problem with using it > if run this command: > freezer-agent --debug --action backup --nova-inst-id > a143d11d-fa27-4716-9795-e03b4e601df4 --storage local --container /tmp/ali > --backup-name test  --mode nova --engine nova --no-incremental true >  --log-file a.log --consistency-check > > and freezer api doesn't work current and error with: > Engine error: a bytes-like object is required, not 'str' >  Engine error: Forced stop that looks like a python 3 issue im not sure how active it still is but it looks like its an issue between text string and byte strings > > Adn attach the log > please help me ! > > thank you for attention From haleyb.dev at gmail.com Fri Nov 6 03:15:59 2020 From: haleyb.dev at gmail.com (Brian Haley) Date: Thu, 5 Nov 2020 22:15:59 -0500 Subject: [neutron]Drivers meeting 06.11.2020 - cancelled In-Reply-To: References: <3846730.JagzSZ0HJ7@p1> Message-ID: <4adbaaa5-f795-0537-f26e-5547bad5d978@gmail.com> On 11/5/20 4:05 PM, Rafael Weingärtner wrote: > I had a topic for this meeting. I wanted to ask about the spec > https://review.opendev.org/#/c/739549/. > Do we need other meetings to discuss it? Or, is the spec  and the > discussions we had there enough? We wanted to see it going out in > Wallaby release. From https://bugs.launchpad.net/neutron/+bug/1885921/comments/4 this RFE was approved. I see you just had to re-target it to Wallaby, which is fine, I think we can continue any further discussions in the reviews. -Brian > On Thu, Nov 5, 2020 at 5:39 PM Slawek Kaplonski > wrote: > > Hi, > > We don't have any new RFEs to discuss so lets cancel tomorrow's drivers > meeting. > There is one fresh RFE [1] but we discussed that durig the PTG so > now lets > wait for proposed spec with more details and we will get back to > discussion > about approval of it in the drivers meeting in next weeks. > Have a great weekend. > > [1] https://bugs.launchpad.net/neutron/+bug/1900934 > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > > > > -- > Rafael Weingärtner From walsh277072 at gmail.com Fri Nov 6 05:20:30 2020 From: walsh277072 at gmail.com (WALSH CHANG) Date: Fri, 6 Nov 2020 05:20:30 +0000 Subject: [cinder][ceilometer][heat][instances] Installation Problems Message-ID: To whom it may concern, I am new to openstack, I got several errors when I installed the service followed by the installation guide. My openstack version : stein Ubuntu 18.0.4 1. Cinder-api is in /usr/bin/cinder-api, but when I type service cinder-api status, it shows Unit cinder-api.service could not be found. 2. 
Ceilometer and Heat tab didn't show in the Dashboard. 3. I was trying to launch an instance, but I got the status error and I tried to delete, but the instances can not be deleted. And I used nova force-delete instance-id, the error message is ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-e6c0d966-c7ec-4f20-8e9f-c788018a8d81) I explored the openstack installation myself, just wondering if there is any online hands-on training or course that I can enroll in? Thanks. Kind regards, Walsh -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Fri Nov 6 06:47:35 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Fri, 6 Nov 2020 06:47:35 +0000 (UTC) Subject: [openstack][InteropWG] : Interop weekly Friday call In-Reply-To: <1757fc3ee30.eea7e45388786.4328160222881040161@ghanshyammann.com> References: <1295138775.3349683.1603717569398.ref@mail.yahoo.com> <1295138775.3349683.1603717569398@mail.yahoo.com> <1791751856.3420249.1603726712148@mail.yahoo.com> <352076537.476027.1604015407570@mail.yahoo.com> <1829403956.43078.1604102260422@mail.yahoo.com> <1757fc3ee30.eea7e45388786.4328160222881040161@ghanshyammann.com> Message-ID: <1676177545.2373771.1604645255017@mail.yahoo.com> Hi all,Thanks Ghanshyam we may no impact based on zuul v3 for testing using tempest. If we submit a new guideline for 2020.10.json where will I see the patch under https://devops.org/osf/interior or patch?osf/interop  Let's ho over the call tomorrow 10 AM PST.Please join and add your topics to https://etherpad.opendev.org/p/interop The meet pad link for call is on eitherpad. ThanksPrakash Sent from Yahoo Mail on Android On Sat, Oct 31, 2020 at 10:45 AM, Ghanshyam Mann wrote: ---- On Fri, 30 Oct 2020 18:57:40 -0500 prakash RAMCHANDRAN wrote ---- >        Hi all, > We have few major Tasks  to elevate Interop WG to enable OpenStack + Open Infrastructure Branding and Logo Programs: > 1. https://opendev.org/osf/interop/src/branch/master/2020.06.json (To patch and update to 2020.10.json)All core module owners of Integrated OpenStack please respond before next Friday  Nov 6 -Interop WG call on meetpad > Interop Working Group - Weekly Friday 10-11 AM or UTC 17-18  (Next on Nov 6 & 13th)Lnink: https://meetpad.opendev.org/Interop-WG-weekly-meeting > Checklist for PTLs (core - keystone, glance, nova, neutron, cinder & swift) and add-ons (heat & designate) > 1. Did the Victoria switch for Jenkins to Zuul V3 and  move to new Ubuntu Focal impact your  DevStack and Tempesting module  feature testing in any way  for Interop? It is moved from zuulv2 native jobs to zuulv3 native and Ubuntu Bionic to Ubuntu Focal which basically are only testing framework side updates not on any user facing interface updates so it will not impact any interop way. -gmann > 2. Pease check  https://opendev.org/osf/interop/src/branch/master/2020.06.json#L72We will drop stein and add wallaby, hence train release notes will be the base line for feature retention and compliance baseline  testing > Please verify what are the changes you may need for Victoria cycle for Logo program for your modules list all"required": [] > "advisory": [], > "deprecated": [], > "removed": [] > > 3. Reply by email the changes expected for us the review based on your Victoria release notes or add notes to https://releases.openstack.org/victoria/?_ga=2.174650977.277942436.1604012691-802386709.1603466376 > > >  4. 
Discussions on Manila - Need Volunteer 5. Discussions on Ironic - Need Volunteers 6. Discussion on future Kata GoLang  -  Kata (with tempest vs k8s test tools  or  https://onsi.github.io/ginkgo/ +  Airship, Starlingx, Zuul on infralab ?) - Need Volunteers > > ThanksPrakashFor Interop WG > ==================================================================================================================                                                            >                >                >                        Hi all, > We have two sessions one at  13 UTC ( 6 AM PDT) and 16 UTC (9 AM PDT) > Based on our interactions and collaborations - Manila team met and I could not attend their call but here is the link to their efforts: > https://etherpad.opendev.org/p/wallaby-ptg-manila-interop >  We have 4 items to cover and seeking Volunteers for possible Manila & kubernetes Kata + RUNC based Container compliance (GoLang Must) > Also we are planning to introduce GoLang based Refstak/Tempest if there are volunteers ready to work with QA team (Gmann). > Thus we can finish in 1 session at 13 UTC and plan to  cancel the second session at 16 UTC.====================================================================================================Your comments > > > > > ======================================================================================================See you for Interop and add any comments above > ThanksPrakashFor Interop WG >                                                                                On Monday, October 26, 2020, 08:38:32 AM PDT, prakash RAMCHANDRAN wrote:                                >                >                        Hi all > My apology on the delayed start and here is the items we went over on first session. > Refer : https://etherpad.opendev.org/p/interop-wallaby-ptg (Monday 10/26 recorded the meeting for everyone to see, its cloud recording and will know from PTG hosts where they have that?) > Summary:1. Plan to Complete Victoria guidelines specs  for interop in 2-6 weeks depending on adding new  [OpenStack Powered File System Branding as Add-on Program] >    - last Ussuri  guidelines are at https://opendev.org/osf/interop/src/branch/master/2020.06.json > > > > New one will be 2020.10.json to be added >    - For existing integrated modules  [ Core OpenStack services including identity, compute, networking, block storage, and object storage ]    - for two existing add-ons [Orchestration (heat) , DNS (designate) "Not a core capability, add-on program is available"    - This is for Manila Filesystem add-on OpenDev Etherpad "Not a core capability, add-on program to be made available" >  OpenDev Etherpad > >  >  >  > > > > > We have one more opportunity  on 10/30 slot at the end of PTG if BMaaS (Ironic) if Julia has any proposals, besides  (Open Infra Kata + Docker) Container has additional proposal community is working on.    Look forward to seeing you all on October 30 as what the Road-map will look like for 2020 and 2021 going forward. 
> ThanksPrakash > >                                                                                On Monday, October 26, 2020, 06:09:00 AM PDT, Goutham Pacha Ravi wrote:                                >                >                Hi Prakash, > > We're here: https://zoom.us/j/92649902134?pwd=a01aMXl6ZlNEZDlsMjJMTGNMVUp1UT09 > > On Mon, Oct 26, 2020 at 6:06 AM prakash RAMCHANDRAN wrote: > https://meetpad.opendev.org/Interop-WG-weekly-meeting >                                                            -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Nov 6 08:29:56 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Fri, 06 Nov 2020 09:29:56 +0100 Subject: [nova] Wallaby spec review day Message-ID: Hi, Wallaby Milestone 1 is four weeks from now. As Sylvain raised on the weekly meeting it is a good time to have a spec review day somewhere before M1. So I'm proposing 17th of November as spec review day. Let me know if it what you think. Cheers, gibi From Cyrille.CARTIER at antemeta.fr Fri Nov 6 08:38:23 2020 From: Cyrille.CARTIER at antemeta.fr (Cyrille CARTIER) Date: Fri, 6 Nov 2020 08:38:23 +0000 Subject: Fwd: freezer In-Reply-To: <859b373263dd608c04fae208a89b038e1cb7efc9.camel@redhat.com> References: <859b373263dd608c04fae208a89b038e1cb7efc9.camel@redhat.com> Message-ID: <6cb413b63f30445b9e776f0c2e0ab153@GUYMAIL02.antemeta.net> Hi Ali, Which openstack version do you use ? I have a similar problem on my Train plateform (deployed by kolla-ansible), using freezer-scheduler. There are many commit for py3 correction for Ussuri and Victoria branch, in Git repository [1]. Did you try these version? I have already post the problem on StoryBoard [2] and Launchpad [3]. -----Message d'origine----- Objet : Re: Fwd: freezer On Thu, 2020-11-05 at 12:42 +0330, Ali Ebrahimpour wrote: > hi i recently install the freezer api and i have problem with using it > if run this command: > freezer-agent --debug --action backup --nova-inst-id > a143d11d-fa27-4716-9795-e03b4e601df4 --storage local --container > /tmp/ali --backup-name test  --mode nova --engine nova > --no-incremental true >  --log-file a.log --consistency-check > > and freezer api doesn't work current and error with: > Engine error: a bytes-like object is required, not 'str' >  Engine error: Forced stop that looks like a python 3 issue im not sure how active it still is but it looks like its an issue between text string and byte strings > > Adn attach the log > please help me ! > > thank you for attention [1]: https://opendev.org/openstack/freezer [2]: https://storyboard.openstack.org/#!/project/openstack/freezer [3]: https://bugs.launchpad.net/freezer/+bug/1901179 Cyrille From rafaelweingartner at gmail.com Fri Nov 6 09:56:35 2020 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Fri, 6 Nov 2020 06:56:35 -0300 Subject: [neutron]Drivers meeting 06.11.2020 - cancelled In-Reply-To: <4adbaaa5-f795-0537-f26e-5547bad5d978@gmail.com> References: <3846730.JagzSZ0HJ7@p1> <4adbaaa5-f795-0537-f26e-5547bad5d978@gmail.com> Message-ID: Great, thanks! Em sex, 6 de nov de 2020 00:16, Brian Haley escreveu: > On 11/5/20 4:05 PM, Rafael Weingärtner wrote: > > I had a topic for this meeting. I wanted to ask about the spec > > https://review.opendev.org/#/c/739549/. > > Do we need other meetings to discuss it? Or, is the spec and the > > discussions we had there enough? We wanted to see it going out in > > Wallaby release. 
> > From https://bugs.launchpad.net/neutron/+bug/1885921/comments/4 this > RFE was approved. I see you just had to re-target it to Wallaby, which > is fine, I think we can continue any further discussions in the reviews. > > -Brian > > > On Thu, Nov 5, 2020 at 5:39 PM Slawek Kaplonski > > wrote: > > > > Hi, > > > > We don't have any new RFEs to discuss so lets cancel tomorrow's > drivers > > meeting. > > There is one fresh RFE [1] but we discussed that durig the PTG so > > now lets > > wait for proposed spec with more details and we will get back to > > discussion > > about approval of it in the drivers meeting in next weeks. > > Have a great weekend. > > > > [1] https://bugs.launchpad.net/neutron/+bug/1900934 > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > > > > > > > > > > -- > > Rafael Weingärtner > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Nov 6 10:02:32 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 6 Nov 2020 11:02:32 +0100 Subject: [oslo][release] Oslo's Transition To Independent In-Reply-To: <4a9dc3ba-ef77-fdfd-2729-b7cb6357e3ed@nemebean.com> References: <4a9dc3ba-ef77-fdfd-2729-b7cb6357e3ed@nemebean.com> Message-ID: First, thanks for your answer. Le mer. 4 nov. 2020 à 15:50, Ben Nemec a écrit : > > > On 11/4/20 7:31 AM, Herve Beraud wrote: > > Greetings, > > > > Is it time for us to move some parts of Oslo to the independent release > > model [1]? > > > > I think we could consider Oslo world is mostly stable enough to answer > > yes to the previous question. > > > > However, the goal of this email is to trigger the debat and see which > > deliverables could be transitioned to the independent model. > > > > Do we need to expect that major changes will happen within Oslo and who > > could warrant to continue to follow cycle-with-intermediary model [2]? > > I would hesitate to try to predict the future on what will see major > changes and what won't. I would prefer to look at this more from the > perspective of what Oslo libraries are tied to the OpenStack version. > For example, I don't think oslo.messaging should be moved to > independent. It's important that RPC has a sane version to version > upgrade path, and the easiest way to ensure that is to keep it on the > regular cycle release schedule. The same goes for other libraries too: > oslo.versionedobjects, oslo.policy, oslo.service, oslo.db, possibly also > things like oslo.config and oslo.context (I suspect contexts need to be > release-specific, but maybe someone from a consuming project can weigh > in). Even oslo.serialization could have upgrade impacts if it is being > used to serialize internal data in a service. > Agreed, the goal here isn't to try to move everything to the independent model but more to identify which projects could be eligible for this switch. I strongly agree that the previous list of projects that you quote should stay binded to openstack cycles and should continue to rely on stable branches. These kinds of projects and also openstack's services are strongly tied to backends, their version, and available APIs and so to openstack's series, so they must remain linked to them. > That said, many of the others can probably be moved. oslo.i18n and > oslo.upgradecheck are both pretty stable at this point and not really > tied to a specific release. As long as we're responsible with any future > changes to them it should be pretty safe to make them independent. > Agreed. 
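For reference, the mechanical side of such a switch is small: in openstack/releases an
independent deliverable is simply a file under deliverables/_independent/ (the directory
that already hosts ansible-role-redhat-subscription, for example) with the model declared
in it. A trimmed, illustrative sketch of what that could look like for one of these
libraries; the field values here are from memory rather than an actual patch, so check an
existing _independent file before copying anything:

# deliverables/_independent/oslo.i18n.yaml (illustrative only)
launchpad: oslo.i18n
team: oslo
type: library
release-model: independent
releases:
  - version: 5.0.1
    projects:
      - repo: openstack/oslo.i18n
        hash: <sha of the commit to tag>
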
> This does raise a question though: Are the benefits of going independent > with the release model sufficient to justify splitting the release > models of the Oslo projects? I assume the motivation is to avoid having > to do as many backports of bug fixes, but if we're mostly doing this > with low-volume, stable projects does it gain us that much? > Yes, you're right, it could help us to reduce our needed maintenance and so our Oslo's activity in general. Indeed, about 35 projects are hosted by Oslo and concerning the active maintainers the trend isn't on the rise. So reducing the number of stable branches to maintain could benefit us, and it could be done by moving projects to an independent model. > > I guess I don't have a strong opinion one way or another on this yet, > and would defer to our release liaisons if they want to go one way or > other other. Hopefully this provides some things to think about though. > Yes you provided interesting observations, thanks. It could be interesting to get feedback from other cores. > -Ben > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 10:33:16 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 12:33:16 +0200 Subject: [magnum] heat-container-agent:victoria-dev Message-ID: Hi, 20 days ago the image was updated and now all clusters fail to deploy because it cannot find /usr/bin/kubectl, i think. /usr/bin/kubectl apply -f /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system Some services are installed using full path instead of simply kubectl. Should the image be fixed or reverted the changes to old revision or we should just fix the scripts? -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 10:33:37 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 12:33:37 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: I wanted to say, 20 hours ago instead of days. On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: > Hi, > > 20 days ago the image was updated and now all clusters fail to deploy > because it cannot find > /usr/bin/kubectl, i think. > > /usr/bin/kubectl apply -f > /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system > > Some services are installed using full path instead of simply kubectl. > > Should the image be fixed or reverted the changes to old revision or we > should just fix the scripts? 
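If reverting is the quickest way out, one workaround is to stop following the victoria-dev
tag and pin the agent image per cluster template through a Magnum label. A sketch of the
idea: the label name (heat_container_agent_tag) is recalled from the Magnum label
documentation, and every other value below is a placeholder, so adjust it to your
deployment and verify the label against your Magnum release:

# Create a template that pins an older, known-good heat-container-agent tag.
# Flavor/image/network names are placeholders; ussuri-stable-1 is only an
# example of an older tag, use whichever one is known to work for you.
openstack coe cluster template create k8s-pinned-agent \
  --coe kubernetes \
  --image fedora-coreos-32 \
  --external-network public \
  --flavor m1.medium \
  --labels heat_container_agent_tag=ussuri-stable-1
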
> > -- > Ionut Biru - https://fleio.com > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Fri Nov 6 10:43:15 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Fri, 6 Nov 2020 11:43:15 +0100 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Am I missing something? It is there. Spyros ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version --client --short Client Version: v1.18.2 ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 version --client --short Client Version: v1.18.2 On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: > I wanted to say, 20 hours ago instead of days. > > On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: > >> Hi, >> >> 20 days ago the image was updated and now all clusters fail to deploy >> because it cannot find >> /usr/bin/kubectl, i think. >> >> /usr/bin/kubectl apply -f >> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >> >> Some services are installed using full path instead of simply kubectl. >> >> Should the image be fixed or reverted the changes to old revision or we >> should just fix the scripts? >> >> -- >> Ionut Biru - https://fleio.com >> > > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 11:01:31 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 13:01:31 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, Not sure if because kubectl is not found but here are the logs, I cannot figure out why it is not working with the new heat image. journalctl https://paste.xinu.at/jsA/ heat-config master https://paste.xinu.at/pEz/ heat-config master cluster config https://paste.xinu.at/K85ZY5/ [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get pods --all-namespaces No resources found [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get all --all-namespaces NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE default service/kubernetes ClusterIP 10.254.0.1 443/TCP 36m It fails to deploy and I don't have any services configured. On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis wrote: > Am I missing something? It is there. > > Spyros > > ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint > /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version > --client --short > Client Version: v1.18.2 > ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint > /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 > version --client --short > Client Version: v1.18.2 > > On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: > >> I wanted to say, 20 hours ago instead of days. >> >> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >> >>> Hi, >>> >>> 20 days ago the image was updated and now all clusters fail to deploy >>> because it cannot find >>> /usr/bin/kubectl, i think. >>> >>> /usr/bin/kubectl apply -f >>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>> >>> Some services are installed using full path instead of simply kubectl. 
>>> >>> Should the image be fixed or reverted the changes to old revision or we >>> should just fix the scripts? >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> >> >> -- >> Ionut Biru - https://fleio.com >> > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Fri Nov 6 11:23:28 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Fri, 6 Nov 2020 12:23:28 +0100 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: I see, this says kubectl misconfigured not missing. error: Missing or incomplete configuration info. Please point to an existing, complete config file: I guess you miss some patches: https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 Try using an older image of the agent or take the patch above. Spyros On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: > Hi, > > Not sure if because kubectl is not found but here are the logs, I cannot > figure out why it is not working with the new heat image. > > journalctl https://paste.xinu.at/jsA/ > heat-config master https://paste.xinu.at/pEz/ > heat-config master cluster config https://paste.xinu.at/K85ZY5/ > > [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get > pods --all-namespaces > No resources found > [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get > all --all-namespaces > NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP > PORT(S) AGE > default service/kubernetes ClusterIP 10.254.0.1 > 443/TCP 36m > > It fails to deploy and I don't have any services configured. > > On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis > wrote: > >> Am I missing something? It is there. >> >> Spyros >> >> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >> --client --short >> Client Version: v1.18.2 >> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >> version --client --short >> Client Version: v1.18.2 >> >> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >> >>> I wanted to say, 20 hours ago instead of days. >>> >>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>> >>>> Hi, >>>> >>>> 20 days ago the image was updated and now all clusters fail to deploy >>>> because it cannot find >>>> /usr/bin/kubectl, i think. >>>> >>>> /usr/bin/kubectl apply -f >>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>> >>>> Some services are installed using full path instead of simply kubectl. >>>> >>>> Should the image be fixed or reverted the changes to old revision or we >>>> should just fix the scripts? >>>> >>>> -- >>>> Ionut Biru - https://fleio.com >>>> >>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 11:34:44 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 13:34:44 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, I'm using stable/victoria. It works fine with an older image. On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis wrote: > I see, this says kubectl misconfigured not missing. > error: Missing or incomplete configuration info. 
Please point to an > existing, complete config file: > > I guess you miss some patches: > https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 > > Try using an older image of the agent or take the patch above. > > Spyros > > On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: > >> Hi, >> >> Not sure if because kubectl is not found but here are the logs, I cannot >> figure out why it is not working with the new heat image. >> >> journalctl https://paste.xinu.at/jsA/ >> heat-config master https://paste.xinu.at/pEz/ >> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >> >> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get >> pods --all-namespaces >> No resources found >> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get >> all --all-namespaces >> NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP >> PORT(S) AGE >> default service/kubernetes ClusterIP 10.254.0.1 >> 443/TCP 36m >> >> It fails to deploy and I don't have any services configured. >> >> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >> wrote: >> >>> Am I missing something? It is there. >>> >>> Spyros >>> >>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>> --client --short >>> Client Version: v1.18.2 >>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>> version --client --short >>> Client Version: v1.18.2 >>> >>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>> >>>> I wanted to say, 20 hours ago instead of days. >>>> >>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>>> >>>>> Hi, >>>>> >>>>> 20 days ago the image was updated and now all clusters fail to deploy >>>>> because it cannot find >>>>> /usr/bin/kubectl, i think. >>>>> >>>>> /usr/bin/kubectl apply -f >>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>> >>>>> Some services are installed using full path instead of simply kubectl. >>>>> >>>>> Should the image be fixed or reverted the changes to old revision or >>>>> we should just fix the scripts? >>>>> >>>>> -- >>>>> Ionut Biru - https://fleio.com >>>>> >>>> >>>> >>>> -- >>>> Ionut Biru - https://fleio.com >>>> >>> >> >> -- >> Ionut Biru - https://fleio.com >> > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 12:02:55 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 14:02:55 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, I wonder if it is because the image is from fedora:rawhide and something is too new in there and breaks the flow? https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: > Hi, > > I'm using stable/victoria. It works fine with an older image. > > On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis wrote: > >> I see, this says kubectl misconfigured not missing. >> error: Missing or incomplete configuration info. Please point to an >> existing, complete config file: >> >> I guess you miss some patches: >> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >> >> Try using an older image of the agent or take the patch above. 
>> >> Spyros >> >> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >> >>> Hi, >>> >>> Not sure if because kubectl is not found but here are the logs, I cannot >>> figure out why it is not working with the new heat image. >>> >>> journalctl https://paste.xinu.at/jsA/ >>> heat-config master https://paste.xinu.at/pEz/ >>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>> >>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get >>> pods --all-namespaces >>> No resources found >>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl get >>> all --all-namespaces >>> NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP >>> PORT(S) AGE >>> default service/kubernetes ClusterIP 10.254.0.1 >>> 443/TCP 36m >>> >>> It fails to deploy and I don't have any services configured. >>> >>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>> wrote: >>> >>>> Am I missing something? It is there. >>>> >>>> Spyros >>>> >>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>> --client --short >>>> Client Version: v1.18.2 >>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>> version --client --short >>>> Client Version: v1.18.2 >>>> >>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>>> >>>>> I wanted to say, 20 hours ago instead of days. >>>>> >>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> 20 days ago the image was updated and now all clusters fail to deploy >>>>>> because it cannot find >>>>>> /usr/bin/kubectl, i think. >>>>>> >>>>>> /usr/bin/kubectl apply -f >>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>> >>>>>> Some services are installed using full path instead of simply kubectl. >>>>>> >>>>>> Should the image be fixed or reverted the changes to old revision or >>>>>> we should just fix the scripts? >>>>>> >>>>>> -- >>>>>> Ionut Biru - https://fleio.com >>>>>> >>>>> >>>>> >>>>> -- >>>>> Ionut Biru - https://fleio.com >>>>> >>>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> > > -- > Ionut Biru - https://fleio.com > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From strigazi at gmail.com Fri Nov 6 12:36:36 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Fri, 6 Nov 2020 13:36:36 +0100 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Check if you have this patch deployed: https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 On Fri, Nov 6, 2020 at 1:03 PM Ionut Biru wrote: > Hi, > > I wonder if it is because the image is from fedora:rawhide and something > is too new in there and breaks the flow? > > https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 > > > On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: > >> Hi, >> >> I'm using stable/victoria. It works fine with an older image. >> >> On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis >> wrote: >> >>> I see, this says kubectl misconfigured not missing. >>> error: Missing or incomplete configuration info. 
Please point to an >>> existing, complete config file: >>> >>> I guess you miss some patches: >>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>> >>> Try using an older image of the agent or take the patch above. >>> >>> Spyros >>> >>> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >>> >>>> Hi, >>>> >>>> Not sure if because kubectl is not found but here are the logs, I >>>> cannot figure out why it is not working with the new heat image. >>>> >>>> journalctl https://paste.xinu.at/jsA/ >>>> heat-config master https://paste.xinu.at/pEz/ >>>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>>> >>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>> get pods --all-namespaces >>>> No resources found >>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>> get all --all-namespaces >>>> NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP >>>> PORT(S) AGE >>>> default service/kubernetes ClusterIP 10.254.0.1 >>>> 443/TCP 36m >>>> >>>> It fails to deploy and I don't have any services configured. >>>> >>>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>>> wrote: >>>> >>>>> Am I missing something? It is there. >>>>> >>>>> Spyros >>>>> >>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>>> --client --short >>>>> Client Version: v1.18.2 >>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>>> version --client --short >>>>> Client Version: v1.18.2 >>>>> >>>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>>>> >>>>>> I wanted to say, 20 hours ago instead of days. >>>>>> >>>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> 20 days ago the image was updated and now all clusters fail to >>>>>>> deploy because it cannot find >>>>>>> /usr/bin/kubectl, i think. >>>>>>> >>>>>>> /usr/bin/kubectl apply -f >>>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>>> >>>>>>> Some services are installed using full path instead of simply >>>>>>> kubectl. >>>>>>> >>>>>>> Should the image be fixed or reverted the changes to old revision or >>>>>>> we should just fix the scripts? >>>>>>> >>>>>>> -- >>>>>>> Ionut Biru - https://fleio.com >>>>>>> >>>>>> >>>>>> >>>>>> -- >>>>>> Ionut Biru - https://fleio.com >>>>>> >>>>> >>>> >>>> -- >>>> Ionut Biru - https://fleio.com >>>> >>> >> >> -- >> Ionut Biru - https://fleio.com >> > > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ionut at fleio.com Fri Nov 6 12:40:10 2020 From: ionut at fleio.com (Ionut Biru) Date: Fri, 6 Nov 2020 14:40:10 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, yes, it's deployed: https://paste.xinu.at/pEz/#n855 On Fri, Nov 6, 2020 at 2:36 PM Spyros Trigazis wrote: > Check if you have this patch deployed: > https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 > > On Fri, Nov 6, 2020 at 1:03 PM Ionut Biru wrote: > >> Hi, >> >> I wonder if it is because the image is from fedora:rawhide and something >> is too new in there and breaks the flow? 
>> >> https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 >> >> >> On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: >> >>> Hi, >>> >>> I'm using stable/victoria. It works fine with an older image. >>> >>> On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis >>> wrote: >>> >>>> I see, this says kubectl misconfigured not missing. >>>> error: Missing or incomplete configuration info. Please point to an >>>> existing, complete config file: >>>> >>>> I guess you miss some patches: >>>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>>> >>>> Try using an older image of the agent or take the patch above. >>>> >>>> Spyros >>>> >>>> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >>>> >>>>> Hi, >>>>> >>>>> Not sure if because kubectl is not found but here are the logs, I >>>>> cannot figure out why it is not working with the new heat image. >>>>> >>>>> journalctl https://paste.xinu.at/jsA/ >>>>> heat-config master https://paste.xinu.at/pEz/ >>>>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>>>> >>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>> get pods --all-namespaces >>>>> No resources found >>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>> get all --all-namespaces >>>>> NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP >>>>> PORT(S) AGE >>>>> default service/kubernetes ClusterIP 10.254.0.1 >>>>> 443/TCP 36m >>>>> >>>>> It fails to deploy and I don't have any services configured. >>>>> >>>>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>>>> wrote: >>>>> >>>>>> Am I missing something? It is there. >>>>>> >>>>>> Spyros >>>>>> >>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>>>> --client --short >>>>>> Client Version: v1.18.2 >>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>>>> version --client --short >>>>>> Client Version: v1.18.2 >>>>>> >>>>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>>>>> >>>>>>> I wanted to say, 20 hours ago instead of days. >>>>>>> >>>>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> 20 days ago the image was updated and now all clusters fail to >>>>>>>> deploy because it cannot find >>>>>>>> /usr/bin/kubectl, i think. >>>>>>>> >>>>>>>> /usr/bin/kubectl apply -f >>>>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>>>> >>>>>>>> Some services are installed using full path instead of simply >>>>>>>> kubectl. >>>>>>>> >>>>>>>> Should the image be fixed or reverted the changes to old revision >>>>>>>> or we should just fix the scripts? >>>>>>>> >>>>>>>> -- >>>>>>>> Ionut Biru - https://fleio.com >>>>>>>> >>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Ionut Biru - https://fleio.com >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> Ionut Biru - https://fleio.com >>>>> >>>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> >> >> -- >> Ionut Biru - https://fleio.com >> > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From thierry at openstack.org Fri Nov 6 12:47:11 2020 From: thierry at openstack.org (Thierry Carrez) Date: Fri, 6 Nov 2020 13:47:11 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Message-ID: The weird thing is that the PyPI error message says: "The description failed to render in the default format of reStructuredText" while we specify: long_description_content_type='text/markdown' in setup.py. It looks like PyPI is ignoring our indication in setup.py, and therefore (accurately) reporting failure to render in RST. Herve Beraud wrote: > I confirm the markdown issue was fixed 1 year ago [1] by myself and > since 3 new versions of ansible-role-redhat-subscription have been > released [2]. > > The markdown fix is embedded in the three releases: > $ git tag --contains fceb51c66e > 1.0.4 > 1.1.0 > 1.1.1 > > 2 releases was successfully uploaded to pypi: > - https://pypi.org/project/ansible-role-redhat-subscription/1.1.0/ > - https://pypi.org/project/ansible-role-redhat-subscription/1.0.4/ > > I saw a couple of changes in your README and they seem to mix markdown > and restructuredText [3], maybe it could explain this issue but these > changes are also in previous versions: > > $ git tag --contains 0949f34ffb > 1.0.3 > 1.0.4 > 1.1.0 > 1.1.1 > > Maybe pypa introduced more strict checks on their side since our last > release... > > I tried to generate a PKG-INFO locally and everything seems ok. > > Also I didn't see any new bugs that seem related to this issue on the > pypa side [4]. > > [1] > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > [2] > https://opendev.org/openstack/releases/commits/branch/master/deliverables/_independent/ansible-role-redhat-subscription.yaml > [3] > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/0949f34ffb10e51787e824b00f0b5ae69592cdda > [4] > https://github.com/search?q=org%3Apypa+The+description+failed+to+render+in+the+default+format+of+reStructuredText&type=issues > > > > Le jeu. 5 nov. 2020 à 14:26, Thierry Carrez > a écrit : > > Marios Andreou wrote: > > thanks for the update on that I would have missed it if you > didn't send > > this mail. > > This is actually in reaction to a release job failure email that I > received, so that we discuss the solution on the list. There was no > prior email on the topic. > > > I am a bit confused though as you mentioned 1.1.0 here do you > > mean this is something to do with the > ansible-role-redhat-subscription > > repo itself? > > The failure is linked to the content of the repository: basically, PyPI > is rejecting how the README is specified. So it will likely require a > patch to ansible-role-redhat-subscription itself, in which case we'd > push a new tag (1.1.2). > > That said, it's unclear what the problem is, as the same problem was > fixed[1] a year ago already and nothing really changed since the > (successful) 1.1.0 publication. 
We'll explore more and let you know if > there is anything you can help with :) > > [1] > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > -- > Thierry Carrez (ttx) > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > > Le jeu. 5 nov. 2020 à 16:21, Marios Andreou > a écrit : > > > > On Thu, Nov 5, 2020 at 3:24 PM Thierry Carrez > wrote: > > Marios Andreou wrote: > > thanks for the update on that I would have missed it if you > didn't send > > this mail. > > This is actually in reaction to a release job failure email that I > received, so that we discuss the solution on the list. There was no > prior email on the topic. > > > I am a bit confused though as you mentioned 1.1.0 here do you > > mean this is something to do with the > ansible-role-redhat-subscription > > repo itself? > > The failure is linked to the content of the repository: > basically, PyPI > is rejecting how the README is specified. So it will likely > require a > patch to ansible-role-redhat-subscription itself, in which case > we'd > push a new tag (1.1.2). > > That said, it's unclear what the problem is, as the same problem > was > fixed[1] a year ago already and nothing really changed since the > (successful) 1.1.0 publication. We'll explore more and let you > know if > there is anything you can help with :) > > > ack! Thank you much clearer for me now. 
If there is an update > required then no problem we can address that and update with a newer tag > > > [1] > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > -- > Thierry Carrez (ttx) > > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > -- Thierry Carrez (ttx) From skaplons at redhat.com Fri Nov 6 13:50:41 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 06 Nov 2020 14:50:41 +0100 Subject: [neutron][stable]Proposing Rodolfo Alonso Hernandez as a stable branches core reviewer In-Reply-To: <1766330.tdWV9SEqCh@p1> References: <1766330.tdWV9SEqCh@p1> Message-ID: <4792298.JM3M1uNXaj@p1> Hi, There is a week since I sent this nomination and there was no any negative feedback. So I asked today and Thierry added Rodolfo to the neutron-stable-maint group. Welcome in the Neutron stable core team Rodolfo :) Dnia czwartek, 29 października 2020 12:33:04 CET Slawek Kaplonski pisze: > Hi, > > I would like to propose Rodolfo Alonso Hernandez (ralonsoh) to be new member > of the Neutron stable core team. > Rodolfo works in Neutron since very long time. He is core reviewer in master > branch already. > During last few cycles he also proved his ability to help with stable > branches. He is proposing many backports from master to our stable branches > as well as doing reviews of other backports. > He has knowledge about stable branches policies. > I think that he will be great addition to our (not too big) stable core > team. > > I will open this nomination open for a week to get feedback about it. > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- Slawek Kaplonski Principal Software Engineer Red Hat From hberaud at redhat.com Fri Nov 6 14:50:27 2020 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 6 Nov 2020 15:50:27 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Message-ID: I reported a related bug on pypa: https://github.com/pypa/warehouse/issues/8791 Le ven. 6 nov. 2020 à 13:49, Thierry Carrez a écrit : > The weird thing is that the PyPI error message says: > > "The description failed to render in the default format of > reStructuredText" > > while we specify: > > long_description_content_type='text/markdown' > > in setup.py. It looks like PyPI is ignoring our indication in setup.py, > and therefore (accurately) reporting failure to render in RST. 
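One way to see locally what PyPI will do with the description, without burning another
tag, is to build the artifacts and run twine's check, which renders the long_description
through the same readme_renderer code path the upload uses (generic packaging commands,
nothing specific to this role):

# Build sdist/wheel and validate that the long_description renders with the
# declared content type, the same check PyPI applies on upload.
python3 -m pip install --user --upgrade setuptools wheel twine
python3 setup.py sdist bdist_wheel
python3 -m twine check dist/*

If that passes locally but the upload still fails, it may be worth confirming the
toolchain used by the release job: the Description-Content-Type field only ends up in
PKG-INFO with reasonably recent setuptools/wheel, and when it is missing PyPI falls back
to treating the description as reStructuredText.
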
> > Herve Beraud wrote: > > I confirm the markdown issue was fixed 1 year ago [1] by myself and > > since 3 new versions of ansible-role-redhat-subscription have been > > released [2]. > > > > The markdown fix is embedded in the three releases: > > $ git tag --contains fceb51c66e > > 1.0.4 > > 1.1.0 > > 1.1.1 > > > > 2 releases was successfully uploaded to pypi: > > - https://pypi.org/project/ansible-role-redhat-subscription/1.1.0/ > > - https://pypi.org/project/ansible-role-redhat-subscription/1.0.4/ > > > > I saw a couple of changes in your README and they seem to mix markdown > > and restructuredText [3], maybe it could explain this issue but these > > changes are also in previous versions: > > > > $ git tag --contains 0949f34ffb > > 1.0.3 > > 1.0.4 > > 1.1.0 > > 1.1.1 > > > > Maybe pypa introduced more strict checks on their side since our last > > release... > > > > I tried to generate a PKG-INFO locally and everything seems ok. > > > > Also I didn't see any new bugs that seem related to this issue on the > > pypa side [4]. > > > > [1] > > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > [2] > > > https://opendev.org/openstack/releases/commits/branch/master/deliverables/_independent/ansible-role-redhat-subscription.yaml > > [3] > > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/0949f34ffb10e51787e824b00f0b5ae69592cdda > > [4] > > > https://github.com/search?q=org%3Apypa+The+description+failed+to+render+in+the+default+format+of+reStructuredText&type=issues > > > > > > > > Le jeu. 5 nov. 2020 à 14:26, Thierry Carrez > > a écrit : > > > > Marios Andreou wrote: > > > thanks for the update on that I would have missed it if you > > didn't send > > > this mail. > > > > This is actually in reaction to a release job failure email that I > > received, so that we discuss the solution on the list. There was no > > prior email on the topic. > > > > > I am a bit confused though as you mentioned 1.1.0 here do you > > > mean this is something to do with the > > ansible-role-redhat-subscription > > > repo itself? > > > > The failure is linked to the content of the repository: basically, > PyPI > > is rejecting how the README is specified. So it will likely require a > > patch to ansible-role-redhat-subscription itself, in which case we'd > > push a new tag (1.1.2). > > > > That said, it's unclear what the problem is, as the same problem was > > fixed[1] a year ago already and nothing really changed since the > > (successful) 1.1.0 publication. 
We'll explore more and let you know > if > > there is anything you can help with :) > > > > [1] > > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > > > -- > > Thierry Carrez (ttx) > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > Le jeu. 5 nov. 2020 à 16:21, Marios Andreou > > a écrit : > > > > > > > > On Thu, Nov 5, 2020 at 3:24 PM Thierry Carrez > > wrote: > > > > Marios Andreou wrote: > > > thanks for the update on that I would have missed it if you > > didn't send > > > this mail. > > > > This is actually in reaction to a release job failure email that > I > > received, so that we discuss the solution on the list. There was > no > > prior email on the topic. > > > > > I am a bit confused though as you mentioned 1.1.0 here do you > > > mean this is something to do with the > > ansible-role-redhat-subscription > > > repo itself? > > > > The failure is linked to the content of the repository: > > basically, PyPI > > is rejecting how the README is specified. So it will likely > > require a > > patch to ansible-role-redhat-subscription itself, in which case > > we'd > > push a new tag (1.1.2). > > > > That said, it's unclear what the problem is, as the same problem > > was > > fixed[1] a year ago already and nothing really changed since the > > (successful) 1.1.0 publication. We'll explore more and let you > > know if > > there is anything you can help with :) > > > > > > ack! Thank you much clearer for me now. 
If there is an update > > required then no problem we can address that and update with a newer > tag > > > > > > [1] > > > https://opendev.org/openstack/ansible-role-redhat-subscription/commit/fceb51c66eb343a7cd20891da8728c65ff703ce7 > > > > -- > > Thierry Carrez (ttx) > > > > > > > > -- > > Hervé Beraud > > Senior Software Engineer at Red Hat > > irc: hberaud > > https://github.com/4383/ > > https://twitter.com/4383hberaud > > -----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From Arkady.Kanevsky at dell.com Fri Nov 6 15:15:57 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Fri, 6 Nov 2020 15:15:57 +0000 Subject: [openstack][InteropWG] : Interop weekly Friday call In-Reply-To: <1676177545.2373771.1604645255017@mail.yahoo.com> References: <1295138775.3349683.1603717569398.ref@mail.yahoo.com> <1295138775.3349683.1603717569398@mail.yahoo.com> <1791751856.3420249.1603726712148@mail.yahoo.com> <352076537.476027.1604015407570@mail.yahoo.com> <1829403956.43078.1604102260422@mail.yahoo.com> <1757fc3ee30.eea7e45388786.4328160222881040161@ghanshyammann.com> <1676177545.2373771.1604645255017@mail.yahoo.com> Message-ID: Team, I will have to miss today’s call From: prakash RAMCHANDRAN Sent: Friday, November 6, 2020 12:48 AM To: Ghanshyam Mann Cc: openstack-discuss at lists.openstack.org; Voelker, Mark (VMware); Kanevsky, Arkady; Goutham Pacha Ravi Subject: Re: [openstack][InteropWG] : Interop weekly Friday call [EXTERNAL EMAIL] Hi all, Thanks Ghanshyam we may no impact based on zuul v3 for testing using tempest. If we submit a new guideline for 2020.10.json where will I see the patch under https://devops.org/osf/interior or patch? 
osf/interop Let's ho over the call tomorrow 10 AM PST. Please join and add your topics to https://etherpad.opendev.org/p/interop The meet pad link for call is on eitherpad. Thanks Prakash Sent from Yahoo Mail on Android On Sat, Oct 31, 2020 at 10:45 AM, Ghanshyam Mann > wrote: ---- On Fri, 30 Oct 2020 18:57:40 -0500 prakash RAMCHANDRAN > wrote ---- > Hi all, > We have few major Tasks to elevate Interop WG to enable OpenStack + Open Infrastructure Branding and Logo Programs: > 1. https://opendev.org/osf/interop/src/branch/master/2020.06.json (To patch and update to 2020.10.json)All core module owners of Integrated OpenStack please respond before next Friday Nov 6 -Interop WG call on meetpad > Interop Working Group - Weekly Friday 10-11 AM or UTC 17-18 (Next on Nov 6 & 13th)Lnink: https://meetpad.opendev.org/Interop-WG-weekly-meeting > Checklist for PTLs (core - keystone, glance, nova, neutron, cinder & swift) and add-ons (heat & designate) > 1. Did the Victoria switch for Jenkins to Zuul V3 and move to new Ubuntu Focal impact your DevStack and Tempesting module feature testing in any way for Interop? It is moved from zuulv2 native jobs to zuulv3 native and Ubuntu Bionic to Ubuntu Focal which basically are only testing framework side updates not on any user facing interface updates so it will not impact any interop way. -gmann > 2. Pease check https://opendev.org/osf/interop/src/branch/master/2020.06.json#L72We will drop stein and add wallaby, hence train release notes will be the base line for feature retention and compliance baseline testing > Please verify what are the changes you may need for Victoria cycle for Logo program for your modules list all"required": [] > "advisory": [], > "deprecated": [], > "removed": [] > > 3. Reply by email the changes expected for us the review based on your Victoria release notes or add notes to https://releases.openstack.org/victoria/?_ga=2.174650977.277942436.1604012691-802386709.1603466376 > > > 4. Discussions on Manila - Need Volunteer 5. Discussions on Ironic - Need Volunteers 6. Discussion on future Kata GoLang - Kata (with tempest vs k8s test tools or https://onsi.github.io/ginkgo/ + Airship, Starlingx, Zuul on infralab ?) - Need Volunteers > > ThanksPrakashFor Interop WG > ================================================================================================================== > > > Hi all, > We have two sessions one at 13 UTC ( 6 AM PDT) and 16 UTC (9 AM PDT) > Based on our interactions and collaborations - Manila team met and I could not attend their call but here is the link to their efforts: > https://etherpad.opendev.org/p/wallaby-ptg-manila-interop > We have 4 items to cover and seeking Volunteers for possible Manila & kubernetes Kata + RUNC based Container compliance (GoLang Must) > Also we are planning to introduce GoLang based Refstak/Tempest if there are volunteers ready to work with QA team (Gmann). > Thus we can finish in 1 session at 13 UTC and plan to cancel the second session at 16 UTC.====================================================================================================Your comments > > > > > ======================================================================================================See you for Interop and add any comments above > ThanksPrakashFor Interop WG > On Monday, October 26, 2020, 08:38:32 AM PDT, prakash RAMCHANDRAN > wrote: > > Hi all > My apology on the delayed start and here is the items we went over on first session. 
> Refer : https://etherpad.opendev.org/p/interop-wallaby-ptg (Monday 10/26 recorded the meeting for everyone to see, its cloud recording and will know from PTG hosts where they have that?) > Summary:1. Plan to Complete Victoria guidelines specs for interop in 2-6 weeks depending on adding new [OpenStack Powered File System Branding as Add-on Program] > - last Ussuri guidelines are at https://opendev.org/osf/interop/src/branch/master/2020.06.json > > > > New one will be 2020.10.json to be added > - For existing integrated modules [ Core OpenStack services including identity, compute, networking, block storage, and object storage ] - for two existing add-ons [Orchestration (heat) , DNS (designate) "Not a core capability, add-on program is available" - This is for Manila Filesystem add-on OpenDev Etherpad "Not a core capability, add-on program to be made available" > OpenDev Etherpad > > > > > > > > > We have one more opportunity on 10/30 slot at the end of PTG if BMaaS (Ironic) if Julia has any proposals, besides (Open Infra Kata + Docker) Container has additional proposal community is working on. Look forward to seeing you all on October 30 as what the Road-map will look like for 2020 and 2021 going forward. > ThanksPrakash > > On Monday, October 26, 2020, 06:09:00 AM PDT, Goutham Pacha Ravi > wrote: > > Hi Prakash, > > We're here: https://zoom.us/j/92649902134?pwd=a01aMXl6ZlNEZDlsMjJMTGNMVUp1UT09 > > On Mon, Oct 26, 2020 at 6:06 AM prakash RAMCHANDRAN > wrote: > https://meetpad.opendev.org/Interop-WG-weekly-meeting > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Fri Nov 6 15:37:31 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Fri, 6 Nov 2020 16:37:31 +0100 Subject: [neutron][stable]Proposing Rodolfo Alonso Hernandez as a stable branches core reviewer In-Reply-To: <4792298.JM3M1uNXaj@p1> References: <1766330.tdWV9SEqCh@p1> <4792298.JM3M1uNXaj@p1> Message-ID: Thank you all. It will be a pleasure to have the ability to break stables branches too! (I'm sorry in advance, Bernard) Regards. On Fri, Nov 6, 2020 at 2:50 PM Slawek Kaplonski wrote: > Hi, > > There is a week since I sent this nomination and there was no any negative > feedback. > So I asked today and Thierry added Rodolfo to the neutron-stable-maint > group. > Welcome in the Neutron stable core team Rodolfo :) > > Dnia czwartek, 29 października 2020 12:33:04 CET Slawek Kaplonski pisze: > > Hi, > > > > I would like to propose Rodolfo Alonso Hernandez (ralonsoh) to be new > member > > of the Neutron stable core team. > > Rodolfo works in Neutron since very long time. He is core reviewer in > master > > branch already. > > During last few cycles he also proved his ability to help with stable > > branches. He is proposing many backports from master to our stable > branches > > as well as doing reviews of other backports. > > He has knowledge about stable branches policies. > > I think that he will be great addition to our (not too big) stable core > > team. > > > > I will open this nomination open for a week to get feedback about it. > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From oliver.wenz at dhbw-mannheim.de Fri Nov 6 15:43:51 2020 From: oliver.wenz at dhbw-mannheim.de (Oliver Wenz) Date: Fri, 6 Nov 2020 16:43:51 +0100 Subject: [openstack-ansible][nova] os-nova-install.yml playbook fails after being unable to restart libvirtd.service on compute nodes In-Reply-To: References: Message-ID: <39c3de9f-7017-1e57-1153-5fbf8c6c99d5@dhbw-mannheim.de> To enable live migration on an ussuri (deployed using OpenStack Ansible) I set up a shared filesystem (ocfs2) on the compute hosts. Then I added the following to /etc/openstack_deploy/user_variables.yml and reran the os-nova-install.yml playbook: ``` nova_nova_conf_overrides: libvirt: iscsi_use_multipath: true nova_system_user_uid: 999 nova_system_group_gid: 999 nova_novncproxy_vncserver_listen: 0.0.0.0 # Disable libvirtd's TLS listener nova_libvirtd_listen_tls: 0 # Enable libvirtd's plaintext TCP listener nova_libvirtd_listen_tcp: 1 nova_libvirtd_auth_tcp: none ``` The playbook failed with the following message: ``` RUNNING HANDLER [os_nova : Restart libvirt-bin] *************************************************************************** fatal: [compute1]: FAILED! => {"changed": false, "msg": "Unable to restart service libvirtd: Job for libvirtd.service failed because the control process exited with error code.\nSee \"systemctl status libvirtd.service\" and \"journalctl -xe\" for details.\n"} ``` (similar for the other compute nodes) Running 'systemctl status libvirtd.service' revealed that the service failed to start (code=exited, status=6). Here's the 'journalctl -xe' output: ``` Nov 06 12:42:20 bc1bl12 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd.service: Service hold-off time over, scheduling restart. Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd.service: Scheduled restart job, restart counter is at 5. -- Subject: Automatic restarting of a unit has been scheduled -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- Automatic restarting of the unit libvirtd.service has been scheduled, as the result for -- the configured Restart= setting for the unit. Nov 06 12:42:20 bc1bl12 systemd[1]: Stopped Virtualization daemon. -- Subject: Unit libvirtd.service has finished shutting down -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- Unit libvirtd.service has finished shutting down. Nov 06 12:42:20 bc1bl12 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 06 12:42:20 bc1bl12 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 msg='unit=libvirtd comm="systemd" exe="/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd.service: Start request repeated too quickly. Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd.service: Failed with result 'exit-code'. Nov 06 12:42:20 bc1bl12 systemd[1]: Failed to start Virtualization daemon. -- Subject: Unit libvirtd.service has failed -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- Unit libvirtd.service has failed. -- -- The result is RESULT. Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd-admin.socket: Failed with result 'service-start-limit-hit'. Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd.socket: Failed with result 'service-start-limit-hit'. 
Nov 06 12:42:20 bc1bl12 systemd[1]: libvirtd-ro.socket: Failed with result 'service-start-limit-hit'. Nov 06 12:42:20 bc1bl12 systemd[1]: Closed Libvirt local read-only socket. -- Subject: Unit libvirtd-ro.socket has finished shutting down -- Defined-By: systemd -- Support: http://www.ubuntu.com/support -- -- Unit libvirtd-ro.socket has finished shutting down. ``` The /etc/libvirt/libvirtd.conf file written by OpenStack Ansible looks like this: ``` listen_tls = 0 listen_tcp = 1 unix_sock_group = "libvirt" unix_sock_ro_perms = "0777" unix_sock_rw_perms = "0770" auth_unix_ro = "none" auth_unix_rw = "none" auth_tcp = "none" ``` I've encountered mentions of /etc/default/libvirt-bin in documentation for older OpenStack versions and I'm unsure if something went wrong with the playbooks, because the file doesn't exist on my compute nodes. Any ideas that might help are highly appreciated! Kind regards, Oliver From skaplons at redhat.com Fri Nov 6 16:45:36 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 06 Nov 2020 17:45:36 +0100 Subject: [neutron]Drivers meeting 06.11.2020 - cancelled In-Reply-To: References: <3846730.JagzSZ0HJ7@p1> <4adbaaa5-f795-0537-f26e-5547bad5d978@gmail.com> Message-ID: <3671273.rczrtDRm1T@p1> Hi, If You have something what You want to discuss during drivers meeting, please go to https://wiki.openstack.org/wiki/Meetings/NeutronDrivers page before the meeting and add Your topic to the "On Demand" agenda. And please do that on Thursday, before I will start preparing meeting which I usually do around Thursday evening CEST time. If I don't see anything in the agenda I'm cancelling the meeting but if there would be something added there, then meeting would be run normally. Dnia piątek, 6 listopada 2020 10:56:35 CET Rafael Weingärtner pisze: > Great, thanks! > > Em sex, 6 de nov de 2020 00:16, Brian Haley escreveu: > > On 11/5/20 4:05 PM, Rafael Weingärtner wrote: > > > I had a topic for this meeting. I wanted to ask about the spec > > > https://review.opendev.org/#/c/739549/. > > > Do we need other meetings to discuss it? Or, is the spec and the > > > discussions we had there enough? We wanted to see it going out in > > > Wallaby release. > > > > From https://bugs.launchpad.net/neutron/+bug/1885921/comments/4 this > > > > RFE was approved. I see you just had to re-target it to Wallaby, which > > is fine, I think we can continue any further discussions in the reviews. > > > > -Brian > > > > > On Thu, Nov 5, 2020 at 5:39 PM Slawek Kaplonski > > > > > > wrote: > > > Hi, > > > > > > We don't have any new RFEs to discuss so lets cancel tomorrow's > > > > drivers > > > > > meeting. > > > There is one fresh RFE [1] but we discussed that durig the PTG so > > > now lets > > > wait for proposed spec with more details and we will get back to > > > discussion > > > about approval of it in the drivers meeting in next weeks. > > > Have a great weekend. 
> > > > > > [1] https://bugs.launchpad.net/neutron/+bug/1900934 > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > -- > > > Rafael Weingärtner -- Slawek Kaplonski Principal Software Engineer Red Hat From fungi at yuggoth.org Fri Nov 6 17:35:21 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Nov 2020 17:35:21 +0000 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> Message-ID: <20201106173521.3xmgzwoqs2s4hkep@yuggoth.org> On 2020-11-06 13:47:11 +0100 (+0100), Thierry Carrez wrote: > The weird thing is that the PyPI error message says: > > "The description failed to render in the default format of reStructuredText" > > while we specify: > > long_description_content_type='text/markdown' > > in setup.py. It looks like PyPI is ignoring our indication in setup.py, and > therefore (accurately) reporting failure to render in RST. [...] After nearly hitting bedrock, I think I've dug down far enough to figure this one out: Between the working and failing uploads, we switched our CI system to install the wheel module from distro packaging rather than from PyPI by default. Since the recent build was performed on an Ubuntu 18.04 LTS instance, it used the available python3-wheel 0.30.0-0.2 available there. As wheel did not introduce support for Python packaging metadata version 2.1 until its 0.31.0 release, the failing upload declared "Metadata-Version: 2.0" in the METADATA file. This seems to cause the "Description-Content-Type: text/markdown" line (first introduced in the metadata 2.1 specification) to be ignored and fall back to assuming the long description is in the reStructuredText default. I propose we switch release jobs to run on an ubuntu-focal nodeset, as most official OpenStack jobs did already during the Victoria cycle. This will result in using python3-wheel 0.34.2-1 which is plenty new enough to support the necessary metadata version. If for some reason running release jobs for older stable branches on a newer distro causes issues, we can try to make use of Zuul's new branch-guessing mechanism for Git tags to create job variants running stable point releases for older branches on the older nodeset. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From fungi at yuggoth.org Fri Nov 6 17:50:52 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 6 Nov 2020 17:50:52 +0000 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: <20201106173521.3xmgzwoqs2s4hkep@yuggoth.org> References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> <20201106173521.3xmgzwoqs2s4hkep@yuggoth.org> Message-ID: <20201106175052.7v3iwde5aw43xuy4@yuggoth.org> On 2020-11-06 17:35:21 +0000 (+0000), Jeremy Stanley wrote: [...] > I propose we switch release jobs to run on an ubuntu-focal nodeset, > as most official OpenStack jobs did already during the Victoria > cycle. [...] Now up for review: https://review.opendev.org/761776 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From steve.vezina at gmail.com Fri Nov 6 01:38:37 2020 From: steve.vezina at gmail.com (=?UTF-8?Q?Steve_V=C3=A9zina?=) Date: Thu, 5 Nov 2020 20:38:37 -0500 Subject: [neutron] Multi-AZ without L2 spanning between datacenter for external network Message-ID: Hi everyone, I am wondering if anyone got a multi-az deployment using OVS that work with seperate network fabrics which mean no L2 spanning between the AZs for external network? Thanks! Steve. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Fri Nov 6 23:45:23 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Fri, 6 Nov 2020 23:45:23 +0000 (UTC) Subject: [swift]: issues with multi region (swift as backend for glance) References: <46812675.2700917.1604706323822.ref@mail.yahoo.com> Message-ID: <46812675.2700917.1604706323822@mail.yahoo.com> Hi folks, We're on queens release.  We have just setup a 2nd region.  So, now we have two regions regionOne and regionTwo We had a storage cluster in regionOne. Everything works good.We also added another storage cluster in regionTwo and have created swift endpoints for this.Using swift API, everything works fine.  The container is properly created in regionOne or regionTwo asdesired. We are also using swift as the glance backend.  We are seeing an issue with this for regionTwo.When I create an image in regionTwo, it seems like the glance storage backend is not properly recognizingthe endpoint and wants to store the image in regionOne. This looks like a definite bug.I can work around it by overriding the endpoint using swift_store_endpoint but if there is a known bugabout this issue I would rather patch it than resort to overriding the URL endpoint returned from "auth". Is this a known bug ?  Again, we are on the latest Queens release. thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabriel.gamero at pucp.edu.pe Sun Nov 8 03:06:50 2020 From: gabriel.gamero at pucp.edu.pe (Gabriel Omar Gamero Montenegro) Date: Sat, 7 Nov 2020 22:06:50 -0500 Subject: [neutron] Use the same network interface with multiple ML2 mechanism drivers Message-ID: Dear all, I know that ML2 Neutron core plugin is designed to support multiple mechanism and type drivers simultaneously. But I'd like to know: is it possible to use the same network interface configured with different ML2 mechanism drivers? I'm planning to use openvswitch and linuxbridge as mechanism drivers along with VLAN as type driver. Could it be possible to have the following configuration for that purpose?: ml2_conf.ini: [ml2] mechanism_drivers = openvswitch,linuxbridge [ml2_type_vlan] network_vlan_ranges = physnet1:40:60,physnet2:60:80 eth3 is a port of the provider bridge: ovs-vsctl add-port br-provider eth3 openvswitch_agent.ini: [ovs] bridge_mappings = physnet1:br-provider linuxbridge_agent.ini: [linux_bridge] physical_interface_mappings = physnet2:eth3 If it's mandatory to use different network interfaces any guide or sample reference about implementing multiple mechanism drivers would be highly appreciated. Thanks in advance, Gabriel Gamero From satish.txt at gmail.com Sun Nov 8 14:31:23 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 8 Nov 2020 09:31:23 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd Message-ID: Folks, Recently i have added come compute nodes in cloud supporting openvswitch-dpdk for performance. 
I am seeing all my PMD cpu cores are 100% cpu usage on linux top command. It is normal behavior from first looks. It's very scary to see 400% cpu usage on top. Can someone confirm before I assume it's normal and what we can do to reduce it if it's too high? From florian at datalounges.com Sun Nov 8 15:00:23 2020 From: florian at datalounges.com (Florian Rommel) Date: Sun, 8 Nov 2020 17:00:23 +0200 Subject: [octavia] Timeouts during building of lb? But then successful Message-ID: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> Hi, so we have a fully functioning setup of octavia on ussuri and it works nicely, when it competes. So here is what happens: From octavia api to octavia worker takes 20 seconds for the job to be initiated. The loadbalancer gets built quickly and then we get a mysql went away error, the listener gets built and then a member , that works too, then the mysql error comes up with query took too long to execute. Now this is where it gets weird. This is all within the first 2 - 3 minutes. At this point it hangs and takes 10 minutes (600 seconds) for the next step to complete and then another 10 minutes and another 10 until it’s completed. It seems there is a timeout somewhere but even with debug on we do not see what is going on. Does anyone have a mysql 8 running and octavia executing fine? And could send me their redacted octavia or mysql conf files? We didn’t touch them but it seems that there is something off.. especially since it then completes and works extremely nicely. I would highly appreciate it , even off list. Best regards, //f From laurentfdumont at gmail.com Sun Nov 8 15:49:53 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 8 Nov 2020 10:49:53 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: As far as I know, DPDK enabled cores will show 100% usage at all times. On Sun, Nov 8, 2020 at 9:39 AM Satish Patel wrote: > Folks, > > Recently i have added come compute nodes in cloud supporting > openvswitch-dpdk for performance. I am seeing all my PMD cpu cores are > 100% cpu usage on linux top command. It is normal behavior from first > looks. It's very scary to see 400% cpu usage on top. Can someone > confirm before I assume it's normal and what we can do to reduce it if > it's too high? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sun Nov 8 16:13:44 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 8 Nov 2020 11:13:44 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: Thanks. Just curious then why people directly go for SR-IOV implementation where they get better performance + they can use the same CPU more also. What are the measure advantages or features attracting the community to go with DPDK over SR-IOV? On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont wrote: > > As far as I know, DPDK enabled cores will show 100% usage at all times. > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel wrote: >> >> Folks, >> >> Recently i have added come compute nodes in cloud supporting >> openvswitch-dpdk for performance. I am seeing all my PMD cpu cores are >> 100% cpu usage on linux top command. It is normal behavior from first >> looks. It's very scary to see 400% cpu usage on top. Can someone >> confirm before I assume it's normal and what we can do to reduce it if >> it's too high? 
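(A side note on the original question about the 100% / 400% readings in top: the PMD threads busy-poll by design, so top will always show them pegged regardless of traffic. OVS-DPDK keeps its own per-PMD statistics that show how busy the threads really are; the commands below are only a sketch and assume shell access to ovs-vswitchd's control socket on the compute node.)

```
# Clear the PMD counters, let traffic run for a while, then inspect the
# processing vs. idle cycle split per PMD thread (sketch; run as root on
# the compute node).
ovs-appctl dpif-netdev/pmd-stats-clear
sleep 60
ovs-appctl dpif-netdev/pmd-stats-show
# A high share of "idle cycles" means the 100% in top is just the poll
# loop spinning on empty queues; "processing cycles" approaching the
# total is what actually indicates a saturated PMD core.
```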
>> From laurentfdumont at gmail.com Sun Nov 8 17:04:10 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Sun, 8 Nov 2020 12:04:10 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: I have limited hands-on experience with both but they don't serve the same purpose/have the same implementation. You use SRIOV to allow Tenants to access the NIC cards directly and bypass any inherent linux-vr/OVS performance limitations. This is key for NFV workloads which are expecting large amount of PPS + low latency (because they are often just virtualized bare-metal products with one real cloud-readiness/architecture ;) ) - This means that a Tenant with an SRIOV port can use DPDK + access the NIC through the VF which means (in theory) a better performance than OVS+DPDK. You use ovs-dpdk to increase the performance of OVS based flows (so provider networks + vxlan based internal-tenant networks). On Sun, Nov 8, 2020 at 11:13 AM Satish Patel wrote: > Thanks. Just curious then why people directly go for SR-IOV > implementation where they get better performance + they can use the > same CPU more also. What are the measure advantages or features > attracting the community to go with DPDK over SR-IOV? > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > wrote: > > > > As far as I know, DPDK enabled cores will show 100% usage at all times. > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > wrote: > >> > >> Folks, > >> > >> Recently i have added come compute nodes in cloud supporting > >> openvswitch-dpdk for performance. I am seeing all my PMD cpu cores are > >> 100% cpu usage on linux top command. It is normal behavior from first > >> looks. It's very scary to see 400% cpu usage on top. Can someone > >> confirm before I assume it's normal and what we can do to reduce it if > >> it's too high? > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tonyliu0592 at hotmail.com Sun Nov 8 20:22:47 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Sun, 8 Nov 2020 20:22:47 +0000 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: SRIOV gives you the maximum performance, without any SW features (security group, L3 routing, etc.), because it bypasses SW. DPDK gives you less performance, with all SW features. Depend on the use case, max perf and SW features, you will need to make a decision. Tony > -----Original Message----- > From: Laurent Dumont > Sent: Sunday, November 8, 2020 9:04 AM > To: Satish Patel > Cc: OpenStack Discuss > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > I have limited hands-on experience with both but they don't serve the > same purpose/have the same implementation. You use SRIOV to allow > Tenants to access the NIC cards directly and bypass any inherent linux- > vr/OVS performance limitations. This is key for NFV workloads which are > expecting large amount of PPS + low latency (because they are often just > virtualized bare-metal products with one real cloud- > readiness/architecture ;) ) - This means that a Tenant with an SRIOV > port can use DPDK + access the NIC through the VF which means (in theory) > a better performance than OVS+DPDK. > > You use ovs-dpdk to increase the performance of OVS based flows (so > provider networks + vxlan based internal-tenant networks). > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > wrote: > > > Thanks. 
Just curious then why people directly go for SR-IOV > implementation where they get better performance + they can use the > same CPU more also. What are the measure advantages or features > attracting the community to go with DPDK over SR-IOV? > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > wrote: > > > > As far as I know, DPDK enabled cores will show 100% usage at all > times. > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > wrote: > >> > >> Folks, > >> > >> Recently i have added come compute nodes in cloud supporting > >> openvswitch-dpdk for performance. I am seeing all my PMD cpu > cores are > >> 100% cpu usage on linux top command. It is normal behavior from > first > >> looks. It's very scary to see 400% cpu usage on top. Can someone > >> confirm before I assume it's normal and what we can do to reduce > it if > >> it's too high? > >> > From satish.txt at gmail.com Mon Nov 9 02:50:37 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 8 Nov 2020 21:50:37 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: Thank you tony, We are running openstack cloud with SR-IOV and we are happy with performance but one big issue, it doesn't support bonding on compute nodes, we can do bonding inside VM but that is over complicated to do that level of deployment, without bonding it's always risky if tor switch dies. that is why i started looking into DPDK but look like i hit the wall again because my compute node has only 2 NIC we i can't do bonding while i am connected over same nic. Anyway i will stick with SR-IOV in that case to get more performance and less complexity. On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > SRIOV gives you the maximum performance, without any SW features > (security group, L3 routing, etc.), because it bypasses SW. > DPDK gives you less performance, with all SW features. > > Depend on the use case, max perf and SW features, you will need > to make a decision. > > > Tony > > -----Original Message----- > > From: Laurent Dumont > > Sent: Sunday, November 8, 2020 9:04 AM > > To: Satish Patel > > Cc: OpenStack Discuss > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > I have limited hands-on experience with both but they don't serve the > > same purpose/have the same implementation. You use SRIOV to allow > > Tenants to access the NIC cards directly and bypass any inherent linux- > > vr/OVS performance limitations. This is key for NFV workloads which are > > expecting large amount of PPS + low latency (because they are often just > > virtualized bare-metal products with one real cloud- > > readiness/architecture ;) ) - This means that a Tenant with an SRIOV > > port can use DPDK + access the NIC through the VF which means (in theory) > > a better performance than OVS+DPDK. > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > provider networks + vxlan based internal-tenant networks). > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > wrote: > > > > > > Thanks. Just curious then why people directly go for SR-IOV > > implementation where they get better performance + they can use the > > same CPU more also. What are the measure advantages or features > > attracting the community to go with DPDK over SR-IOV? > > > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > wrote: > > > > > > As far as I know, DPDK enabled cores will show 100% usage at all > > times. 
> > > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > wrote: > > >> > > >> Folks, > > >> > > >> Recently i have added come compute nodes in cloud supporting > > >> openvswitch-dpdk for performance. I am seeing all my PMD cpu > > cores are > > >> 100% cpu usage on linux top command. It is normal behavior from > > first > > >> looks. It's very scary to see 400% cpu usage on top. Can someone > > >> confirm before I assume it's normal and what we can do to reduce > > it if > > >> it's too high? > > >> > > > From tonyliu0592 at hotmail.com Mon Nov 9 04:41:09 2020 From: tonyliu0592 at hotmail.com (Tony Liu) Date: Mon, 9 Nov 2020 04:41:09 +0000 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: Bonding is a SW feature supported by either kernel or DPDK layer. In case of SRIOV, it's not complicated to enable bonding inside VM. And it has to be two NICs connecting to two ToRs. Depending on DPDK implementation, you might be able to use VF. Anyways, it's always recommended to have dedicated NIC for SRIOV. Thanks! Tony > -----Original Message----- > From: Satish Patel > Sent: Sunday, November 8, 2020 6:51 PM > To: Tony Liu > Cc: Laurent Dumont ; OpenStack Discuss > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > Thank you tony, > > We are running openstack cloud with SR-IOV and we are happy with > performance but one big issue, it doesn't support bonding on compute > nodes, we can do bonding inside VM but that is over complicated to do > that level of deployment, without bonding it's always risky if tor > switch dies. that is why i started looking into DPDK but look like i hit > the wall again because my compute node has only 2 NIC we i can't do > bonding while i am connected over same nic. Anyway i will stick with SR- > IOV in that case to get more performance and less complexity. > > On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > > > SRIOV gives you the maximum performance, without any SW features > > (security group, L3 routing, etc.), because it bypasses SW. > > DPDK gives you less performance, with all SW features. > > > > Depend on the use case, max perf and SW features, you will need to > > make a decision. > > > > > > Tony > > > -----Original Message----- > > > From: Laurent Dumont > > > Sent: Sunday, November 8, 2020 9:04 AM > > > To: Satish Patel > > > Cc: OpenStack Discuss > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > I have limited hands-on experience with both but they don't serve > > > the same purpose/have the same implementation. You use SRIOV to > > > allow Tenants to access the NIC cards directly and bypass any > > > inherent linux- vr/OVS performance limitations. This is key for NFV > > > workloads which are expecting large amount of PPS + low latency > > > (because they are often just virtualized bare-metal products with > > > one real cloud- readiness/architecture ;) ) - This means that a > > > Tenant with an SRIOV port can use DPDK + access the NIC through the > > > VF which means (in theory) a better performance than OVS+DPDK. > > > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > > provider networks + vxlan based internal-tenant networks). > > > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > > wrote: > > > > > > > > > Thanks. Just curious then why people directly go for SR-IOV > > > implementation where they get better performance + they can > use the > > > same CPU more also. 
What are the measure advantages or > features > > > attracting the community to go with DPDK over SR-IOV? > > > > > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > > wrote: > > > > > > > > As far as I know, DPDK enabled cores will show 100% usage at > > > all times. > > > > > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > > > wrote: > > > >> > > > >> Folks, > > > >> > > > >> Recently i have added come compute nodes in cloud > supporting > > > >> openvswitch-dpdk for performance. I am seeing all my PMD > > > cpu cores are > > > >> 100% cpu usage on linux top command. It is normal behavior > > > from first > > > >> looks. It's very scary to see 400% cpu usage on top. Can > someone > > > >> confirm before I assume it's normal and what we can do to > > > reduce it if > > > >> it's too high? > > > >> > > > > > From skaplons at redhat.com Mon Nov 9 07:36:28 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 09 Nov 2020 08:36:28 +0100 Subject: [neutron] Use the same network interface with multiple ML2 mechanism drivers In-Reply-To: References: Message-ID: <5309547.V514dPugnQ@p1> Hi, Dnia niedziela, 8 listopada 2020 04:06:50 CET Gabriel Omar Gamero Montenegro pisze: > Dear all, > > I know that ML2 Neutron core plugin is designed to > support multiple mechanism and type drivers simultaneously. > But I'd like to know: is it possible to use the > same network interface configured with different > ML2 mechanism drivers? > > I'm planning to use openvswitch and linuxbridge as > mechanism drivers along with VLAN as type driver. > Could it be possible to have the following > configuration for that purpose?: > > ml2_conf.ini: > [ml2] > mechanism_drivers = openvswitch,linuxbridge > [ml2_type_vlan] > network_vlan_ranges = physnet1:40:60,physnet2:60:80 > > eth3 is a port of the provider bridge: > ovs-vsctl add-port br-provider eth3 > > openvswitch_agent.ini: > [ovs] > bridge_mappings = physnet1:br-provider > > linuxbridge_agent.ini: > [linux_bridge] > physical_interface_mappings = physnet2:eth3 > I don't think it's will work because You would need to have same interface in the ovs bridge (br-provider) and use it by linuxbridge. But TBH this is a bit strange configuration for me. I can imaging different computes which are using different backends. But why You want to use linuxbridge and openvswitch agents together on same compute node? > If it's mandatory to use different network interfaces > any guide or sample reference about implementing > multiple mechanism drivers would be highly appreciated. > > Thanks in advance, > Gabriel Gamero -- Slawek Kaplonski Principal Software Engineer Red Hat From ionut at fleio.com Mon Nov 9 10:30:11 2020 From: ionut at fleio.com (Ionut Biru) Date: Mon, 9 Nov 2020 12:30:11 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, I tried to run manually from inside the heat container the same commands and they worked fine. I think it doesn't inherit the values from /etc/bashrc On Fri, Nov 6, 2020 at 2:40 PM Ionut Biru wrote: > Hi, > > yes, it's deployed: https://paste.xinu.at/pEz/#n855 > > On Fri, Nov 6, 2020 at 2:36 PM Spyros Trigazis wrote: > >> Check if you have this patch deployed: >> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >> >> On Fri, Nov 6, 2020 at 1:03 PM Ionut Biru wrote: >> >>> Hi, >>> >>> I wonder if it is because the image is from fedora:rawhide and something >>> is too new in there and breaks the flow? 
>>> >>> https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 >>> >>> >>> On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: >>> >>>> Hi, >>>> >>>> I'm using stable/victoria. It works fine with an older image. >>>> >>>> On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis >>>> wrote: >>>> >>>>> I see, this says kubectl misconfigured not missing. >>>>> error: Missing or incomplete configuration info. Please point to an >>>>> existing, complete config file: >>>>> >>>>> I guess you miss some patches: >>>>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>>>> >>>>> Try using an older image of the agent or take the patch above. >>>>> >>>>> Spyros >>>>> >>>>> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> Not sure if because kubectl is not found but here are the logs, I >>>>>> cannot figure out why it is not working with the new heat image. >>>>>> >>>>>> journalctl https://paste.xinu.at/jsA/ >>>>>> heat-config master https://paste.xinu.at/pEz/ >>>>>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>>>>> >>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>>> get pods --all-namespaces >>>>>> No resources found >>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>>> get all --all-namespaces >>>>>> NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP >>>>>> PORT(S) AGE >>>>>> default service/kubernetes ClusterIP 10.254.0.1 >>>>>> 443/TCP 36m >>>>>> >>>>>> It fails to deploy and I don't have any services configured. >>>>>> >>>>>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>>>>> wrote: >>>>>> >>>>>>> Am I missing something? It is there. >>>>>>> >>>>>>> Spyros >>>>>>> >>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>>>>> --client --short >>>>>>> Client Version: v1.18.2 >>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>>>>> version --client --short >>>>>>> Client Version: v1.18.2 >>>>>>> >>>>>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>>>>>> >>>>>>>> I wanted to say, 20 hours ago instead of days. >>>>>>>> >>>>>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru wrote: >>>>>>>> >>>>>>>>> Hi, >>>>>>>>> >>>>>>>>> 20 days ago the image was updated and now all clusters fail to >>>>>>>>> deploy because it cannot find >>>>>>>>> /usr/bin/kubectl, i think. >>>>>>>>> >>>>>>>>> /usr/bin/kubectl apply -f >>>>>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>>>>> >>>>>>>>> Some services are installed using full path instead of simply >>>>>>>>> kubectl. >>>>>>>>> >>>>>>>>> Should the image be fixed or reverted the changes to old revision >>>>>>>>> or we should just fix the scripts? >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Ionut Biru - https://fleio.com >>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Ionut Biru - https://fleio.com >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> Ionut Biru - https://fleio.com >>>>>> >>>>> >>>> >>>> -- >>>> Ionut Biru - https://fleio.com >>>> >>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> > > -- > Ionut Biru - https://fleio.com > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From strigazi at gmail.com Mon Nov 9 10:43:10 2020 From: strigazi at gmail.com (Spyros Trigazis) Date: Mon, 9 Nov 2020 11:43:10 +0100 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Please follow/comment https://storyboard.openstack.org/#!/story/2007591 Thanks, Spyros On Mon, Nov 9, 2020 at 11:30 AM Ionut Biru wrote: > Hi, > > I tried to run manually from inside the heat container the same commands > and they worked fine. > > I think it doesn't inherit the values from /etc/bashrc > > On Fri, Nov 6, 2020 at 2:40 PM Ionut Biru wrote: > >> Hi, >> >> yes, it's deployed: https://paste.xinu.at/pEz/#n855 >> >> On Fri, Nov 6, 2020 at 2:36 PM Spyros Trigazis >> wrote: >> >>> Check if you have this patch deployed: >>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>> >>> On Fri, Nov 6, 2020 at 1:03 PM Ionut Biru wrote: >>> >>>> Hi, >>>> >>>> I wonder if it is because the image is from fedora:rawhide and >>>> something is too new in there and breaks the flow? >>>> >>>> https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 >>>> >>>> >>>> On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: >>>> >>>>> Hi, >>>>> >>>>> I'm using stable/victoria. It works fine with an older image. >>>>> >>>>> On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis >>>>> wrote: >>>>> >>>>>> I see, this says kubectl misconfigured not missing. >>>>>> error: Missing or incomplete configuration info. Please point to an >>>>>> existing, complete config file: >>>>>> >>>>>> I guess you miss some patches: >>>>>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>>>>> >>>>>> Try using an older image of the agent or take the patch above. >>>>>> >>>>>> Spyros >>>>>> >>>>>> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >>>>>> >>>>>>> Hi, >>>>>>> >>>>>>> Not sure if because kubectl is not found but here are the logs, I >>>>>>> cannot figure out why it is not working with the new heat image. >>>>>>> >>>>>>> journalctl https://paste.xinu.at/jsA/ >>>>>>> heat-config master https://paste.xinu.at/pEz/ >>>>>>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>>>>>> >>>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>>>> get pods --all-namespaces >>>>>>> No resources found >>>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# kubectl >>>>>>> get all --all-namespaces >>>>>>> NAMESPACE NAME TYPE CLUSTER-IP >>>>>>> EXTERNAL-IP PORT(S) AGE >>>>>>> default service/kubernetes ClusterIP 10.254.0.1 >>>>>>> 443/TCP 36m >>>>>>> >>>>>>> It fails to deploy and I don't have any services configured. >>>>>>> >>>>>>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>>>>>> wrote: >>>>>>> >>>>>>>> Am I missing something? It is there. >>>>>>>> >>>>>>>> Spyros >>>>>>>> >>>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>>>>>> --client --short >>>>>>>> Client Version: v1.18.2 >>>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>>>>>> version --client --short >>>>>>>> Client Version: v1.18.2 >>>>>>>> >>>>>>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru wrote: >>>>>>>> >>>>>>>>> I wanted to say, 20 hours ago instead of days. 
>>>>>>>>> >>>>>>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> Hi, >>>>>>>>>> >>>>>>>>>> 20 days ago the image was updated and now all clusters fail to >>>>>>>>>> deploy because it cannot find >>>>>>>>>> /usr/bin/kubectl, i think. >>>>>>>>>> >>>>>>>>>> /usr/bin/kubectl apply -f >>>>>>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>>>>>> >>>>>>>>>> Some services are installed using full path instead of simply >>>>>>>>>> kubectl. >>>>>>>>>> >>>>>>>>>> Should the image be fixed or reverted the changes to old revision >>>>>>>>>> or we should just fix the scripts? >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Ionut Biru - https://fleio.com >>>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> Ionut Biru - https://fleio.com >>>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> Ionut Biru - https://fleio.com >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> Ionut Biru - https://fleio.com >>>>> >>>> >>>> >>>> -- >>>> Ionut Biru - https://fleio.com >>>> >>> >> >> -- >> Ionut Biru - https://fleio.com >> > > > -- > Ionut Biru - https://fleio.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Nov 9 10:44:23 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 9 Nov 2020 11:44:23 +0100 Subject: [largescale-sig] Future meeting time In-Reply-To: <580916f8-dd2e-fbca-56e4-b3ae0dfca85d@openstack.org> References: <580916f8-dd2e-fbca-56e4-b3ae0dfca85d@openstack.org> Message-ID: Thierry Carrez wrote: > During the PTG we discussed our meeting time. Currently we are rotating > every two weeks between a APAC-EU friendly time (8utc) and a EU-US > friendly time (16utc). However, rotating the meeting between two > timezones resulted in fragmenting the already-small group, with only > Europeans being regular attendees. > > After discussing this at the PTG with Gene Kuo, we think it's easier to > have a single meeting time, if we can find one that works for all active > members. > > Please enter your preferences into this date poll for Wednesday times: > https://framadate.org/ue2eOBTT3QXoFhKg > > Feel free to add comments pointing to better times for you, like other > days that would work better for you than Wednesdays for example. Thanks to all those who responded, I selected Wednesdays at 14UTC for our Wallaby cycle meetings. Next meeting will be next week, Nov 18. mark your calendars ! -- Thierry Carrez (ttx) From mark at stackhpc.com Mon Nov 9 10:44:51 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 9 Nov 2020 10:44:51 +0000 Subject: [kolla] IRC meeting cancelled on 11th November Message-ID: Hi, Since I and a number of cores are away on Wednesday 11th November, let's cancel the IRC meeting. The focus should be on continuing to stabilise the Victoria release. Thanks, Mark From mark at stackhpc.com Mon Nov 9 11:33:50 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 9 Nov 2020 11:33:50 +0000 Subject: [kolla] Proposing wu.chunyang for Kolla core Message-ID: Hi, I would like to propose adding wu.chunyang to the kolla-core and kolla-ansible-core groups. wu.chunyang did some great work on Octavia integration in the Victoria release, and has provided some helpful reviews. Cores - please reply +1/-1 before the end of Friday 13th November. 
Thanks, Mark From marcin.juszkiewicz at linaro.org Mon Nov 9 11:35:56 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Mon, 9 Nov 2020 12:35:56 +0100 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: W dniu 09.11.2020 o 12:33, Mark Goddard pisze: > Hi, > > I would like to propose adding wu.chunyang to the kolla-core and > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > integration in the Victoria release, and has provided some helpful > reviews. > > Cores - please reply +1/-1 before the end of Friday 13th November. +1 From ionut at fleio.com Mon Nov 9 11:40:13 2020 From: ionut at fleio.com (Ionut Biru) Date: Mon, 9 Nov 2020 13:40:13 +0200 Subject: [magnum] heat-container-agent:victoria-dev In-Reply-To: References: Message-ID: Hi, Thanks for pointing out the story, It help me to fix it. Please review: https://review.opendev.org/#/c/761897/ On Mon, Nov 9, 2020 at 12:43 PM Spyros Trigazis wrote: > Please follow/comment https://storyboard.openstack.org/#!/story/2007591 > > Thanks, > Spyros > > On Mon, Nov 9, 2020 at 11:30 AM Ionut Biru wrote: > >> Hi, >> >> I tried to run manually from inside the heat container the same commands >> and they worked fine. >> >> I think it doesn't inherit the values from /etc/bashrc >> >> On Fri, Nov 6, 2020 at 2:40 PM Ionut Biru wrote: >> >>> Hi, >>> >>> yes, it's deployed: https://paste.xinu.at/pEz/#n855 >>> >>> On Fri, Nov 6, 2020 at 2:36 PM Spyros Trigazis >>> wrote: >>> >>>> Check if you have this patch deployed: >>>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>>> >>>> On Fri, Nov 6, 2020 at 1:03 PM Ionut Biru wrote: >>>> >>>>> Hi, >>>>> >>>>> I wonder if it is because the image is from fedora:rawhide and >>>>> something is too new in there and breaks the flow? >>>>> >>>>> https://opendev.org/openstack/magnum/commit/1de7a6af5ee2b834360e3daba34adb6b908fa035 >>>>> >>>>> >>>>> On Fri, Nov 6, 2020 at 1:34 PM Ionut Biru wrote: >>>>> >>>>>> Hi, >>>>>> >>>>>> I'm using stable/victoria. It works fine with an older image. >>>>>> >>>>>> On Fri, Nov 6, 2020 at 1:28 PM Spyros Trigazis >>>>>> wrote: >>>>>> >>>>>>> I see, this says kubectl misconfigured not missing. >>>>>>> error: Missing or incomplete configuration info. Please point to an >>>>>>> existing, complete config file: >>>>>>> >>>>>>> I guess you miss some patches: >>>>>>> https://review.opendev.org/#/q/I15ef91bbec20a8037d47902225eabb3082579705 >>>>>>> >>>>>>> Try using an older image of the agent or take the patch above. >>>>>>> >>>>>>> Spyros >>>>>>> >>>>>>> On Fri, Nov 6, 2020 at 12:01 PM Ionut Biru wrote: >>>>>>> >>>>>>>> Hi, >>>>>>>> >>>>>>>> Not sure if because kubectl is not found but here are the logs, I >>>>>>>> cannot figure out why it is not working with the new heat image. >>>>>>>> >>>>>>>> journalctl https://paste.xinu.at/jsA/ >>>>>>>> heat-config master https://paste.xinu.at/pEz/ >>>>>>>> heat-config master cluster config https://paste.xinu.at/K85ZY5/ >>>>>>>> >>>>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# >>>>>>>> kubectl get pods --all-namespaces >>>>>>>> No resources found >>>>>>>> [root at cluster001-7dby23lm6ods-master-0 heat-config-script]# >>>>>>>> kubectl get all --all-namespaces >>>>>>>> NAMESPACE NAME TYPE CLUSTER-IP >>>>>>>> EXTERNAL-IP PORT(S) AGE >>>>>>>> default service/kubernetes ClusterIP 10.254.0.1 >>>>>>>> 443/TCP 36m >>>>>>>> >>>>>>>> It fails to deploy and I don't have any services configured. 
>>>>>>>> >>>>>>>> On Fri, Nov 6, 2020 at 12:43 PM Spyros Trigazis >>>>>>>> wrote: >>>>>>>> >>>>>>>>> Am I missing something? It is there. >>>>>>>>> >>>>>>>>> Spyros >>>>>>>>> >>>>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent:victoria-dev version >>>>>>>>> --client --short >>>>>>>>> Client Version: v1.18.2 >>>>>>>>> ubuntu at str-focal-2xlarge-01:~$ docker run -it --entrypoint >>>>>>>>> /usr/bin/kubectl openstackmagnum/heat-container-agent at sha256:0f354341b1970b7dacd9825a1c7585bae5495842416123941965574374922b51 >>>>>>>>> version --client --short >>>>>>>>> Client Version: v1.18.2 >>>>>>>>> >>>>>>>>> On Fri, Nov 6, 2020 at 11:33 AM Ionut Biru >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>>> I wanted to say, 20 hours ago instead of days. >>>>>>>>>> >>>>>>>>>> On Fri, Nov 6, 2020 at 12:33 PM Ionut Biru >>>>>>>>>> wrote: >>>>>>>>>> >>>>>>>>>>> Hi, >>>>>>>>>>> >>>>>>>>>>> 20 days ago the image was updated and now all clusters fail to >>>>>>>>>>> deploy because it cannot find >>>>>>>>>>> /usr/bin/kubectl, i think. >>>>>>>>>>> >>>>>>>>>>> /usr/bin/kubectl apply -f >>>>>>>>>>> /srv/magnum/kubernetes/manifests/calico-deploy.yaml --namespace=kube-system >>>>>>>>>>> >>>>>>>>>>> Some services are installed using full path instead of simply >>>>>>>>>>> kubectl. >>>>>>>>>>> >>>>>>>>>>> Should the image be fixed or reverted the changes to old >>>>>>>>>>> revision or we should just fix the scripts? >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> Ionut Biru - https://fleio.com >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> Ionut Biru - https://fleio.com >>>>>>>>>> >>>>>>>>> >>>>>>>> >>>>>>>> -- >>>>>>>> Ionut Biru - https://fleio.com >>>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> Ionut Biru - https://fleio.com >>>>>> >>>>> >>>>> >>>>> -- >>>>> Ionut Biru - https://fleio.com >>>>> >>>> >>> >>> -- >>> Ionut Biru - https://fleio.com >>> >> >> >> -- >> Ionut Biru - https://fleio.com >> > -- Ionut Biru - https://fleio.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Nov 9 12:12:38 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 9 Nov 2020 13:12:38 +0100 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: On Mon, Nov 9, 2020 at 12:34 PM Mark Goddard wrote: > > Hi, > > I would like to propose adding wu.chunyang to the kolla-core and > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > integration in the Victoria release, and has provided some helpful > reviews. > > Cores - please reply +1/-1 before the end of Friday 13th November. +1 From mnasiadka at gmail.com Mon Nov 9 12:21:52 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Mon, 9 Nov 2020 13:21:52 +0100 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: +1 pon., 9 lis 2020 o 12:35 Mark Goddard napisał(a): > Hi, > > I would like to propose adding wu.chunyang to the kolla-core and > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > integration in the Victoria release, and has provided some helpful > reviews. > > Cores - please reply +1/-1 before the end of Friday 13th November. > > Thanks, > Mark > > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Mon Nov 9 13:25:25 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Nov 2020 13:25:25 +0000 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: <7b6c8fa04ea74598ba2a9d365f1ac72aee8f872c.camel@redhat.com> On Sun, 2020-11-08 at 09:31 -0500, Satish Patel wrote: > Folks, > > Recently i have added come compute nodes in cloud supporting > openvswitch-dpdk for performance. I am seeing all my PMD cpu cores are > 100% cpu usage on linux top command. yes this is perfectly normal and how dpdk is intended to work PMD stands for pool mode driver. the dpdk driver is runing in a busy loop polling the nic for new packets to process so form a linux perspective the core wil be used 100%. dpdk has its own stats for pmd useage that tell you the actul capasity but thre is nothing to be alarmed by. > It is normal behavior from first > looks. It's very scary to see 400% cpu usage on top. Can someone > confirm before I assume it's normal and what we can do to reduce it if > it's too high? > From fungi at yuggoth.org Mon Nov 9 13:48:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 9 Nov 2020 13:48:18 +0000 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: <20201106175052.7v3iwde5aw43xuy4@yuggoth.org> References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> <20201106173521.3xmgzwoqs2s4hkep@yuggoth.org> <20201106175052.7v3iwde5aw43xuy4@yuggoth.org> Message-ID: <20201109134818.sj75rxdurliuvq5k@yuggoth.org> On 2020-11-06 17:50:52 +0000 (+0000), Jeremy Stanley wrote: > On 2020-11-06 17:35:21 +0000 (+0000), Jeremy Stanley wrote: > [...] > > I propose we switch release jobs to run on an ubuntu-focal nodeset, > > as most official OpenStack jobs did already during the Victoria > > cycle. > [...] > > Now up for review: https://review.opendev.org/761776 This seems to have solved the problem. We also observed successful releases for the nova repository, including stable point releases for stable/stein and stable/train, so this presumably hasn't broken more common cases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From smooney at redhat.com Mon Nov 9 14:03:06 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Nov 2020 14:03:06 +0000 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: Message-ID: <5d30d8f4a6cc6ea92c004f685d1e4501700cae80.camel@redhat.com> On Mon, 2020-11-09 at 04:41 +0000, Tony Liu wrote: > Bonding is a SW feature supported by either kernel or DPDK layer. > In case of SRIOV, it's not complicated to enable bonding inside VM. > And it has to be two NICs connecting to two ToRs. > > Depending on DPDK implementation, you might be able to use VF. > Anyways, it's always recommended to have dedicated NIC for SRIOV. for what its worth melonox do support bondign fo VF on the same card i have never used it but bonding on the host is possibel for sriov. im not sure if it works with openstack however but i belvie it does. you will have to reach out to mellonox to determin if it is. most other nic vendors do not support bonding and it may limit other feature like bandwith based schduling as really you can only list one of the interfaces bandwith because you cant contol which interface is activly being used. > > > Thanks! 
> Tony > > -----Original Message----- > > From: Satish Patel > > Sent: Sunday, November 8, 2020 6:51 PM > > To: Tony Liu > > Cc: Laurent Dumont ; OpenStack Discuss > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > Thank you tony, > > > > We are running openstack cloud with SR-IOV and we are happy with > > performance but one big issue, it doesn't support bonding on compute > > nodes, we can do bonding inside VM but that is over complicated to do > > that level of deployment, without bonding it's always risky if tor > > switch dies. that is why i started looking into DPDK but look like i hit > > the wall again because my compute node has only 2 NIC we i can't do > > bonding while i am connected over same nic. Anyway i will stick with SR- > > IOV in that case to get more performance and less complexity. > > > > On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > > > > > SRIOV gives you the maximum performance, without any SW features > > > (security group, L3 routing, etc.), because it bypasses SW. > > > DPDK gives you less performance, with all SW features. > > > > > > Depend on the use case, max perf and SW features, you will need to > > > make a decision. > > > > > > > > > Tony > > > > -----Original Message----- > > > > From: Laurent Dumont > > > > Sent: Sunday, November 8, 2020 9:04 AM > > > > To: Satish Patel > > > > Cc: OpenStack Discuss > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > I have limited hands-on experience with both but they don't serve > > > > the same purpose/have the same implementation. You use SRIOV to > > > > allow Tenants to access the NIC cards directly and bypass any > > > > inherent linux- vr/OVS performance limitations. This is key for NFV > > > > workloads which are expecting large amount of PPS + low latency > > > > (because they are often just virtualized bare-metal products with > > > > one real cloud- readiness/architecture ;) ) - This means that a > > > > Tenant with an SRIOV port can use DPDK + access the NIC through the > > > > VF which means (in theory) a better performance than OVS+DPDK. > > > > > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > > > provider networks + vxlan based internal-tenant networks). > > > > > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > > > wrote: > > > > > > > > > > > >        Thanks. Just curious then why people directly go for SR-IOV > > > >       implementation where they get better performance + they can > > use the > > > >       same CPU more also. What are the measure advantages or > > features > > > >       attracting the community to go with DPDK over SR-IOV? > > > > > > > >       On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > > > wrote: > > > >       > > > > >       > As far as I know, DPDK enabled cores will show 100% usage at > > > > all times. > > > >       > > > > >       > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > > > > wrote: > > > >       >> > > > >       >> Folks, > > > >       >> > > > >       >> Recently i have added come compute nodes in cloud > > supporting > > > >       >> openvswitch-dpdk for performance. I am seeing all my PMD > > > > cpu cores are > > > >       >> 100% cpu usage on linux top command. It is normal behavior > > > > from first > > > >       >> looks. It's very scary to see 400% cpu usage on top. Can > > someone > > > >       >> confirm before I assume it's normal and what we can do to > > > > reduce it if > > > >       >> it's too high? 
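For reference, a quick way to see how busy the PMD threads actually are (as opposed to the 100% that top will always report for a busy-polling core) is to use the counters that OVS itself keeps for the DPDK datapath. A minimal check, assuming a standard ovs-vswitchd with DPDK enabled and unmodified upstream command names, might look like this:

ovs-appctl dpif-netdev/pmd-stats-clear
# let traffic run for a while, then:
ovs-appctl dpif-netdev/pmd-stats-show
ovs-appctl dpif-netdev/pmd-rxq-show

The ratio of processing cycles to idle cycles in the pmd-stats-show output, and the per-Rx-queue usage in pmd-rxq-show, are the numbers that reflect real PMD load rather than the raw CPU usage shown by top.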
> > > >       >> > > > > > > > From satish.txt at gmail.com Mon Nov 9 14:13:44 2020 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 9 Nov 2020 09:13:44 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: <5d30d8f4a6cc6ea92c004f685d1e4501700cae80.camel@redhat.com> References: <5d30d8f4a6cc6ea92c004f685d1e4501700cae80.camel@redhat.com> Message-ID: Thank Sean, I have Intel NIC [root at infra-lxb-1 ~]# lspci | grep -i eth 06:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) 06:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual Port Backplane Connection (rev 01) I was thinking if i can create a couple VF out of SR-IOV interface and on a computer machine i create two bonding interfaces. bond-1 for mgmt and bond-2 for OVS+DPDK then it will solve my all problem related TOR switches redundancy. I don't think we can add VF as an interface in OVS for DPDK. On Mon, Nov 9, 2020 at 9:03 AM Sean Mooney wrote: > > On Mon, 2020-11-09 at 04:41 +0000, Tony Liu wrote: > > Bonding is a SW feature supported by either kernel or DPDK layer. > > In case of SRIOV, it's not complicated to enable bonding inside VM. > > And it has to be two NICs connecting to two ToRs. > > > > Depending on DPDK implementation, you might be able to use VF. > > Anyways, it's always recommended to have dedicated NIC for SRIOV. > for what its worth melonox do support bondign fo VF on the same card > i have never used it but bonding on the host is possibel for sriov. > im not sure if it works with openstack however but i belvie it does. > > you will have to reach out to mellonox to determin if it is. > most other nic vendors do not support bonding and it may limit other > feature like bandwith based schduling as really you can only list one of the interfaces bandwith > because you cant contol which interface is activly being used. > > > > > > > Thanks! > > Tony > > > -----Original Message----- > > > From: Satish Patel > > > Sent: Sunday, November 8, 2020 6:51 PM > > > To: Tony Liu > > > Cc: Laurent Dumont ; OpenStack Discuss > > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > Thank you tony, > > > > > > We are running openstack cloud with SR-IOV and we are happy with > > > performance but one big issue, it doesn't support bonding on compute > > > nodes, we can do bonding inside VM but that is over complicated to do > > > that level of deployment, without bonding it's always risky if tor > > > switch dies. that is why i started looking into DPDK but look like i hit > > > the wall again because my compute node has only 2 NIC we i can't do > > > bonding while i am connected over same nic. Anyway i will stick with SR- > > > IOV in that case to get more performance and less complexity. > > > > > > On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > > > > > > > SRIOV gives you the maximum performance, without any SW features > > > > (security group, L3 routing, etc.), because it bypasses SW. > > > > DPDK gives you less performance, with all SW features. > > > > > > > > Depend on the use case, max perf and SW features, you will need to > > > > make a decision. 
> > > > > > > > > > > > Tony > > > > > -----Original Message----- > > > > > From: Laurent Dumont > > > > > Sent: Sunday, November 8, 2020 9:04 AM > > > > > To: Satish Patel > > > > > Cc: OpenStack Discuss > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > > > I have limited hands-on experience with both but they don't serve > > > > > the same purpose/have the same implementation. You use SRIOV to > > > > > allow Tenants to access the NIC cards directly and bypass any > > > > > inherent linux- vr/OVS performance limitations. This is key for NFV > > > > > workloads which are expecting large amount of PPS + low latency > > > > > (because they are often just virtualized bare-metal products with > > > > > one real cloud- readiness/architecture ;) ) - This means that a > > > > > Tenant with an SRIOV port can use DPDK + access the NIC through the > > > > > VF which means (in theory) a better performance than OVS+DPDK. > > > > > > > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > > > > provider networks + vxlan based internal-tenant networks). > > > > > > > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > > > > wrote: > > > > > > > > > > > > > > > Thanks. Just curious then why people directly go for SR-IOV > > > > > implementation where they get better performance + they can > > > use the > > > > > same CPU more also. What are the measure advantages or > > > features > > > > > attracting the community to go with DPDK over SR-IOV? > > > > > > > > > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > > > > wrote: > > > > > > > > > > > > As far as I know, DPDK enabled cores will show 100% usage at > > > > > all times. > > > > > > > > > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > > > > > wrote: > > > > > >> > > > > > >> Folks, > > > > > >> > > > > > >> Recently i have added come compute nodes in cloud > > > supporting > > > > > >> openvswitch-dpdk for performance. I am seeing all my PMD > > > > > cpu cores are > > > > > >> 100% cpu usage on linux top command. It is normal behavior > > > > > from first > > > > > >> looks. It's very scary to see 400% cpu usage on top. Can > > > someone > > > > > >> confirm before I assume it's normal and what we can do to > > > > > reduce it if > > > > > >> it's too high? > > > > > >> > > > > > > > > > > > From hberaud at redhat.com Mon Nov 9 14:17:00 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 9 Nov 2020 15:17:00 +0100 Subject: [Release-job-failures] Release of openstack/ansible-role-redhat-subscription for ref refs/tags/1.1.1 failed In-Reply-To: <20201109134818.sj75rxdurliuvq5k@yuggoth.org> References: <642745d9-ca22-3830-c9d1-36f3e97a458d@openstack.org> <20201106173521.3xmgzwoqs2s4hkep@yuggoth.org> <20201106175052.7v3iwde5aw43xuy4@yuggoth.org> <20201109134818.sj75rxdurliuvq5k@yuggoth.org> Message-ID: >From the release management side I confirm that this fix solved the problem, ansible-role-redhat-subscription 1.1.1 have been re-enqueued and successfully released on pypi [1]. Thanks Jeremy for all things done on this topic. [1] https://pypi.org/project/ansible-role-redhat-subscription/#history Le lun. 9 nov. 2020 à 14:52, Jeremy Stanley a écrit : > On 2020-11-06 17:50:52 +0000 (+0000), Jeremy Stanley wrote: > > On 2020-11-06 17:35:21 +0000 (+0000), Jeremy Stanley wrote: > > [...] > > > I propose we switch release jobs to run on an ubuntu-focal nodeset, > > > as most official OpenStack jobs did already during the Victoria > > > cycle. > > [...] 
> > > > Now up for review: https://review.opendev.org/761776 > > This seems to have solved the problem. We also observed successful > releases for the nova repository, including stable point releases > for stable/stein and stable/train, so this presumably hasn't broken > more common cases. > -- > Jeremy Stanley > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From gabriel.gamero at pucp.edu.pe Mon Nov 9 14:24:57 2020 From: gabriel.gamero at pucp.edu.pe (Gabriel Omar Gamero Montenegro) Date: Mon, 9 Nov 2020 09:24:57 -0500 Subject: [neutron] Use the same network interface with multiple ML2 mechanism drivers In-Reply-To: <5309547.V514dPugnQ@p1> References: <5309547.V514dPugnQ@p1> Message-ID: I'm planning to deploy OpenStack with 2 mechanism drivers on physical servers with only one network interface card: openvswitch and SR-IOV. According to what I read, it is possible to use the same physical interface using these 2 mechanism drivers. I assume this is possible because a NIC with SR-IOV capabilities can divide a NIC into a PhysicalFunction (which I'm using for openvswitch) and many VirtualFunctions (which I'm using for SR-IOV). Before editing something on physical servers I was planning to use a test environment with virtual machines where I do not count with a NIC with SR-IOV capabilities. Since the mechanism driver of openvswitch will work with security groups and the SR-IOV mechanism driver can't have security groups enabled, I was planning to use linux bridge as a replacement and disable the security feature. Thus, I can make security tests with a SDN module that I'm developing for networks with SR-IOV in OpenStack. Thanks. Gabriel Gamero. El lun., 9 nov. 2020 a las 2:36, Slawek Kaplonski () escribió: > > Hi, > > Dnia niedziela, 8 listopada 2020 04:06:50 CET Gabriel Omar Gamero Montenegro > pisze: > > Dear all, > > > > I know that ML2 Neutron core plugin is designed to > > support multiple mechanism and type drivers simultaneously. > > But I'd like to know: is it possible to use the > > same network interface configured with different > > ML2 mechanism drivers? > > > > I'm planning to use openvswitch and linuxbridge as > > mechanism drivers along with VLAN as type driver. 
> > Could it be possible to have the following > > configuration for that purpose?: > > > > ml2_conf.ini: > > [ml2] > > mechanism_drivers = openvswitch,linuxbridge > > [ml2_type_vlan] > > network_vlan_ranges = physnet1:40:60,physnet2:60:80 > > > > eth3 is a port of the provider bridge: > > ovs-vsctl add-port br-provider eth3 > > > > openvswitch_agent.ini: > > [ovs] > > bridge_mappings = physnet1:br-provider > > > > linuxbridge_agent.ini: > > [linux_bridge] > > physical_interface_mappings = physnet2:eth3 > > > > I don't think it's will work because You would need to have same interface in > the ovs bridge (br-provider) and use it by linuxbridge. > But TBH this is a bit strange configuration for me. I can imaging different > computes which are using different backends. But why You want to use > linuxbridge and openvswitch agents together on same compute node? > > > If it's mandatory to use different network interfaces > > any guide or sample reference about implementing > > multiple mechanism drivers would be highly appreciated. > > > > Thanks in advance, > > Gabriel Gamero > > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat > > From smooney at redhat.com Mon Nov 9 14:28:33 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Nov 2020 14:28:33 +0000 Subject: [neutron] Use the same network interface with multiple ML2 mechanism drivers In-Reply-To: <5309547.V514dPugnQ@p1> References: <5309547.V514dPugnQ@p1> Message-ID: On Mon, 2020-11-09 at 08:36 +0100, Slawek Kaplonski wrote: > Hi, > > Dnia niedziela, 8 listopada 2020 04:06:50 CET Gabriel Omar Gamero Montenegro > pisze: > > Dear all, > > > > I know that ML2 Neutron core plugin is designed to > > support multiple mechanism and type drivers simultaneously. > > But I'd like to know: is it possible to use the > > same network interface configured with different > > ML2 mechanism drivers? you can use the sriov nic agent to manage VF and use either the linux bridge agent or ovs agent to managed the pf on the same host. > > > > I'm planning to use openvswitch and linuxbridge as > > mechanism drivers along with VLAN as type driver. > > Could it be possible to have the following > > configuration for that purpose?: > > > > ml2_conf.ini: > > [ml2] > > mechanism_drivers = openvswitch,linuxbridge > > [ml2_type_vlan] > > network_vlan_ranges = physnet1:40:60,physnet2:60:80 > > > > eth3 is a port of the provider bridge: > > ovs-vsctl add-port br-provider eth3 > > > > openvswitch_agent.ini: > > [ovs] > > bridge_mappings = physnet1:br-provider > > > > linuxbridge_agent.ini: > > [linux_bridge] > > physical_interface_mappings = physnet2:eth3 > > > > I don't think it's will work because You would need to have same interface in > the ovs bridge (br-provider) and use it by linuxbridge. > But TBH this is a bit strange configuration for me. I can imaging different > computes which are using different backends. But why You want to use > linuxbridge and openvswitch agents together on same compute node? ya in this case it wont work although there are cases wehre it would. linux bridge is better fro multi cast heavy workslond so you might want all vxlan traffic to be handeled by linux bridge whith all vlan traffic handeled by ovs. > > > If it's mandatory to use different network interfaces > > any guide or sample reference about implementing > > multiple mechanism drivers would be highly appreciated. its really intened to have different ml2 dirver on differnet hosts with one exception. 
If the vnic types supported by each ml2 driver do not overlap then you can have two different ml2 drivers on the same host, e.g. sriov + something else. It should also be noted that mixed deployments really only work properly for vlan and flat networks. Tunneled networks like vxlan will not form the required mesh to allow communication between hosts with different backends. If you want to run linux bridge and ovs on the same host you could do it by using 2 nics with different physnets, e.g. using eth2 for ovs and eth3 for linux bridge: openvswitch_agent.ini: > > [ovs] > > bridge_mappings = ovs-physnet:br-provider (and add eth2 to br-provider) linuxbridge_agent.ini: > > [linux_bridge] > > physical_interface_mappings = linuxbridge-physnet:eth3 Tunnels will only ever be served by one of the 2 ml2 drivers on any one host, determined by mechanism_drivers = openvswitch,linuxbridge; in this case ovs would be used for all tunnel traffic unless you segregate the vni ranges so that linux bridge and ovs have different ranges. Neutron will still basically allocate networks in ascending order of vnis, so it will fill one range before using the other. As an operator you could specify the vni to use when creating a network, but I don't believe that normal users can do that. In general I would not advise doing this and would just use 1 ml2 driver for vnic_type=normal on a single host, as it makes debugging much much harder and vendors won't generally support this custom config. It's possible to do but you really really need to know how the different backends work. Simple is normally better when it comes to networking. > > > > Thanks in advance, > > Gabriel Gamero > > From smooney at redhat.com Mon Nov 9 14:30:34 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 09 Nov 2020 14:30:34 +0000 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: References: <5d30d8f4a6cc6ea92c004f685d1e4501700cae80.camel@redhat.com> Message-ID: <80aaf9917fb3624b15af3f2119b65c115409d7f1.camel@redhat.com> On Mon, 2020-11-09 at 09:13 -0500, Satish Patel wrote: > Thank Sean, > > I have Intel NIC > > [root at infra-lxb-1 ~]# lspci | grep -i eth > 06:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual > Port Backplane Connection (rev 01) > 06:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual > Port Backplane Connection (rev 01) > > I was thinking if i can create a couple VF out of SR-IOV interface and > on a computer machine i create two bonding interfaces. bond-1 for mgmt > and bond-2 for OVS+DPDK then it will solve my all problem related TOR > switches redundancy. > > I don't think we can add VF as an interface in OVS for DPDK. You can, and if you create the bond on the host first it basically defeats the reason for using dpdk: the kernel bond driver will be a bottleneck for dpdk. If you want to bond dpdk interfaces then you should create that bond in ovs by adding the two vfs and then creating an ovs bond. > On Mon, Nov 9, 2020 at 9:03 AM Sean Mooney wrote: > > > > On Mon, 2020-11-09 at 04:41 +0000, Tony Liu wrote: > > > > Bonding is a SW feature supported by either kernel or DPDK layer. > > > > In case of SRIOV, it's not complicated to enable bonding inside VM. > > > > And it has to be two NICs connecting to two ToRs. > > > > > > > > Depending on DPDK implementation, you might be able to use VF. > > > > Anyways, it's always recommended to have dedicated NIC for SRIOV. > > > for what its worth melonox do support bondign fo VF on the same card > > > i have never used it but bonding on the host is possibel for sriov.
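To make the OVS side of that concrete, here is a rough, untested sketch of bonding two VFs inside OVS with the DPDK datapath. The interface names, VF counts and PCI addresses below are placeholders for illustration, not values taken from this thread:

# create one VF on each PF (example PF names)
echo 1 > /sys/class/net/eth2/device/sriov_numvfs
echo 1 > /sys/class/net/eth3/device/sriov_numvfs
# bind the VFs to a DPDK-compatible driver (example PCI addresses)
dpdk-devbind.py -b vfio-pci 0000:06:10.0 0000:06:10.1
# create a userspace (netdev) bridge and an OVS bond over the two VFs
ovs-vsctl add-br br-dpdk -- set bridge br-dpdk datapath_type=netdev
ovs-vsctl add-bond br-dpdk dpdkbond0 dpdk0 dpdk1 \
  -- set Interface dpdk0 type=dpdk options:dpdk-devargs=0000:06:10.0 \
  -- set Interface dpdk1 type=dpdk options:dpdk-devargs=0000:06:10.1 \
  -- set Port dpdkbond0 bond_mode=balance-tcp lacp=active

The bond_mode and lacp settings obviously need to match whatever the ToR switches are configured for; balance-tcp with LACP is just one common choice.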
> > im not sure if it works with openstack however but i belvie it does. > > > > you will have to reach out to mellonox to determin if it is. > > most other nic vendors do not support bonding and it may limit other > > feature like bandwith based schduling as really you can only list one of the interfaces bandwith > > because you cant contol which interface is activly being used. > > > > > > > > > > > Thanks! > > > Tony > > > > -----Original Message----- > > > > From: Satish Patel > > > > Sent: Sunday, November 8, 2020 6:51 PM > > > > To: Tony Liu > > > > Cc: Laurent Dumont ; OpenStack Discuss > > > > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > Thank you tony, > > > > > > > > We are running openstack cloud with SR-IOV and we are happy with > > > > performance but one big issue, it doesn't support bonding on compute > > > > nodes, we can do bonding inside VM but that is over complicated to do > > > > that level of deployment, without bonding it's always risky if tor > > > > switch dies. that is why i started looking into DPDK but look like i hit > > > > the wall again because my compute node has only 2 NIC we i can't do > > > > bonding while i am connected over same nic. Anyway i will stick with SR- > > > > IOV in that case to get more performance and less complexity. > > > > > > > > On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > > > > > > > > > SRIOV gives you the maximum performance, without any SW features > > > > > (security group, L3 routing, etc.), because it bypasses SW. > > > > > DPDK gives you less performance, with all SW features. > > > > > > > > > > Depend on the use case, max perf and SW features, you will need to > > > > > make a decision. > > > > > > > > > > > > > > > Tony > > > > > > -----Original Message----- > > > > > > From: Laurent Dumont > > > > > > Sent: Sunday, November 8, 2020 9:04 AM > > > > > > To: Satish Patel > > > > > > Cc: OpenStack Discuss > > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > > > > > I have limited hands-on experience with both but they don't serve > > > > > > the same purpose/have the same implementation. You use SRIOV to > > > > > > allow Tenants to access the NIC cards directly and bypass any > > > > > > inherent linux- vr/OVS performance limitations. This is key for NFV > > > > > > workloads which are expecting large amount of PPS + low latency > > > > > > (because they are often just virtualized bare-metal products with > > > > > > one real cloud- readiness/architecture ;) ) - This means that a > > > > > > Tenant with an SRIOV port can use DPDK + access the NIC through the > > > > > > VF which means (in theory) a better performance than OVS+DPDK. > > > > > > > > > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > > > > > provider networks + vxlan based internal-tenant networks). > > > > > > > > > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > > > > > wrote: > > > > > > > > > > > > > > > > > >        Thanks. Just curious then why people directly go for SR-IOV > > > > > >       implementation where they get better performance + they can > > > > use the > > > > > >       same CPU more also. What are the measure advantages or > > > > features > > > > > >       attracting the community to go with DPDK over SR-IOV? 
> > > > > > > > > > > >       On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > > > > > wrote: > > > > > >       > > > > > > >       > As far as I know, DPDK enabled cores will show 100% usage at > > > > > > all times. > > > > > >       > > > > > > >       > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > > > > > > wrote: > > > > > >       >> > > > > > >       >> Folks, > > > > > >       >> > > > > > >       >> Recently i have added come compute nodes in cloud > > > > supporting > > > > > >       >> openvswitch-dpdk for performance. I am seeing all my PMD > > > > > > cpu cores are > > > > > >       >> 100% cpu usage on linux top command. It is normal behavior > > > > > > from first > > > > > >       >> looks. It's very scary to see 400% cpu usage on top. Can > > > > someone > > > > > >       >> confirm before I assume it's normal and what we can do to > > > > > > reduce it if > > > > > >       >> it's too high? > > > > > >       >> > > > > > > > > > > > > > > > > From satish.txt at gmail.com Mon Nov 9 15:11:19 2020 From: satish.txt at gmail.com (Satish Patel) Date: Mon, 9 Nov 2020 10:11:19 -0500 Subject: openvswitch+dpdk 100% cpu usage of ovs-vswitchd In-Reply-To: <80aaf9917fb3624b15af3f2119b65c115409d7f1.camel@redhat.com> References: <5d30d8f4a6cc6ea92c004f685d1e4501700cae80.camel@redhat.com> <80aaf9917fb3624b15af3f2119b65c115409d7f1.camel@redhat.com> Message-ID: That would be great, thank you. Let me try to create VF based bonding inside DPDK and see how it goes. On Mon, Nov 9, 2020 at 9:30 AM Sean Mooney wrote: > > On Mon, 2020-11-09 at 09:13 -0500, Satish Patel wrote: > > Thank Sean, > > > > I have Intel NIC > > > > [root at infra-lxb-1 ~]# lspci | grep -i eth > > 06:00.0 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual > > Port Backplane Connection (rev 01) > > 06:00.1 Ethernet controller: Intel Corporation 82599 10 Gigabit Dual > > Port Backplane Connection (rev 01) > > > > I was thinking if i can create a couple VF out of SR-IOV interface and > > on a computer machine i create two bonding interfaces. bond-1 for mgmt > > and bond-2 for OVS+DPDK then it will solve my all problem related TOR > > switches redundancy. > > > > I don't think we can add VF as an interface in OVS for DPDK. > you can and if you create the bond on the host first it basically defeets teh reason for using dpdk > the kernel bond driver will be a bottelneck for dpdk. if you want to bond dpdk interfaces then you should > create that bond in ovs by adding the two vfs and then creatign an ovs bond. > > > On Mon, Nov 9, 2020 at 9:03 AM Sean Mooney wrote: > > > > > > On Mon, 2020-11-09 at 04:41 +0000, Tony Liu wrote: > > > > Bonding is a SW feature supported by either kernel or DPDK layer. > > > > In case of SRIOV, it's not complicated to enable bonding inside VM. > > > > And it has to be two NICs connecting to two ToRs. > > > > > > > > Depending on DPDK implementation, you might be able to use VF. > > > > Anyways, it's always recommended to have dedicated NIC for SRIOV. > > > for what its worth melonox do support bondign fo VF on the same card > > > i have never used it but bonding on the host is possibel for sriov. > > > im not sure if it works with openstack however but i belvie it does. > > > > > > you will have to reach out to mellonox to determin if it is. > > > most other nic vendors do not support bonding and it may limit other > > > feature like bandwith based schduling as really you can only list one of the interfaces bandwith > > > because you cant contol which interface is activly being used. 
> > > > > > > > > > > > > > > Thanks! > > > > Tony > > > > > -----Original Message----- > > > > > From: Satish Patel > > > > > Sent: Sunday, November 8, 2020 6:51 PM > > > > > To: Tony Liu > > > > > Cc: Laurent Dumont ; OpenStack Discuss > > > > > > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > > > Thank you tony, > > > > > > > > > > We are running openstack cloud with SR-IOV and we are happy with > > > > > performance but one big issue, it doesn't support bonding on compute > > > > > nodes, we can do bonding inside VM but that is over complicated to do > > > > > that level of deployment, without bonding it's always risky if tor > > > > > switch dies. that is why i started looking into DPDK but look like i hit > > > > > the wall again because my compute node has only 2 NIC we i can't do > > > > > bonding while i am connected over same nic. Anyway i will stick with SR- > > > > > IOV in that case to get more performance and less complexity. > > > > > > > > > > On Sun, Nov 8, 2020 at 3:22 PM Tony Liu wrote: > > > > > > > > > > > > SRIOV gives you the maximum performance, without any SW features > > > > > > (security group, L3 routing, etc.), because it bypasses SW. > > > > > > DPDK gives you less performance, with all SW features. > > > > > > > > > > > > Depend on the use case, max perf and SW features, you will need to > > > > > > make a decision. > > > > > > > > > > > > > > > > > > Tony > > > > > > > -----Original Message----- > > > > > > > From: Laurent Dumont > > > > > > > Sent: Sunday, November 8, 2020 9:04 AM > > > > > > > To: Satish Patel > > > > > > > Cc: OpenStack Discuss > > > > > > > Subject: Re: openvswitch+dpdk 100% cpu usage of ovs-vswitchd > > > > > > > > > > > > > > I have limited hands-on experience with both but they don't serve > > > > > > > the same purpose/have the same implementation. You use SRIOV to > > > > > > > allow Tenants to access the NIC cards directly and bypass any > > > > > > > inherent linux- vr/OVS performance limitations. This is key for NFV > > > > > > > workloads which are expecting large amount of PPS + low latency > > > > > > > (because they are often just virtualized bare-metal products with > > > > > > > one real cloud- readiness/architecture ;) ) - This means that a > > > > > > > Tenant with an SRIOV port can use DPDK + access the NIC through the > > > > > > > VF which means (in theory) a better performance than OVS+DPDK. > > > > > > > > > > > > > > You use ovs-dpdk to increase the performance of OVS based flows (so > > > > > > > provider networks + vxlan based internal-tenant networks). > > > > > > > > > > > > > > On Sun, Nov 8, 2020 at 11:13 AM Satish Patel > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > > Thanks. Just curious then why people directly go for SR-IOV > > > > > > > implementation where they get better performance + they can > > > > > use the > > > > > > > same CPU more also. What are the measure advantages or > > > > > features > > > > > > > attracting the community to go with DPDK over SR-IOV? > > > > > > > > > > > > > > On Sun, Nov 8, 2020 at 10:50 AM Laurent Dumont > > > > > > > > wrote: > > > > > > > > > > > > > > > > As far as I know, DPDK enabled cores will show 100% usage at > > > > > > > all times. 
> > > > > > > > > > > > > > > > On Sun, Nov 8, 2020 at 9:39 AM Satish Patel > > > > > > > > wrote: > > > > > > > >> > > > > > > > >> Folks, > > > > > > > >> > > > > > > > >> Recently i have added come compute nodes in cloud > > > > > supporting > > > > > > > >> openvswitch-dpdk for performance. I am seeing all my PMD > > > > > > > cpu cores are > > > > > > > >> 100% cpu usage on linux top command. It is normal behavior > > > > > > > from first > > > > > > > >> looks. It's very scary to see 400% cpu usage on top. Can > > > > > someone > > > > > > > >> confirm before I assume it's normal and what we can do to > > > > > > > reduce it if > > > > > > > >> it's too high? > > > > > > > >> > > > > > > > > > > > > > > > > > > > > > > > From arne.wiebalck at cern.ch Mon Nov 9 15:43:30 2020 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Mon, 9 Nov 2020 16:43:30 +0100 Subject: [baremetal-sig][ironic] Tue Nov 10, 2pm UTC: Baremetal and ML2 networking Message-ID: <2a4befd4-ebb1-4589-572a-8a1da6f4f325@cern.ch> Dear all, The Bare Metal SIG will meet tomorrow, Tue Nov 10 at 2pm UTC, with a "topic-of-the-day"(*) presentation by Julia on Bare Metal + ML2 Networking: Why it is this way (feat. 3d printer calibration cubes!) All details for this meeting and upcoming topics can be found on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Everyone is welcome, don't miss out! Cheers, Arne (*) The Bare Metal SIG has at each meeting a ~10 minute introduction to or presentation of a bare metal related topic. From juliaashleykreger at gmail.com Mon Nov 9 16:14:59 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 9 Nov 2020 08:14:59 -0800 Subject: [oslo][release] Oslo's Transition To Independent In-Reply-To: References: <4a9dc3ba-ef77-fdfd-2729-b7cb6357e3ed@nemebean.com> Message-ID: Top posting because I'm a horrible person. Anyway, I think moving oslo to independent makes lots of sense, at least to me. The key I think is to have the ability to backport and release a patch/revisin/fix release if there is a major issue, but the obligation to backport or even set a foundation to do so is a whole other matter. My overall feeling, for something as stable as the oslo libraries, that cycle-with-intermediacy path may just end up being a frustrating headache, that is if the team feels confident and actively manages their own releases of the various libraries. Also, Under the existing model and policy of the release team, they would basically end up back where they started if the libraries failed to release multiple times, which may not make sense for stable libraries. Anyway, just my $0.02. -Julia On Fri, Nov 6, 2020 at 2:04 AM Herve Beraud wrote: > > First, thanks for your answer. > > Le mer. 4 nov. 2020 à 15:50, Ben Nemec a écrit : >> >> >> >> On 11/4/20 7:31 AM, Herve Beraud wrote: >> > Greetings, >> > >> > Is it time for us to move some parts of Oslo to the independent release >> > model [1]? >> > >> > I think we could consider Oslo world is mostly stable enough to answer >> > yes to the previous question. >> > >> > However, the goal of this email is to trigger the debat and see which >> > deliverables could be transitioned to the independent model. >> > >> > Do we need to expect that major changes will happen within Oslo and who >> > could warrant to continue to follow cycle-with-intermediary model [2]? >> >> I would hesitate to try to predict the future on what will see major >> changes and what won't. 
I would prefer to look at this more from the >> perspective of what Oslo libraries are tied to the OpenStack version. >> For example, I don't think oslo.messaging should be moved to >> independent. It's important that RPC has a sane version to version >> upgrade path, and the easiest way to ensure that is to keep it on the >> regular cycle release schedule. The same goes for other libraries too: >> oslo.versionedobjects, oslo.policy, oslo.service, oslo.db, possibly also >> things like oslo.config and oslo.context (I suspect contexts need to be >> release-specific, but maybe someone from a consuming project can weigh >> in). Even oslo.serialization could have upgrade impacts if it is being >> used to serialize internal data in a service. > > > Agreed, the goal here isn't to try to move everything to the independent model but more to identify which projects could be eligible for this switch. > I strongly agree that the previous list of projects that you quote should stay binded to openstack cycles and should continue to rely on stable branches. > These kinds of projects and also openstack's services are strongly tied to backends, their version, and available APIs and so to openstack's series, so they must remain linked to them. > >> >> That said, many of the others can probably be moved. oslo.i18n and >> oslo.upgradecheck are both pretty stable at this point and not really >> tied to a specific release. As long as we're responsible with any future >> changes to them it should be pretty safe to make them independent. > > > Agreed. > >> >> This does raise a question though: Are the benefits of going independent >> with the release model sufficient to justify splitting the release >> models of the Oslo projects? I assume the motivation is to avoid having >> to do as many backports of bug fixes, but if we're mostly doing this >> with low-volume, stable projects does it gain us that much? > > > Yes, you're right, it could help us to reduce our needed maintenance and so our Oslo's activity in general. > Indeed, about 35 projects are hosted by Oslo and concerning the active maintainers the trend isn't on the rise. > So reducing the number of stable branches to maintain could benefit us, and it could be done by moving projects to an independent model. > >> >> >> I guess I don't have a strong opinion one way or another on this yet, >> and would defer to our release liaisons if they want to go one way or >> other other. Hopefully this provides some things to think about though. > > > Yes you provided interesting observations, thanks. > It could be interesting to get feedback from other cores. 
> >> >> -Ben >> > > > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > From gouthampravi at gmail.com Mon Nov 9 16:39:10 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Mon, 9 Nov 2020 08:39:10 -0800 Subject: Which deployment method for ceph (rook|ceph-ansible|tripleo) In-Reply-To: <3BF6161C-B8D5-46E1-ACA3-3516BE901D1F@me.com> References: <20201103151029.dyixpfoput5rgyae@barron.net> <3BF6161C-B8D5-46E1-ACA3-3516BE901D1F@me.com> Message-ID: On Tue, Nov 3, 2020 at 12:11 PM Oliver Weinmann wrote: > Hi Tom, > > Thanks a lot for your input. Highly appreciated. John really convinced me > to deploy a > Standalone cluster with cephadm and so I started to deploy It. I stumbled > upon a few obstacles, but mostly because commands didn’t behave as expected > or myself missing some important steps like adding 3 dashes in a yml file. > Cephadm looks really promising. I hope that by tomorrow I will have a > simple 3 node cluster. I have to look deeper into it as of how to configure > separate networks for the cluster and the Frontend but it shouldn’t be too > hard. Once the cluster is functioning I will re-deploy my tripleo POC > pointing to the standalone Ceph cluster. Currently I have a zfs nfs Storage > configured that I would like to keep. I have to figure out how to configure > multiple backends in tripleo. > Is the ZFS NFS storage going to be managed by the ZFSOnLinux driver in Manila? Or is this Oracle ZFSSA? Configuring integrated backends [2] in TripleO is fairly straightforward - but neither of the drivers above are integrated into TripleO, you'll have to use *ExtraConfig steps to shove the config into manila.conf: https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/node_config.html Something along these lines: http://paste.openstack.org/show/799832/ > > Cheers, > Oliver > > Von meinem iPhone gesendet > > > Am 03.11.2020 um 16:20 schrieb Tom Barron : > > > > On 01/11/20 08:55 +0100, Oliver Weinmann wrote: > >> Hi, > >> > >> I'm still in the process of preparing a OpenStack POC. I'm 100% sure > that I want to use CEPH and so I purchased the book Mastering CEPH 2nd > edition. First of all, It's a really good book. It basically explains the > various methods how ceph can be deployed and also the components that CEPH > is build of. So I played around a lot with ceph-ansible and rook in my > virtualbox environment. I also started to play with tripleo ceph > deployment, although I haven't had the time yet to sucessfully deploy a > openstack cluster with CEPH. Now I'm wondering, which of these 3 methods > should I use? 
> >> > >> rook > >> > >> ceph-ansible > >> > >> tripleo > >> > >> I also want to use CEPH for NFS and CIFS (Samba) as we have plenty of > VMs running under vSphere that currently consume storage from a ZFS storage > via CIFS and NFS. I don't know if rook can be used for this. I have the > feeling that it is purely meant to be used for kubernetes? And If I would > like to have CIFS and NFS maybe tripleo is not capable of enabling these > features in CEPH? So I would be left with ceph-ansible? > >> > >> Any suggestions are very welcome. > >> > >> Best Regards, > >> > >> Oliver > >> > >> > > > > If your core use case is OpenStack (OpenStack POC with CEPH) then of the > three tools mentioned only tripleo will deploy OpenStack. It can deploy a > Ceph cluster for use by OpenStack as well, or you can deploy Ceph > externally and it will "point" to it from OpenStack. For an OpenStack POC > (and maybe for production too) I'd just have TripleO deploy the Ceph > cluster as well. TripleO will use a Ceph deployment tool -- today, > ceph-ansible; in the future, cephadm -- to do the actual Ceph cluster > deployment but it figures out how to run the Ceph deployment for you. > > > > I don't think any of these tools will install a Samba/CIFs gateway to > CephFS. It's reportedly on the road map for the new cephadm tool. You may > be able to manually retrofit it to your Ceph cluster by consulting upstream > guidance such as [1]. > > > > NFS shares provisioned in OpenStack (Manila file service) *could* be > shared out-of-cloud to VSphere VMs if networking is set up to make them > accessible but the shares would remain OpenStack managed. To use the same > Ceph cluster for OpenStack and vSphere but have separate storage pools > behind them, and independent management, you'd need to deploy the Ceph > cluster independently of OpenStack and vSphere. TripleO could "point" to > this cluster as mentioned previously. > > > > I agree with your assessment that rook is intended to provide PVs for > Kubernetes consumers. You didn't mention Kubernetes as a use case, but as > John Fulton has mentioned it is possible to use rook on Kubernetes in a > mode where it "points" to an external ceph cluster rather than provisioning > its own as well. Or if you run k8s clusters on OpenStack, they can just > consume the OpenStack storage, which will be Ceph storage when that is used > to back OpenStack Cinder and Manila. > > > > -- Tom Barron > > > > [1] > https://www.youtube.com/watch?v=5v8L7FhIyOw&list=PLrBUGiINAakNCnQUosh63LpHbf84vegNu&index=19&t=0s&app=desktop > > > > [2] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/environments -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Mon Nov 9 16:40:39 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 9 Nov 2020 08:40:39 -0800 Subject: [octavia] Timeouts during building of lb? But then successful In-Reply-To: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> References: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> Message-ID: Hi Florian, That is very unusual. It typically takes less than 30 seconds for a load balancer to be provisioned. It definitely sounds like the mysql instance is having trouble. This can also cause longer term issues if the query response time drops to 10 seconds or more(0.001 is normal), which could trigger unnecessary failovers. In Octavia there are layers of "retries" to attempt to handle clouds that are having trouble. 
It sounds like database issues are triggering one or more of these retries. There are a few retries that will be in play for database transactions: MySQL internal retries/timeouts such as lock timeouts (logged on the mysql side) oslo.db includes some automatic retries (typically not logged without configuration file settings) Octavia tenacity and flow retries (Typically logged if the configuration file has Debug = True enabled) This may also be a general network connection issue. The default REST timeouts (used when we connect to the amphora agents) is 600, I'm wondering if the lb-mgmt-network is also having an issue. Please check your health manager log files. If there are database query time issues logged, it would point specifically to a mysql issue. In the past we have seen mysql clustering setups that were bad and caused performance issues (flipping primary instance, lock contention between the instances, etc.). You should not be seeing any log messages that the mysql database went away, that is not normal. Michael On Sun, Nov 8, 2020 at 7:06 AM Florian Rommel wrote: > > Hi, so we have a fully functioning setup of octavia on ussuri and it works nicely, when it competes. > So here is what happens: > From octavia api to octavia worker takes 20 seconds for the job to be initiated. > The loadbalancer gets built quickly and then we get a mysql went away error, the listener gets built and then a member , that works too, then the mysql error comes up with query took too long to execute. > Now this is where it gets weird. This is all within the first 2 - 3 minutes. > At this point it hangs and takes 10 minutes (600 seconds) for the next step to complete and then another 10 minutes and another 10 until it’s completed. > It seems there is a timeout somewhere but even with debug on we do not see what is going on. Does anyone have a mysql 8 running and octavia executing fine? And could send me their redacted octavia or mysql conf files? We didn’t touch them but it seems that there is something off.. > especially since it then completes and works extremely nicely. > I would highly appreciate it , even off list. > Best regards, > //f > > From Akshay.346 at hsc.com Mon Nov 9 09:06:04 2020 From: Akshay.346 at hsc.com (Akshay 346) Date: Mon, 9 Nov 2020 09:06:04 +0000 Subject: Unable to build centos 8.2 qcow2 Message-ID: Hello All, I need to build centos 8.2 image using disk-image-builder. But by setting DIB_RELEASE with 8, it by default uses centos8.1 as base cloud image. How can I use centos 8.2 image? Also I tried setting DIB_CLOUD_IMAGES having my local path to centos 8.2 qcow2 but it appends centos8.1 name at the defined path. i.e /path/to/my/dir/centos8.2/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 Please suggest how may I build centos 8.2 qcow2. Be safe Akshay DISCLAIMER: This electronic message and all of its contents, contains information which is privileged, confidential or otherwise protected from disclosure. The information contained in this electronic mail transmission is intended for use only by the individual or entity to which it is addressed. If you are not the intended recipient or may have received this electronic mail transmission in error, please notify the sender immediately and delete / destroy all copies of this electronic mail transmission without disclosing, copying, distributing, forwarding, printing or retaining any part of it. 
Hughes Systique accepts no responsibility for loss or damage arising from the use of the information transmitted by this email including damage from virus. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Nov 9 21:38:16 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 09 Nov 2020 13:38:16 -0800 Subject: Unable to build centos 8.2 qcow2 In-Reply-To: References: Message-ID: <468e57d7-b3b5-4eff-8f96-76f2df1e83bb@www.fastmail.com> On Mon, Nov 9, 2020, at 1:06 AM, Akshay 346 wrote: > > Hello All, > > > > I need to build centos 8.2 image using disk-image-builder. > > But by setting DIB_RELEASE with 8, it by default uses centos8.1 as base > cloud image. How can I use centos 8.2 image? > > > > Also I tried setting DIB_CLOUD_IMAGES having my local path to centos > 8.2 qcow2 but it appends centos8.1 name at the defined path. > > i.e > /path/to/my/dir/centos8.2/CentOS-8-GenericCloud-8.1.1911-20200113.3.x86_64.qcow2 > > > > Please suggest how may I build centos 8.2 qcow2. Looking at the dib code [0] that does the image selection the other variables at play are $DIB_FLAVOR and $ARCH. If I set DIB_RELEASE=8, DIB_FLAVOR=GenericCloud, and ARCH=x86_64 then running `curl -s https://cloud.centos.org/centos/${DIB_RELEASE}/${ARCH}/images/ | grep -o "CentOS-.[^>]*${DIB_FLAVOR}-.[^>]*.qcow2" | sort -r | head -1` produces "CentOS-8-GenericCloud-8.2.2004-20200611.2.x86_64.qcow2". I suspect that means if I were to run a build it would be based on that image. Can you double check you aren't setting DIB_FLAVOR to another value that may not support 8.2 yet or perhaps another ARCH? Also, if you can share logs from the build via a paste service that would be helpful in debugging. One other thing to consider is that you can use the centos-minimal element instead which will build an image from scratch using yum/dnf. > > > > Be safe > > Akshay [0] https://opendev.org/openstack/diskimage-builder/src/branch/master/diskimage_builder/elements/centos/root.d/10-centos-cloud-image#L50-L56 From florian at datalounges.com Tue Nov 10 08:30:00 2020 From: florian at datalounges.com (Florian Rommel) Date: Tue, 10 Nov 2020 10:30:00 +0200 Subject: [octavia] Timeouts during building of lb? But then successful In-Reply-To: References: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> Message-ID: <4F54784D-2107-4A23-9EFA-C262B69D0365@datalounges.com> Hi Micheal and thank you.. So it really seems that the mysql server ( fairly vanilla, non-clustered) is a problem.. I followed the installation instructions from the official documentation and yes, when I create a LB this is what comes through in the octavia-worker log: 2020-11-10 08:23:18.450 777550 INFO octavia.controller.queue.v1.endpoints [-] Creating member 'a6389dc7-61cb-489c-b69b-a922f0a9d9f2'... 2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines [-] Database connection was found disconnected; reconnecting: oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query’) The server right now runs on localhost (mysql) I changed that to see if the problem persists.. There is no iptables currently running to prevent rate limited etc. We made almost no modifications to the my.cnf and octavia is the only one that has problems.. all other services have no issues at all. Is there some specific settings I should set in the octavia.conf that deal with this? 
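As a point of reference (not a confirmed fix for this problem), the [database] section of octavia.conf takes the standard oslo.db options, so the reconnect and retry behaviour can at least be tuned and made more visible there. A sketch with illustrative values and a placeholder connection string:

[database]
connection = mysql+pymysql://octavia:PASSWORD@127.0.0.1/octavia
# reconnect automatically and retry operations when the connection is lost
use_db_reconnect = True
db_max_retries = 20
db_retry_interval = 1
# retries for the initial connection at service start-up
max_retries = 10
retry_interval = 10
# raise above 0 (up to 100) to log SQL statements for debugging
connection_debug = 0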
Here is the Python trace, which in my opinion is fairly basic…

(Background on this error at: http://sqlalche.me/e/e3q8)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last):
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines self.dialect.do_execute(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines cursor.execute(statement, parameters)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 170, in execute
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines result = self._query(query)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 328, in _query
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines conn.query(q)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 517, in query
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 732, in _read_query_result
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines result.read()
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1075, in read
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines first_packet = self.connection._read_packet()
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 657, in _read_packet
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines packet_header = self._read_bytes(4)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 706, in _read_bytes
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines raise err.OperationalError(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines The above exception was the direct cause of the following exception:
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines Traceback (most recent call last):
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/oslo_db/sqlalchemy/engines.py", line 73, in _connect_ping_listener
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines connection.scalar(select([1]))
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 914, in scalar
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines return self.execute(object_, *multiparams, **params).scalar()
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines return meth(self, multiparams, params)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/sql/elements.py", line 287, in _execute_on_connection
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines return connection._execute_clauseelement(self, multiparams, params)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1095, in _execute_clauseelement
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines ret = self._execute_context(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1249, in _execute_context
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines self._handle_dbapi_exception(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1474, in _handle_dbapi_exception
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines util.raise_from_cause(newraise, exc_info)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines reraise(type(exception), exception, tb=exc_tb, cause=cause)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/util/compat.py", line 152, in reraise
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines raise value.with_traceback(tb)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/base.py", line 1245, in _execute_context
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines self.dialect.do_execute(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/sqlalchemy/engine/default.py", line 581, in do_execute
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines cursor.execute(statement, parameters)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 170, in execute
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines result = self._query(query)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/cursors.py", line 328, in _query
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines conn.query(q)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 517, in query
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines self._affected_rows = self._read_query_result(unbuffered=unbuffered)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 732, in _read_query_result
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines result.read()
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 1075, in read
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines first_packet = self.connection._read_packet()
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 657, in _read_packet
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines packet_header = self._read_bytes(4)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines File "/usr/lib/python3/dist-packages/pymysql/connections.py", line 706, in _read_bytes
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines raise err.OperationalError(
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines [SQL: SELECT 1]
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines (Background on this error at: http://sqlalche.me/e/e3q8)
2020-11-10 08:23:18.458 777550 ERROR oslo_db.sqlalchemy.engines

The other option is to run it all verbose?

Best regards
//F

> On 9. Nov 2020, at 18.40, Michael Johnson wrote:
>
> Hi Florian,
>
> That is very unusual. It typically takes less than 30 seconds for a
> load balancer to be provisioned. It definitely sounds like the mysql
> instance is having trouble. This can also cause longer term issues if
> the query response time drops to 10 seconds or more (0.001 is normal),
> which could trigger unnecessary failovers.
>
> In Octavia there are layers of "retries" to attempt to handle clouds
> that are having trouble. It sounds like database issues are triggering
> one or more of these retries.
> There are a few retries that will be in play for database transactions:
> MySQL internal retries/timeouts such as lock timeouts (logged on the mysql side)
> oslo.db includes some automatic retries (typically not logged without
> configuration file settings)
> Octavia tenacity and flow retries (Typically logged if the
> configuration file has Debug = True enabled)
>
> This may also be a general network connection issue. The default REST
> timeouts (used when we connect to the amphora agents) is 600, I'm
> wondering if the lb-mgmt-network is also having an issue.
>
> Please check your health manager log files. If there are database
> query time issues logged, it would point specifically to a mysql
> issue. In the past we have seen mysql clustering setups that were bad
> and caused performance issues (flipping primary instance, lock
> contention between the instances, etc.). You should not be seeing any
> log messages that the mysql database went away, that is not normal.
>
> Michael
>
> On Sun, Nov 8, 2020 at 7:06 AM Florian Rommel wrote:
>>
>> Hi, so we have a fully functioning setup of octavia on ussuri and it works nicely, when it completes.
>> So here is what happens:
>> From octavia api to octavia worker takes 20 seconds for the job to be initiated.
>> The loadbalancer gets built quickly and then we get a mysql went away error, the listener gets built and then a member, that works too, then the mysql error comes up with query took too long to execute.
>> Now this is where it gets weird. This is all within the first 2 - 3 minutes.
>> At this point it hangs and takes 10 minutes (600 seconds) for the next step to complete and then another 10 minutes and another 10 until it's completed.
>> It seems there is a timeout somewhere but even with debug on we do not see what is going on. Does anyone have a mysql 8 running and octavia executing fine? And could send me their redacted octavia or mysql conf files?
We didn’t touch them but it seems that there is something off.. >> especially since it then completes and works extremely nicely. >> I would highly appreciate it , even off list. >> Best regards, >> //f >> >> > From doug at stackhpc.com Tue Nov 10 09:02:58 2020 From: doug at stackhpc.com (Doug) Date: Tue, 10 Nov 2020 09:02:58 +0000 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: <3246e30d-a77f-d697-2a73-bca70060f30c@stackhpc.com> On 09/11/2020 11:33, Mark Goddard wrote: > Hi, > > I would like to propose adding wu.chunyang to the kolla-core and > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > integration in the Victoria release, and has provided some helpful > reviews. > > Cores - please reply +1/-1 before the end of Friday 13th November. +1, thanks for your efforts! > > Thanks, > Mark > From hberaud at redhat.com Tue Nov 10 09:24:45 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 10 Nov 2020 10:24:45 +0100 Subject: [oslo] New courtesy ping list for Wallaby Message-ID: Hello, As we discussed during our previous meeting, I'll be clearing the current ping list in the next few weeks. This is to prevent courtesy pinging people who are no longer active on the project. If you wish to continue receiving courtesy pings at the start of the Oslo meeting please add yourself to the new list on the agenda template [1]. Note that the new list is above the template, called "Courtesy ping for Wallaby". If you add yourself again to the end of the existing list I'll assume you want to be left on though. Thanks. Hervé [1] https://wiki.openstack.org/wiki/Meetings/Oslo#Agenda_Template -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Tue Nov 10 10:15:24 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 10 Nov 2020 11:15:24 +0100 Subject: [ironic] SPUC times! Message-ID: Hi folks, I nearly forgot about SPUC, can you imagine? :) Anyway, after staring at the doodle for some time (it wasn't easy!), I've picked two time slots: Friday, 10am UTC (in https://bluejeans.com/643711802) Friday, 5pm UTC (in https://bluejeans.com/313987753) Apologies for those who cannot make them! We're starting this Friday (the 13th!). I'm taking a day off, so I won't necessarily be present, but feel free to use the meeting IDs without me as well. SPUC will run till X-mas (maybe we should even have an X-mas special on the 25th?). 
Cheers, Dmitry -- Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, Commercial register: Amtsgericht Muenchen, HRB 153243, Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill -------------- next part -------------- An HTML attachment was scrubbed... URL: From berendt at betacloud-solutions.de Tue Nov 10 11:26:10 2020 From: berendt at betacloud-solutions.de (Christian Berendt) Date: Tue, 10 Nov 2020 12:26:10 +0100 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: <8F0D5876-7991-4691-9545-0B1BEBF2BC36@betacloud-solutions.de> > Am 09.11.2020 um 12:33 schrieb Mark Goddard : > > Cores - please reply +1/-1 before the end of Friday 13th November. +1 From geguileo at redhat.com Tue Nov 10 12:06:06 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Tue, 10 Nov 2020 13:06:06 +0100 Subject: multiple nfs shares with cinder-backup In-Reply-To: References: Message-ID: <20201110120606.ky2t2gerwouolwqn@localhost> On 04/11, MaSch wrote: > Hello all! > > I'm currently using openstack queens with cinder 12.0.10. > I would like to a backend I'm using a NFS-share. > > Now i would like to spit my backups up to two nfs-shares. > I have seen that the cinder.volume.driver can handle multiple nfs shares. > But it seems that cinder.backup.driver can not. > > Is there a way to use two nfs shares for backups? > Or is it maybe possible with a later release of Cinder? > > regards, > MaSch > > Hi, Currently the cinder-backup service has two deployment options: tightly coupled, and non-coupled. The non-coupled version is the usual way to deploy it, as you can run it in Active-Active mode, and it only supports a single backend. The coupled deployment allows you to have multiple backends, one backend per cinder-backup service and host, but each backup backend can only backup volumes from the cinder-volume service deployed on the same host. Cheers, Gorka. From radoslaw.piliszek at gmail.com Tue Nov 10 12:55:12 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 10 Nov 2020 13:55:12 +0100 Subject: multiple nfs shares with cinder-backup In-Reply-To: References: Message-ID: On Wed, Nov 4, 2020 at 11:45 AM MaSch wrote: > > Hello all! > > I'm currently using openstack queens with cinder 12.0.10. > I would like to a backend I'm using a NFS-share. > > Now i would like to spit my backups up to two nfs-shares. > I have seen that the cinder.volume.driver can handle multiple nfs shares. > But it seems that cinder.backup.driver can not. > > Is there a way to use two nfs shares for backups? > Or is it maybe possible with a later release of Cinder? > > regards, > MaSch > > You can use directories where you mount different NFS shares. Cinder Backup allows you to specify the directory to use for backup so this way you can segregate the backups. Just specify --container in your commands. -yoctozepto From vhariria at redhat.com Tue Nov 10 14:42:08 2020 From: vhariria at redhat.com (Vida Haririan) Date: Tue, 10 Nov 2020 09:42:08 -0500 Subject: [Manila] Bug Squash upcoming event on 19th Nov 2020 Message-ID: Hi everyone, Back by popular demand :) We are planning the next Bug Squash session focusing on bugs that require API/DB changes as identified in Victoria Cycle. The event will be held all day on 19th Nov 2020, with a synchronous bug triage call at 15:00 UTC using this Jitsi bridge [1]. A list of bugs targeted for the Wallaby Cycle are available for review [2]. 
Please Feel free to flag an existing bug on the list or add any bugs that you plan on addressing early in the Cycle. Your participation is much appreciated, and we look forward to you joining us for this event. Thanks in advance, Vida [1] https://meetpad.opendev.org/ManilaW-ReleaseBugSquash [2] https://ethercalc.openstack.org/birpr9a6bd0b -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue Nov 10 14:50:32 2020 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 10 Nov 2020 15:50:32 +0100 Subject: [neutron] Bug deputy week 45 (2020/10/02 - 2020/10/08) Message-ID: Hello: This is the weekly summary of Neutron bugs reported: Critical: - https://bugs.launchpad.net/neutron/+bug/1902678 [Fullstack] Wrong output of the cmdline causes tests timeouts Patch: https://review.opendev.org/#/c/761202/ - https://bugs.launchpad.net/neutron/+bug/1902843 [rocky]Rally job is broken in the Neutron stable/rocky branch *Not assigned* - https://bugs.launchpad.net/neutron/+bug/1902844 Error while removing subnet after test *Not assigned* High: - https://bugs.launchpad.net/neutron/+bug/1902775 neutron-rally-task fails on stable/rocky: SyntaxError: invalid syntax Patch: https://review.opendev.org/#/c/761391/ (merged) - https://bugs.launchpad.net/neutron/+bug/1902917 Anti-spoofing bypass using Open vSwitch *Not assigned* - https://bugs.launchpad.net/neutron/+bug/1903696 Security group OVO object don't reloads and make compatible rules objects Medium: - https://bugs.launchpad.net/neutron/+bug/1902888 [OVN] neutron db sync does not sync external_ids for routes Patch: https://review.opendev.org/#/c/761420/ - https://bugs.launchpad.net/neutron/+bug/1902950 [OVN] DNS resolution not forwarded with OVN driver *Not assigned* - https://bugs.launchpad.net/neutron/+bug/1902998 tempest test_create_router_set_gateway_with_fixed_ip often fails with DVR Patch: https://review.opendev.org/#/c/761498/ - https://bugs.launchpad.net/neutron/+bug/1903433 The duplicate route in l3 router namespace results in North-South traffic breaking Patch: https://review.opendev.org/#/c/761829/ - https://bugs.launchpad.net/neutron/+bug/1903689 [stable/ussuri] Functional job fails - AttributeError: module 'neutron_lib.constants' has no attribute 'DEVICE_OWNER_DISTRIBUTED' *Not assigned* Low: - https://bugs.launchpad.net/neutron/+bug/1903008 Create network failed during functional test *Not assigned* Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Nov 10 15:44:16 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 10 Nov 2020 07:44:16 -0800 Subject: [ironic] SPUC times! In-Reply-To: References: Message-ID: On Tue, Nov 10, 2020 at 2:23 AM Dmitry Tantsur wrote: > > Hi folks, > > I nearly forgot about SPUC, can you imagine? :) > > Anyway, after staring at the doodle for some time (it wasn't easy!), I've picked two time slots: > Friday, 10am UTC (in https://bluejeans.com/643711802) > Friday, 5pm UTC (in https://bluejeans.com/313987753) > > Apologies for those who cannot make them! > > We're starting this Friday (the 13th!). I'm taking a day off, so I won't necessarily be present, but feel free to use the meeting IDs without me as well. SPUC will run till X-mas (maybe we should even have an X-mas special on the 25th?). I _really_ like this idea. Ultimately we are a huge giant family, and there is no reason we cannot bond on what is a special day for some. 
> > Cheers, > Dmitry > > -- > Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn, > Commercial register: Amtsgericht Muenchen, HRB 153243, > Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill From ildiko.vancsa at gmail.com Tue Nov 10 16:46:45 2020 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 10 Nov 2020 17:46:45 +0100 Subject: [edge] Open Geospatial Consortium presentation on the next Edge WG call Message-ID: <25BF983B-A24A-4F66-8F52-E106DFA9FBA4@gmail.com> Hi, I’m reaching out to draw your attention to an upcoming presentation on the Edge Computing Group weekly call next Monday. We will have a presentation from the Open Geospatial Consortium where you can learn about their use cases relevant to edge computing and more specifically about the importance of location in that context. In the remaining time on the call we will discuss some of the requirements of the use cases such as location-awareness of infrastructure services and look into collaboration opportunities. The call is on __Monday (November 16) at 1400 UTC__. If you are interested in joining the call and the discussions you can find the dial-in information here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Next_meeting:_Monday_.28November_16.29.2C_6am_PST_.2F_1400_UTC Please let me know if you have any questions. Thanks, Ildikó From tamm.maari at gmail.com Tue Nov 10 17:19:23 2020 From: tamm.maari at gmail.com (Maari Tamm) Date: Tue, 10 Nov 2020 19:19:23 +0200 Subject: [manila][OSC] Priorities for implementing the OSC share commands Message-ID: <6EB02241-97BE-42D3-837F-5206956059B9@gmail.com> Hi Everyone! Slowly but steady we are implementing the OpenStack Client for Manila. For the upcoming release we are aiming to implement the following commands, in no particular order: Share Replicas: openstack share replica create / delete / list / show / set / promote / resync Share Replica Export Locations: openstack share replica export location list / show Share Manage/Unmanage: openstack share adopt / abandon Services: openstack share service set / list Share Pools: openstack share pool list User Messages: openstack share message delete / list / show Availability Zones (add support for manila): openstack availability zone list Limits (add support for manila): openstack limits show —absolute /—rate Share Export Locations (ready for reviews: https://review.opendev.org/#/c/761661/ ): openstack share export location list / show Share Snapshots patch II (ready for reviews: https://review.opendev.org/#/c/746458/ ) : openstack share snapshot adopt / abandon openstack share snapshot export location show / list openstack share snapshot access create / delete / list To make this happen we could really use help reviewing the patches, once proposed. As linked above, two patches are currently ready for testing/reviews. We’d also very much appreciate getting some feedback on the list itself. Are we missing something important that would be useful for operators and should be implemented this cycle? If anyone has some commands in mind, please do let us know! Mohammed Naser and Belmiro Moreira, would love to get your thoughts on this as well! :) Thanks, Maari Tamm (IRC: maaritamm) -------------- next part -------------- An HTML attachment was scrubbed... 
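For anyone who would like to exercise the two patches that are marked ready for reviews above, the intended usage should look roughly like the following. This is only a sketch: the command names are the ones listed in the plan, the final argument syntax is whatever the patches merge with, and <share>, <snapshot> and <export-location-id> are placeholders:

  openstack share export location list <share>
  openstack share export location show <share> <export-location-id>
  openstack share snapshot abandon <snapshot>

The --help output of the patched client remains the authoritative reference while the reviews are in flight.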
URL: From mkopec at redhat.com Tue Nov 10 18:28:00 2020 From: mkopec at redhat.com (Martin Kopec) Date: Tue, 10 Nov 2020 19:28:00 +0100 Subject: [openstack][qa][tempest] tempest cleanup ansible role Message-ID: Hello all, we have implemented a tempest-cleanup ansible role [1] which runs the 'tempest cleanup' tool [2]. The role might be beneficial for you in case you run tempest tests multiple times within the same environment and you wanna make sure no leftover resources are present after the first run. The other great use case is to verify that your tempest tests are not leaving any resources behind after the tests are completed. 'tempest cleanup' can take care of the services defined here [3] - a service which doesn't have a class definition there cannot be discovered by the tool. If the resource you would wanna have cleaned is not in [3] help us please improve the tool and tell us about it or add it there yourself. [1] https://opendev.org/openstack/tempest/src/branch/master/roles/tempest-cleanup [2] https://docs.openstack.org/tempest/latest/cleanup.html [3] https://opendev.org/openstack/tempest/src/branch/master/tempest/cmd/cleanup_service.py Regards, -- Martin Kopec Quality Engineer Red Hat EMEA -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello at dincercelik.com Tue Nov 10 18:43:20 2020 From: hello at dincercelik.com (Dincer Celik) Date: Tue, 10 Nov 2020 21:43:20 +0300 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: References: Message-ID: <626B9F93-BC2A-4D6B-A43A-99BAC93842B0@dincercelik.com> +1 > On 9 Nov 2020, at 14:33, Mark Goddard wrote: > > Hi, > > I would like to propose adding wu.chunyang to the kolla-core and > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > integration in the Victoria release, and has provided some helpful > reviews. > > Cores - please reply +1/-1 before the end of Friday 13th November. > > Thanks, > Mark > From lbragstad at gmail.com Tue Nov 10 18:50:13 2020 From: lbragstad at gmail.com (Lance Bragstad) Date: Tue, 10 Nov 2020 12:50:13 -0600 Subject: [qa][policy] Testing strategy discussion for new RBAC (new scope and default role combinations) In-Reply-To: <1758f922fe5.b2f4514a220341.5965440098876721164@ghanshyammann.com> References: <1758f922fe5.b2f4514a220341.5965440098876721164@ghanshyammann.com> Message-ID: Hey folks, We held the meeting today and we started capturing some notes in an etherpad [0]. Most of today's discussion was focused on introducing context. We didn't come to a definitive answer as to which testing strategy is best. We're going to have another meeting on Monday, November 16th (next Monday) at 16:00 UTC. We're also going to use the same conferencing room Ghanshyam sent out in the initial email [1]. Feel free to add thoughts to the etherpad [0] before hand. [0] https://etherpad.opendev.org/p/wallaby-secure-rbac-testing [1] https://bluejeans.com/749897818 On Tue, Nov 3, 2020 at 1:25 PM Ghanshyam Mann wrote: > Hello Everyone, > > To continue the discussion on how projects should test the new RBAC (scope > as well as the new defaults combination), > we will host a call on bluejeans on Tuesday 10th Nov 16.00 UTC. > > Lance has set up the room for that: https://bluejeans.com/749897818 > > Feel free to join if you are interested in that or would like to help with > this effort. > > -gmann > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Nov 10 19:15:55 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Nov 2020 13:15:55 -0600 Subject: [tc][all][searchlight ] Retiring the Searchlight project Message-ID: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> Hello Everyone, As you know, Searchlight is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Searchlight repos except few gate fixes or community goal commits[3]. Based on all these checks and no maintainer for Searchlight, TC decided to drop this project from OpenStack governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Searchlight will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. With that thanks to Searchlight contributors and PTLs for maintaining this project. [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=searchlight-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -gmann From gmann at ghanshyammann.com Tue Nov 10 19:16:08 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Nov 2020 13:16:08 -0600 Subject: [tc][all][qinling] Retiring the Qinling project Message-ID: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> Hello Everyone, As you know, Qinling is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Qinling repos except few gate fixes or community goal commits[3]. Based on all these checks and no maintainer for Qinling, TC decided to drop this project from OpenStack governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Qinling will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. 
If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. With that thanks to Qinling contributors and PTLs (especially lxkong ) for maintaining this project. [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=qinling-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -gmann From henry at thebonaths.com Tue Nov 10 20:23:50 2020 From: henry at thebonaths.com (Henry Bonath) Date: Tue, 10 Nov 2020 15:23:50 -0500 Subject: CloudKitty deployment on Train and/or Ussuri In-Reply-To: References: Message-ID: Hi Jocelyn, I am also an Openstack-Ansible user, who is in the same market of trying to find Rating-as-A-Service for my Train cloud. I stumbled upon your message here after running into the same exact issue that you are having and searching for answers, and am glad to hear that I am not alone. (Don't you hate when you only find more questions??) I am testing Cloudkitty v13.0.0 right now and running into the same issue as you are as it relates to 'cloudkitty-api'. I also found that the templated config you referenced needs to be updated as well. I'm going to continue to soldier through this and try to get a working configuration, were you able to make any progress on the issue with cloudkitty-api? Please let me know if we can work together to get this working, I can submit any patches to the openstack-ansible-os_cloudkitty repo. -Henry On Fri, Oct 23, 2020 at 2:38 AM Thode Jocelyn wrote: > > Hi, > > > > I was looking at the possibilities to have Rating-as-A-Service in our Openstack (currently train but planning to move to Ussuri) and I stumbled upon CloudKitty. I’m currently facing multiple issues with cloudkitty itself and with openstack-ansible. (We are using a containerized deployment with LXC) > > > > I noticed that openstack-ansible has no playbook yet to install a service like CloudKitty even though starting from Ussuri it was added in the ansible-role-requirements.yml. I have created a small patch to be able to install CloudKitty via openstack-ansible and if this is of interest I could potentially submit a PR with this. > The os_cloudkitty role seems to kinda work and only with a relatively old version of CloudKitty. For example, here : https://opendev.org/openstack/openstack-ansible-os_cloudkitty/src/branch/stable/train/templates/cloudkitty.conf.j2#L38 since CloudKitty v8.0.0 the section should be named “[fetcher_keystone]”, but I found no information as to which CloudKitty version should be used with this role. Does someone have any recommendation as to which version should be used and if there is any interest in improving the role to support more recent versions of CloudKitty? > Even when tweaking the configuration and installing v7.0.0 of Cloudkitty it only fixes cloudkitty-processor, I was never able to make cloudkitty-api work I always get an error like “Fatal Python error: Py_Initialize: Unable to get the locale encoding ModuleNotFoundError: No module named 'encodings'”. Any input on this would be greatly appreciated. 
> > > > > Cheers > > Jocelyn From zigo at debian.org Tue Nov 10 21:12:33 2020 From: zigo at debian.org (Thomas Goirand) Date: Tue, 10 Nov 2020 22:12:33 +0100 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> Message-ID: <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> On 11/10/20 8:15 PM, Ghanshyam Mann wrote: > Hello Everyone, > > As you know, Searchlight is a leaderless project for the Wallaby cycle, which means there is no PTL > candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria > which triggers TC to start checking the health, maintainers of the project for dropping the project > from OpenStack Governance[1]. > > TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and > what activities are done in the Victoria development cycle. It seems no functional changes in Searchlight > repos except few gate fixes or community goal commits[3]. > > Based on all these checks and no maintainer for Searchlight, TC decided to drop this project from OpenStack > governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process > is in the project guide docs [5]. > > If your organization product/customer use/rely on this project then this is the right time to step forward to > maintain it otherwise from the Wallaby cycle, Searchlight will move out of OpenStack governance by keeping their > repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. > If someone from old or new maintainers shows interest to continue its development then it can be re-added > to OpenStack governance. > > With that thanks to Searchlight contributors and PTLs for maintaining this project. > > [1] https://governance.openstack.org/tc/reference/dropping-projects.html > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > [3] https://www.stackalytics.com/?release=victoria&module=searchlight-group&metric=commits > [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository > > -gmann Hi, When some projects are being removed, does this mean that there's going to be a community effort to remove the dependency on the clients? IMO, it really should be done. I'm thinking about: - congressclient - qinlinclient - searchlightclient - karborclient All of the above only reached Debian because they were dependencies of other projects... Cheers, Thomas Goirand From kennelson11 at gmail.com Tue Nov 10 21:18:13 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 10 Nov 2020 13:18:13 -0800 Subject: [tc][all][karbor] Retiring the Qinling project Message-ID: Hello Everyone, As you know, Karbor is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Karbor repos except few gate fixes or community goal commits[3]. 
Based on all these checks and no maintainer for Karbor, TC decided to drop this project from OpenStack governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Karbor will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. With that thanks to Karbor contributors and PTLs (especially Pengju Jiao) for maintaining this project. -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=karbor-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Nov 10 21:20:31 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 10 Nov 2020 13:20:31 -0800 Subject: [tc][all][karbor] Retiring the Qinling project In-Reply-To: References: Message-ID: There's always something I miss... sorry. The subject line should say Karbor, nor Qinling. Starting a new thread. Please disregard this one. -Kendall (diablo_rojo) On Tue, Nov 10, 2020 at 1:18 PM Kendall Nelson wrote: > Hello Everyone, > > As you know, Karbor is a leaderless project for the Wallaby cycle, which means there is no PTL > candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria > which triggers TC to start checking the health, maintainers of the project for dropping the project > from OpenStack Governance[1]. > > TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what > activities are done in the Victoria development cycle. It seems no functional changes in Karbor repos > except few gate fixes or community goal commits[3]. > > Based on all these checks and no maintainer for Karbor, TC decided to drop this project from OpenStack > governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process > is in the project guide docs [5]. > > If your organization product/customer use/rely on this project then this is the right time to step forward to > maintain it otherwise from the Wallaby cycle, Karbor will move out of OpenStack governance by keeping > their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. > If someone from old or new maintainers shows interest to continue its development then it can be re-added > to OpenStack governance. > > With that thanks to Karbor contributors and PTLs (especially Pengju Jiao) for maintaining this project. 
> > -Kendall Nelson (diablo_rojo) > > [1] https://governance.openstack.org/tc/reference/dropping-projects.html > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > [3] https://www.stackalytics.com/?release=victoria&module=karbor-group&metric=commits > [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Nov 10 21:21:18 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 10 Nov 2020 13:21:18 -0800 Subject: [tc][all][karbor] Retiring the Karbor project Message-ID: Hello Everyone, As you know, Karbor is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Karbor repos except few gate fixes or community goal commits[3]. Based on all these checks and no maintainer for Karbor, TC decided to drop this project from OpenStack governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Karbor will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. With that thanks to Karbor contributors and PTLs (especially Pengju Jiao) for maintaining this project. -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=karbor-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Nov 10 22:37:25 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 10 Nov 2020 16:37:25 -0600 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> Message-ID: <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> ---- On Tue, 10 Nov 2020 15:12:33 -0600 Thomas Goirand wrote ---- > On 11/10/20 8:15 PM, Ghanshyam Mann wrote: > > Hello Everyone, > > > > As you know, Searchlight is a leaderless project for the Wallaby cycle, which means there is no PTL > > candidate to lead it in the Wallaby cycle. 
'No PTL' and no liaisons for DPL model is one of the criteria > > which triggers TC to start checking the health, maintainers of the project for dropping the project > > from OpenStack Governance[1]. > > > > TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and > > what activities are done in the Victoria development cycle. It seems no functional changes in Searchlight > > repos except few gate fixes or community goal commits[3]. > > > > Based on all these checks and no maintainer for Searchlight, TC decided to drop this project from OpenStack > > governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process > > is in the project guide docs [5]. > > > > If your organization product/customer use/rely on this project then this is the right time to step forward to > > maintain it otherwise from the Wallaby cycle, Searchlight will move out of OpenStack governance by keeping their > > repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. > > If someone from old or new maintainers shows interest to continue its development then it can be re-added > > to OpenStack governance. > > > > With that thanks to Searchlight contributors and PTLs for maintaining this project. > > > > [1] https://governance.openstack.org/tc/reference/dropping-projects.html > > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > > [3] https://www.stackalytics.com/?release=victoria&module=searchlight-group&metric=commits > > [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > > [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository > > > > -gmann > > Hi, > > When some projects are being removed, does this mean that there's going > to be a community effort to remove the dependency on the clients? IMO, > it really should be done. I'm thinking about: > Yes, as part of the retirement process all deliverables under the project needs to be removed and before removal we do: 1. Remove all dependencies. 2. Refactor/remove the gate job dependency also. 3. Remove the code from the retiring repo. > - congressclient This is already retired in Victoria cycle[1]. > - qinlinclient This client is also on the list of retirement in Wallaby so let's see if no maintainer then we will retire this too[2]. > - searchlightclient This is one of the deliverables under the Searchlight project so we will be retiring this repo alsp. > - karborclient Ditto[3]. [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-April/014292.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018638.html [3] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018643.html -gmann > > All of the above only reached Debian because they were dependencies of > other projects... > > Cheers, > > Thomas Goirand > > From Arkady.Kanevsky at dell.com Tue Nov 10 22:59:20 2020 From: Arkady.Kanevsky at dell.com (Kanevsky, Arkady) Date: Tue, 10 Nov 2020 22:59:20 +0000 Subject: [tc][all][karbor] Retiring the Karbor project In-Reply-To: References: Message-ID: Let’s make sure we update https://www.openstack.org/software/ with the change. The same it true for all retiring projects. 
Thanks, Arkady From: Kendall Nelson Sent: Tuesday, November 10, 2020 3:21 PM To: OpenStack Discuss Subject: [tc][all][karbor] Retiring the Karbor project [EXTERNAL EMAIL] Hello Everyone, As you know, Karbor is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Karbor repos except few gate fixes or community goal commits[3]. Based on all these checks and no maintainer for Karbor, TC decided to drop this project from OpenStack governance in the Wallaby cycle. Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Karbor will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. With that thanks to Karbor contributors and PTLs (especially Pengju Jiao) for maintaining this project. -Kendall Nelson (diablo_rojo) [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=karbor-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Tue Nov 10 23:23:23 2020 From: helena at openstack.org (helena at openstack.org) Date: Tue, 10 Nov 2020 18:23:23 -0500 (EST) Subject: Victoria Release Community Meeting Message-ID: <1605050603.25113268@apps.rackspace.com> Hello, The community meeting for the Victoria release will be this Thursday, November 12th at 16:00 UTC. The meeting will be held via Zoom. We will show pre-recorded videos from our PTLs followed by live Q&A sessions. We will have updates from Masakari, Telemetry, Neutron, and Cinder. Zoom Info: [ https://zoom.us/j/2146866821?pwd=aDlpOXd5MXB3cExZWHlROEJURzh0QT09 ]( https://zoom.us/j/2146866821?pwd=aDlpOXd5MXB3cExZWHlROEJURzh0QT09 ) Meeting ID: 214 686 6821 Passcode: Victoria Find your local number: [ https://zoom.us/u/8BRrV ]( https://zoom.us/u/8BRrV ) Reminder to PTLs: We would love for you to participate and give an update on your project. I have attached a template for the slides that you may use if you wish. The video should be around 10 minutes. Please send in your video and slide for the community meeting ASAP. I have only received content for the listed above projects. If you are unable to make the meeting at the designated time we can show your video and forward any questions for you. Let me know if you have any other questions! Thank you for your participation, Helena -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From cwy5056 at gmail.com  Wed Nov 11 03:21:09 2020
From: cwy5056 at gmail.com (fisheater chang)
Date: Wed, 11 Nov 2020 13:21:09 +1000
Subject: [cinder][ceilometer][heat][instances] Installation Problems
In-Reply-To:
References:
Message-ID:

To whom it may concern,

I am new to OpenStack. I got several errors when I installed the services following the installation guide.

My OpenStack version: Stein, on Ubuntu 18.04.

1. Cinder-api is in /usr/bin/cinder-api, but when I type service cinder-api status, it shows Unit cinder-api.service could not be found.
2. The Ceilometer and Heat tabs didn't show in the Dashboard.
3. I was trying to launch an instance, but I got the status error and I tried to delete it, but the instances can not be deleted. And I used nova force-delete instance-id, and the error message is ERROR (ClientException): Unexpected API Error. Please report this at http://bugs.launchpad.net/nova/ and attach the Nova API log if possible. (HTTP 500) (Request-ID: req-e6c0d966-c7ec-4f20-8e9f-c788018a8d81)

I explored the OpenStack installation myself, just wondering if there is any online hands-on training or course that I can enroll in? Thanks.

Kind regards,
Walsh
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pbasaras at gmail.com  Wed Nov 11 07:30:40 2020
From: pbasaras at gmail.com (Pavlos Basaras)
Date: Wed, 11 Nov 2020 09:30:40 +0200
Subject: Cannot deploly an instance at a specific host --(no such host found) [nova]
Message-ID:

Dear community,

I am new to OpenStack.

I have deployed devstack (on a virtual machine), and I can successfully deploy instances at the host where the controller is installed.

I followed the instructions from https://docs.openstack.org/nova/queens/install/compute-install-ubuntu.html to add a new compute host, in order to be able to deploy VMs at the PC.

Here is the output:

openstack compute service list --service nova-compute
+----+--------------+-------------+------+---------+-------+----------------------------+
| ID | Binary       | Host        | Zone | Status  | State | Updated At                 |
+----+--------------+-------------+------+---------+-------+----------------------------+
| 3  | nova-compute | openstack   | nova | enabled | up    | 2020-11-11T07:10:41.000000 |
| 5  | nova-compute | computenode | nova | enabled | up    | 2020-11-11T07:10:42.000000 |
+----+--------------+-------------+------+---------+-------+----------------------------+

"computenode" is the new device that I added.

When I try to deploy an instance from the CLI:

openstack server create --flavor m1.tiny --image cirros034 --nic net-id=internal --security-group c8b06902-6664-4776-a8e9-0735ae251d34 --availability-zone nova:computenode mym --debug

the reply I see from Horizon, since there is an error in deploying the instance, is:

No valid host was found. No such host - host: computenode node: None

any directions?

all the best,
Pavlos
-------------- next part --------------
An HTML attachment was scrubbed...
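A quick way to narrow down a "No valid host was found" of this kind, independent of this particular thread, is to confirm that the new node is actually known to the API database and to the scheduler, for example with the standard commands:

  openstack compute service list --service nova-compute
  openstack hypervisor list
  nova-manage cell_v2 list_hosts

If nova-compute reports as "up" but the host never shows up as a hypervisor or as a cell host, the node has not been mapped into a cell yet, which is what the follow-up replies check with nova-manage cell_v2 discover_hosts.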
Regards, Eugen Zitat von Pavlos Basaras : > Dear community, > > i am new to openstack. > > I have deployed devstack (on a virtual machine), and I can > successfully deploy instances at the host where the controller is installed. > > I followed the instructions from > https://docs.openstack.org/nova/queens/install/compute-install-ubuntu.html > to add a new compute host, in order to be able to deploy VMs at the PC. > > Here is the output: > > openstack compute service list --service nova-compute > +----+--------------+-----------+------+---------+-------+----------------------------+ > | ID | Binary | Host | Zone | Status | State | Updated At > | > +----+--------------+-----------+------+---------+-------+----------------------------+ > | 3 | nova-compute | openstack | nova | enabled | up | > 2020-11-11T07:10:41.000000 | > | 5 | nova-compute | computenode | nova | enabled | up | > 2020-11-11T07:10:42.000000 | > +----+--------------+-----------+------+---------+-------+----------------------------+ > > "computenode" is the new device that i added. > > When i try to deploy an instance from cli: > openstack server create --flavor m1.tiny --image cirros034 --nic > net-id=internal --security-group c8b06902-6664-4776-a8e9-0735ae251d34 > --availability-zone nova:computenode mym--debug > > the reply i see from horizon is since there is an error in deploying the > instance > No valid host was found. No such host - host: computenode node: None > > any directions? > > all the best, > Pavlos From zhang.lei.fly+os-discuss at gmail.com Wed Nov 11 09:01:54 2020 From: zhang.lei.fly+os-discuss at gmail.com (Jeffrey Zhang) Date: Wed, 11 Nov 2020 17:01:54 +0800 Subject: [kolla] Proposing wu.chunyang for Kolla core In-Reply-To: <626B9F93-BC2A-4D6B-A43A-99BAC93842B0@dincercelik.com> References: <626B9F93-BC2A-4D6B-A43A-99BAC93842B0@dincercelik.com> Message-ID: +1 On Wed, Nov 11, 2020 at 2:46 AM Dincer Celik wrote: > +1 > > > On 9 Nov 2020, at 14:33, Mark Goddard wrote: > > > > Hi, > > > > I would like to propose adding wu.chunyang to the kolla-core and > > kolla-ansible-core groups. wu.chunyang did some great work on Octavia > > integration in the Victoria release, and has provided some helpful > > reviews. > > > > Cores - please reply +1/-1 before the end of Friday 13th November. > > > > Thanks, > > Mark > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Nov 11 11:14:19 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 11 Nov 2020 12:14:19 +0100 Subject: Victoria Release Community Meeting In-Reply-To: <1605050603.25113268@apps.rackspace.com> References: <1605050603.25113268@apps.rackspace.com> Message-ID: <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> On 11/11/20 12:23 AM, helena at openstack.org wrote: > The meeting will be held via Zoom. Could we *PLEASE* stop the Zoom non-sense? Zoom is: - known to have a poor security record - imposes the install of the desktop non-free app (yes I know, in some cases, it is supposed to work without it, but so far it didn't work for me) - controlled by a 3rd party we cannot trust It's not as if we had no alternatives. Jitsi works perfectly and was used successfully for the whole of debconf, with voctomix and stuff, so viewers can read a normal live video stream... If the foundation doesn't know how to do it, I can put people in touch with the Debian video team. I'm sure they will be helpful. 
Cheers,

Thomas Goirand (zigo)

From Aija.Jaunteva at dell.com  Wed Nov 11 11:22:02 2020
From: Aija.Jaunteva at dell.com (Jaunteva, Aija)
Date: Wed, 11 Nov 2020 11:22:02 +0000
Subject: [ironic] Configuration mold follow-up
Message-ID:

Hi,

thank you for the responses to the poll. The meeting to discuss outstanding questions will be held on Thu, Nov 19, 2020 15:00 UTC. For details see [1].

Regards,
Aija

[1] https://etherpad.opendev.org/p/ironic-configuration-molds
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From skaplons at redhat.com  Wed Nov 11 12:24:25 2020
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Wed, 11 Nov 2020 13:24:25 +0100
Subject: Victoria Release Community Meeting
In-Reply-To: <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org>
References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org>
Message-ID: <10079933.t8I2mb3pXR@p1>

Hi,

On Wednesday, 11 November 2020 12:14:19 CET, Thomas Goirand wrote:
> On 11/11/20 12:23 AM, helena at openstack.org wrote:
> > The meeting will be held via Zoom.
>
> Could we *PLEASE* stop the Zoom non-sense?
>
> Zoom is:
> - known to have a poor security record
> - imposes the install of the desktop non-free app (yes I know, in some
> cases, it is supposed to work without it, but so far it didn't work for me)
> - controlled by a 3rd party we cannot trust
>
> It's not as if we had no alternatives. Jitsi works perfectly and was
> used successfully for the whole of debconf, with voctomix and stuff, so
> viewers can read a normal live video stream...
>
> If the foundation doesn't know how to do it, I can put people in touch
> with the Debian video team. I'm sure they will be helpful.

Actually the foundation has a Jitsi server too - see https://meetpad.opendev.org/
In the Neutron team we were using it almost without problems during the last PTG. But the problem we had with it was that it didn't work for people from China AFAIK, so we had to switch to Zoom finally.
Also I think that other teams had some scale issues with meetpad. But here I may be wrong and it could have been a problem at the PTG in June but not now, I don't really know for sure about it.

>
> Cheers,
>
> Thomas Goirand (zigo)

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From pbasaras at gmail.com  Wed Nov 11 12:57:44 2020
From: pbasaras at gmail.com (Pavlos Basaras)
Date: Wed, 11 Nov 2020 14:57:44 +0200
Subject: Cannot deploly an instance at a specific host --(no such host found) [nova]
Message-ID:

Hello,

thanks very much for your prompt reply.

Regarding the first command "nova-manage cell_v2 list_hosts", the output is the following (openstack is the host of the controller). I don't see any other node here, even when I execute the discover_hosts command:

+-----------+--------------------------------------+-----------+
| Cell Name | Cell UUID                            | Hostname  |
+-----------+--------------------------------------+-----------+
| cell1     | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack |
+-----------+--------------------------------------+-----------+

Also this is the output from my controller when I use the command: sudo -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not sure if this helps)

Found 2 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 1522c22f-64d4-4882-8ae8-ed0f9407407c
Found 0 unmapped computes in cell: 1522c22f-64d4-4882-8ae8-ed0f9407407c

any thoughts?

best,
Pavlos.
-------------- next part --------------
An HTML attachment was scrubbed...
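One additional check that can help at this point (standard nova-manage commands, not something quoted from this thread) is to list the cells themselves, so that a stray cell or a compute node registered against the wrong database becomes visible:

  nova-manage cell_v2 list_cells
  nova-manage cell_v2 list_cells --verbose

By default this prints the cell names, UUIDs and masked transport/database URLs; --verbose shows the full connection details, which makes it easier to see whether the compute node wrote its record somewhere other than the cell database the controller queries.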
URL: From eblock at nde.ag Wed Nov 11 13:20:22 2020 From: eblock at nde.ag (Eugen Block) Date: Wed, 11 Nov 2020 13:20:22 +0000 Subject: Cannot deploly an instance at a specific host --(no such host found) [nova] In-Reply-To: Message-ID: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> Hm, indeed rather strange to me. Do you see anything in the nova-scheduler.log? If you activated the discover_hosts_in_cells_interval = 300 it should query for new hosts every 5 minutes. Zitat von Pavlos Basaras : > Hello, > > thanks very much for your prompt reply. > > > regarding the first command "nova-manage cell_v2 list_hosts" the output is > the following (openstack is the host of the controller). I dont see any > other node here, even when i execute the discover_hosts command > > +-----------+--------------------------------------+-----------+ > | Cell Name | Cell UUID | Hostname | > +-----------+--------------------------------------+-----------+ > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | > +-----------+--------------------------------------+-----------+ > > > Also this is the output from my controller when i use the command: sudo -s > /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not sure > if this helps) > Found 2 cell mappings. > Skipping cell0 since it does not contain hosts. > Getting computes from cell 'cell1': 1522c22f-64d4-4882-8ae8-ed0f9407407c > Found 0 unmapped computes in cell: 1522c22f-64d4-4882-8ae8-ed0f9407407c > > any thoughts? > > > best, > Pavlos. From pbasaras at gmail.com Wed Nov 11 13:37:14 2020 From: pbasaras at gmail.com (Pavlos Basaras) Date: Wed, 11 Nov 2020 15:37:14 +0200 Subject: Cannot deploly an instance at a specific host --(no such host found) [nova] In-Reply-To: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> References: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> Message-ID: Hello, yes i have this discover_hosts_in_cells_interval = 300 interestingly when i issued a combination of map_cell_and_hosts and update_cell the output from: nova-manage cell_v2 list_hosts +-----------+--------------------------------------+-------------+ | Cell Name | Cell UUID | Hostname | +-----------+--------------------------------------+-------------+ | None | 1a0fde85-8906-46fb-b721-01a28c978439 | computenode | | None | 1a0fde85-8906-46fb-b721-01a28c978439 | nrUE | | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | +-----------+--------------------------------------+-------------+ when the new compute nodes seem to not have a cell mapped best, P. On Wed, Nov 11, 2020 at 3:26 PM Eugen Block wrote: > Hm, > > indeed rather strange to me. > > Do you see anything in the nova-scheduler.log? If you activated the > > discover_hosts_in_cells_interval = 300 > > it should query for new hosts every 5 minutes. > > > > Zitat von Pavlos Basaras : > > > Hello, > > > > thanks very much for your prompt reply. > > > > > > regarding the first command "nova-manage cell_v2 list_hosts" the output > is > > the following (openstack is the host of the controller). 
I dont see any > > other node here, even when i execute the discover_hosts command > > > > +-----------+--------------------------------------+-----------+ > > | Cell Name | Cell UUID | Hostname | > > +-----------+--------------------------------------+-----------+ > > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | > > +-----------+--------------------------------------+-----------+ > > > > > > Also this is the output from my controller when i use the command: sudo > -s > > /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not > sure > > if this helps) > > Found 2 cell mappings. > > Skipping cell0 since it does not contain hosts. > > Getting computes from cell 'cell1': 1522c22f-64d4-4882-8ae8-ed0f9407407c > > Found 0 unmapped computes in cell: 1522c22f-64d4-4882-8ae8-ed0f9407407c > > > > any thoughts? > > > > > > best, > > Pavlos. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Wed Nov 11 14:03:28 2020 From: eblock at nde.ag (Eugen Block) Date: Wed, 11 Nov 2020 14:03:28 +0000 Subject: Cannot deploly an instance at a specific host --(no such host found) [nova] In-Reply-To: References: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> Message-ID: <20201111140328.Horde.e4UaBmIP6Q_GOnn4VutjAea@webmail.nde.ag> There might be some mixup during the setup, I'm not sure how the other cell would be created. I'd probably delete the cell with UUID 1a0fde85-8906-46fb-b721-01a28c978439 and retry the discover_hosts with the right cell UUID: nova-manage cell_v2 delete_cell --cell_uuid 1a0fde85-8906-46fb-b721-01a28c978439 nova-manage cell_v2 discover_hosts --cell_uuid 1522c22f-64d4-4882-8ae8-ed0f9407407c Does that work? Zitat von Pavlos Basaras : > Hello, > > yes i have this discover_hosts_in_cells_interval = 300 > > > interestingly when i issued a combination of map_cell_and_hosts and > update_cell > > the output from: nova-manage cell_v2 list_hosts > +-----------+--------------------------------------+-------------+ > | Cell Name | Cell UUID | Hostname | > +-----------+--------------------------------------+-------------+ > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | computenode | > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | nrUE | > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | > +-----------+--------------------------------------+-------------+ > > when the new compute nodes seem to not have a cell mapped > > best, > P. > > > On Wed, Nov 11, 2020 at 3:26 PM Eugen Block wrote: > >> Hm, >> >> indeed rather strange to me. >> >> Do you see anything in the nova-scheduler.log? If you activated the >> >> discover_hosts_in_cells_interval = 300 >> >> it should query for new hosts every 5 minutes. >> >> >> >> Zitat von Pavlos Basaras : >> >> > Hello, >> > >> > thanks very much for your prompt reply. >> > >> > >> > regarding the first command "nova-manage cell_v2 list_hosts" the output >> is >> > the following (openstack is the host of the controller). 
I dont see any >> > other node here, even when i execute the discover_hosts command >> > >> > +-----------+--------------------------------------+-----------+ >> > | Cell Name | Cell UUID | Hostname | >> > +-----------+--------------------------------------+-----------+ >> > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | >> > +-----------+--------------------------------------+-----------+ >> > >> > >> > Also this is the output from my controller when i use the command: sudo >> -s >> > /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not >> sure >> > if this helps) >> > Found 2 cell mappings. >> > Skipping cell0 since it does not contain hosts. >> > Getting computes from cell 'cell1': 1522c22f-64d4-4882-8ae8-ed0f9407407c >> > Found 0 unmapped computes in cell: 1522c22f-64d4-4882-8ae8-ed0f9407407c >> > >> > any thoughts? >> > >> > >> > best, >> > Pavlos. >> >> >> >> >> From florian at datalounges.com Wed Nov 11 14:45:51 2020 From: florian at datalounges.com (florian at datalounges.com) Date: Wed, 11 Nov 2020 16:45:51 +0200 Subject: VS: [octavia] Timeouts during building of lb? But then successful In-Reply-To: References: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> Message-ID: <006701d6b839$5da32d30$18e98790$@datalounges.com> Hi just as an update, because i think its rude to ask for help and not provide the solution, however stupid it may be. WE managed to get this working. So because we have a lot of worker threads through the openstack services, the connections ran out on Mysql 8 and obviously we increased the max_connections. This however ended up just closing the connections with no explanation , which was the original problem. It turns out that we used in the past open_files_limit : -1 , whichi n mysql 8 signifies 10000, however as described in some redhatt Bugzilla article, this seems to be not enough. As soon as we increased it to 65000 (our linux limit is much higher than that), everything went perfectly... Octavia now deploys within 1 minute. And even through hosted Kubernetes we deploy a LB via Octavia in under 3 minutes. Thank you Michael again for pointing me into the right direction //F -----Alkuperäinen viesti----- Lähettäjä: Michael Johnson Lähetetty: Monday, 9 November 2020 18.41 Vastaanottaja: Florian Rommel Kopio: openstack-discuss Aihe: Re: [octavia] Timeouts during building of lb? But then successful Hi Florian, That is very unusual. It typically takes less than 30 seconds for a load balancer to be provisioned. It definitely sounds like the mysql instance is having trouble. This can also cause longer term issues if the query response time drops to 10 seconds or more(0.001 is normal), which could trigger unnecessary failovers. In Octavia there are layers of "retries" to attempt to handle clouds that are having trouble. It sounds like database issues are triggering one or more of these retries. There are a few retries that will be in play for database transactions: MySQL internal retries/timeouts such as lock timeouts (logged on the mysql side) oslo.db includes some automatic retries (typically not logged without configuration file settings) Octavia tenacity and flow retries (Typically logged if the configuration file has Debug = True enabled) This may also be a general network connection issue. The default REST timeouts (used when we connect to the amphora agents) is 600, I'm wondering if the lb-mgmt-network is also having an issue. Please check your health manager log files. 
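(A hedged aside for anyone debugging the same symptoms: assuming shell access to the database host and a MySQL admin account, the effective limits can be confirmed with something like the following; the account name is just a placeholder.)

  # values MySQL is actually running with
  mysql -u root -p -e "SHOW VARIABLES LIKE 'max_connections'; SHOW VARIABLES LIKE 'open_files_limit';"
  # number of client connections currently open
  mysql -u root -p -e "SHOW GLOBAL STATUS LIKE 'Threads_connected';"
  # file-descriptor limit of the running mysqld process as seen by the kernel
  grep 'open files' /proc/$(pidof mysqld)/limits

If the reported open_files_limit is much lower than what was configured, the process is typically being capped by the service manager rather than by my.cnf.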
If there are database query time issues logged, it would point specifically to a mysql issue. In the past we have seen mysql clustering setups that were bad and caused performance issues (flipping primary instance, lock contention between the instances, etc.). You should not be seeing any log messages that the mysql database went away, that is not normal. Michael On Sun, Nov 8, 2020 at 7:06 AM Florian Rommel wrote: > > Hi, so we have a fully functioning setup of octavia on ussuri and it works nicely, when it competes. > So here is what happens: > From octavia api to octavia worker takes 20 seconds for the job to be initiated. > The loadbalancer gets built quickly and then we get a mysql went away error, the listener gets built and then a member , that works too, then the mysql error comes up with query took too long to execute. > Now this is where it gets weird. This is all within the first 2 - 3 minutes. > At this point it hangs and takes 10 minutes (600 seconds) for the next step to complete and then another 10 minutes and another 10 until it’s completed. > It seems there is a timeout somewhere but even with debug on we do not see what is going on. Does anyone have a mysql 8 running and octavia executing fine? And could send me their redacted octavia or mysql conf files? We didn’t touch them but it seems that there is something off.. > especially since it then completes and works extremely nicely. > I would highly appreciate it , even off list. > Best regards, > //f > > From fungi at yuggoth.org Wed Nov 11 14:46:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 11 Nov 2020 14:46:18 +0000 Subject: Victoria Release Community Meeting In-Reply-To: <10079933.t8I2mb3pXR@p1> References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> <10079933.t8I2mb3pXR@p1> Message-ID: <20201111144618.ndckzezqgbb2xble@yuggoth.org> On 2020-11-11 13:24:25 +0100 (+0100), Slawek Kaplonski wrote: [...] > Actually foundation has Jitsii server also - see > https://meetpad.opendev.org/ In Neutron team we were using it > almost without problems during last PTG. It's not really "the foundation" though, it's run by the OpenDev Collaboratory sysadmins and community. > But problem which we had with it was that it didn't work for > people from China AFAIK so we had to switch to Zoom finally. [...] This is unsubstantiated. We suspect people worldwide experience problems with getting WebRTC traffic through corporate firewalls, but on the whole people who live in mainland China have been conditioned to assume anything which doesn't work is being blocked by the government. We're working with some folks in China to attempt to prove or disprove it, but coordinating between timezones has slowed progress on that. We hope to have a better understanding of the local access problems for China, if any, soon. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From johnsomor at gmail.com Wed Nov 11 15:37:48 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 11 Nov 2020 07:37:48 -0800 Subject: [octavia] Timeouts during building of lb? But then successful In-Reply-To: <006701d6b839$5da32d30$18e98790$@datalounges.com> References: <6266636F-EB91-4CAC-B5CA-228710483B67@datalounges.com> <006701d6b839$5da32d30$18e98790$@datalounges.com> Message-ID: Florian, I'm glad you are up and running. 
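(To make the fix Florian describes stick across restarts, a minimal sketch, assuming a systemd-based host and a service unit named mysql; on other distributions the unit may be called mysqld and the config path will differ.)

  # put the limits in the server config, e.g. /etc/mysql/mysql.conf.d/mysqld.cnf:
  #   [mysqld]
  #   max_connections  = 4096     # example value only
  #   open_files_limit = 65000
  # systemd can still cap the process below that, so raise the unit limit as well:
  sudo systemctl edit mysql       # add:  [Service]  LimitNOFILE=65000
  sudo systemctl restart mysql
  # confirm what the restarted server actually picked up
  mysql -u root -p -e "SHOW VARIABLES LIKE 'open_files_limit';"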
Thank you for providing the feedback on what was causing your issue, it will help us help others in the future. Michael On Wed, Nov 11, 2020 at 6:46 AM wrote: > > Hi just as an update, because i think its rude to ask for help and not provide the solution, however stupid it may be. > > WE managed to get this working. So because we have a lot of worker threads through the openstack services, the connections ran out on Mysql 8 and obviously we increased the max_connections. > > This however ended up just closing the connections with no explanation , which was the original problem. It turns out that we used in the past open_files_limit : -1 , whichi n mysql 8 signifies 10000, however as described in some redhatt Bugzilla article, this seems to be not enough. As soon as we increased it to 65000 (our linux limit is much higher than that), everything went perfectly... > > Octavia now deploys within 1 minute. And even through hosted Kubernetes we deploy a LB via Octavia in under 3 minutes. > > Thank you Michael again for pointing me into the right direction > //F > > -----Alkuperäinen viesti----- > Lähettäjä: Michael Johnson > Lähetetty: Monday, 9 November 2020 18.41 > Vastaanottaja: Florian Rommel > Kopio: openstack-discuss > Aihe: Re: [octavia] Timeouts during building of lb? But then successful > > Hi Florian, > > That is very unusual. It typically takes less than 30 seconds for a load balancer to be provisioned. It definitely sounds like the mysql instance is having trouble. This can also cause longer term issues if the query response time drops to 10 seconds or more(0.001 is normal), which could trigger unnecessary failovers. > > In Octavia there are layers of "retries" to attempt to handle clouds that are having trouble. It sounds like database issues are triggering one or more of these retries. > There are a few retries that will be in play for database transactions: > MySQL internal retries/timeouts such as lock timeouts (logged on the mysql side) oslo.db includes some automatic retries (typically not logged without configuration file settings) Octavia tenacity and flow retries (Typically logged if the configuration file has Debug = True enabled) > > This may also be a general network connection issue. The default REST timeouts (used when we connect to the amphora agents) is 600, I'm wondering if the lb-mgmt-network is also having an issue. > > Please check your health manager log files. If there are database query time issues logged, it would point specifically to a mysql issue. In the past we have seen mysql clustering setups that were bad and caused performance issues (flipping primary instance, lock contention between the instances, etc.). You should not be seeing any log messages that the mysql database went away, that is not normal. > > Michael > > On Sun, Nov 8, 2020 at 7:06 AM Florian Rommel wrote: > > > > Hi, so we have a fully functioning setup of octavia on ussuri and it works nicely, when it competes. > > So here is what happens: > > From octavia api to octavia worker takes 20 seconds for the job to be initiated. > > The loadbalancer gets built quickly and then we get a mysql went away error, the listener gets built and then a member , that works too, then the mysql error comes up with query took too long to execute. > > Now this is where it gets weird. This is all within the first 2 - 3 minutes. > > At this point it hangs and takes 10 minutes (600 seconds) for the next step to complete and then another 10 minutes and another 10 until it’s completed. 
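(Hedged side note on the package question discussed below: a quick way to see which podman build is actually installed on an overcloud node is simply)

  rpm -q podman
  sudo podman --version

Anything older than the 1.6.4-15 build referenced in this thread would be consistent with the SELinux-related permission denials.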
> > It seems there is a timeout somewhere but even with debug on we do not see what is going on. Does anyone have a mysql 8 running and octavia executing fine? And could send me their redacted octavia or mysql conf files? We didn’t touch them but it seems that there is something off.. > > especially since it then completes and works extremely nicely. > > I would highly appreciate it , even off list. > > Best regards, > > //f > > > > > > > From eblock at nde.ag Wed Nov 11 15:41:17 2020 From: eblock at nde.ag (Eugen Block) Date: Wed, 11 Nov 2020 15:41:17 +0000 Subject: Question about the instance snapshot In-Reply-To: Message-ID: <20201111154117.Horde.rS6VTlLeua6yUDh3vuci3HY@webmail.nde.ag> Hi, taking an instance snapshot only applies to the root disk, other attached disks are not included. I don't have an explanation but I think it would be quite difficult to merge all contents from different disks into one image. If this is about backup and not creating new base images and your instances are volume-based you could create cinder snapshots of those volumes, for example by creating consistency-groups to have all volumes in a consistent state. If this is more about creating new base images rather than backups and you have an instance with its filesystem distributed across multiple volumes it would probably be better to either move everything to a single volume (easy when you're using LVM) or resize that instance with a larger disk size. There are several ways, it depends on your actual requirement. Regards, Eugen Zitat von Henry lol : > Hello, everyone > > I'm wondering whether the snapshot from the instance saves all disks > attached to the instance or only the main(=first) disk, because I can't > find any clear description for it. > > If the latter is true, should I detach all disks except for the main from > the instance before taking a snapshot, and why doesn't it support all > attached disks? > > Thanks > Sincerely, From oliver.weinmann at me.com Wed Nov 11 15:49:01 2020 From: oliver.weinmann at me.com (Oliver Weinmann) Date: Wed, 11 Nov 2020 15:49:01 -0000 Subject: =?utf-8?B?UmU6wqBDZW50T1MgOCBVc3N1cmkgY2FuJ3QgbGF1bmNoIGluc3RhbmNlIC91?= =?utf-8?B?c3IvbGliZXhlYy9xZW11LWt2bTogUGVybWlzc2lvbiBkZW5pZWQ=?= Message-ID: <07cd5a1b-1405-4e76-ae9d-fbce447ed8d3@me.com> Hi again, sorry to pick up this old post again but I manged to figure out what's wrong. The error: end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) only arises when using the nano flavor: openstack flavor create --id 0 --vcpus 1 --ram 64 --disk 1 m1.nano It works fine when using 128 instead of 64MB RAM: openstack flavor create --id 0 --vcpus 1 --ram 128 --disk 1 m1.nano Cheers, Oliver Am 19. Oktober 2020 um 16:21 schrieb Oliver Weinmann : Ok, I will try to disable selinux and deploy one more compute node. I just stumbled across another issue, not sure if it is related. The instance seems to be deployed just fine but now I looked on the console and neither cirros nor centos 7 seem to be booting up correctly. on cirros i see an error: [ 0.846019] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]--- and on centos7: error: not a correct XFS inode. I tried to create with ephemeral and volume. Cheers, Oliver Am 19. Oktober 2020 um 16:09 schrieb Alex Schultz : On Mon, Oct 19, 2020 at 7:59 AM Oliver Weinmann wrote: First of all thanks a lot for the quick reply. 
I just checked and it seems that the package is really not available for centos8 from the upstream repo: https://centos.pkgs.org/8/centos-appstream-x86_64/podman-1.6.4-15.module_el8.2.0+465+f9348e8f.x86_64.rpm.html When you say it should be available via rdo, does this mean I have to add or use a different repo when deploying undercloud / overcloud? I have followed the tripleo guide to deploy it: I thought we shipped it, maybe we don't because we run with selinux disabled so it doesn't show up in CI. https://docs.openstack.org/tripleo-docs/latest/ And is there a way to disable selinux on all overcloud nodes by default? I guess it is the default to disable it? Set the following in an environment file as part of the deployment: parameter_defaults: SELinuxMode: permissive Cheers, Oliver Am 19. Oktober 2020 um 15:29 schrieb Alex Schultz : On Mon, Oct 19, 2020 at 7:09 AM Oliver Weinmann wrote: Hi all, I have successfully deployed the overcloud many many times, but this time I have a strange behaviour. Whenever I try to launch an instance it fails. I checked the logs on the compute node and saw this error: Failed to build and run instance: libvirt.libvirtError: internal error: process exited while connecting to monitor: libvirt: error : cannot execute binary /usr/libexec/qemu-kvm: Permission denied googling led me to the solution to disable selinux: setenforce 0 I have not made this change persistent yet, as I would like to know why I'm facing this issue right now. What is actually the default for the overcloud nodes SeLinux? Enforcing, permissive or disabled? I build the ipa and overcloud image myself as I had to include drivers. Is this maybe the reason why SeLinux is now enabled, but is actually disabled when using the default ipa images? From a TripleO perspective, we do not officially support selinux enabled when running with CentOS. In theory it should work, however it is very dependent on versions. I think you're likely running into an issue with the correct version of podman which is likely causing this. We've had some issues as of late which require a very specific version of podman in order to work correctly with nova compute when running with selinux enabled. You need 1.6.4-15 or higher which I don't think is available with centos8. It should be available via RDO. Related: https://review.opendev.org/#/c/736173/ Thanks and Best Regards, Oliver -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Wed Nov 11 16:35:56 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?q?Bal=E1zs?= Gibizer) Date: Wed, 11 Nov 2020 17:35:56 +0100 Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service Message-ID: Dear packagers and deployment engine developers, Since Icehouse nova-compute service does not need any database configuration as it uses the message bus to access data in the database via the conductor service. Also, the nova configuration guide states that the nova-compute service should not have the [api_database]connection config set. Having any DB credentials configured for the nova-compute is a security risk as well since that service runs close to the hypervisor. Since Rocky[1] nova-compute service fails if you configure API DB credentials and set upgrade_level config to 'auto'. Now we are proposing a patch[2] that makes nova-compute fail at startup if the [database]connection or the [api_database]connection is configured. 
We know that this breaks at least the rpm packaging, debian packaging, and puppet-nova. The problem there is that in an all-in-on deployment scenario the nova.conf file generated by these tools is shared between all the nova services and therefore nova-compute sees DB credentials. As a counter-example, devstack generates a separate nova-cpu.conf and passes that to the nova-compute service even in an all-in-on setup. The nova team would like to merge [2] during Wallaby but we are OK to delay the patch until Wallaby Milestone 2 so that the packagers and deployment tools can catch up. Please let us know if you are impacted and provide a way to track when you are ready with the modification that allows [2] to be merged. There was a long discussion on #openstack-nova today[3] around this topic. So you can find more detailed reasoning there[3]. Cheers, gibi [1] https://github.com/openstack/nova/blob/dc93e3b510f53d5b2198c8edd22528f0c899617e/nova/compute/rpcapi.py#L441-L457 [2] https://review.opendev.org/#/c/762176 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T10:51:23 -- http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T14:40:51 From pbasaras at gmail.com Wed Nov 11 16:38:15 2020 From: pbasaras at gmail.com (Pavlos Basaras) Date: Wed, 11 Nov 2020 18:38:15 +0200 Subject: Cannot deploly an instance at a specific host --(no such host found) [nova] In-Reply-To: <20201111140328.Horde.e4UaBmIP6Q_GOnn4VutjAea@webmail.nde.ag> References: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> <20201111140328.Horde.e4UaBmIP6Q_GOnn4VutjAea@webmail.nde.ag> Message-ID: Hello, maybe there is stg wrong with the installation. Let me clarify a few things. I have devstack deployed in a VM (pre-installed: keystone, glance, nova, placement, cinder, neutron, and horizon.) I can deploy machines successfully at the devstack controller space --everything seems to work fine. --> is seems to work fine with opensource mano as well I am trying to add another pc as a compute host to be able to deploy vms at this new compute host following ( https://docs.openstack.org/nova/queens/install/compute-install-ubuntu.html) Also attached the nova.conf file. The only major differences are: --the transport url for rabbitmq which i made according to the transport url of the controller, i.e., instead of rabbit://openstack:linux at controller i have rabbit://stackrabbit:linux at controller -- i replaced the ports with the service, e.g., instead using the 5000 port --> "http://controller/identity/v3" instead of "http://controller:5000/v3" please excuse (any) newbie questions all the best, Pavlos. On Wed, Nov 11, 2020 at 4:03 PM Eugen Block wrote: > There might be some mixup during the setup, I'm not sure how the other > cell would be created. I'd probably delete the cell with UUID > 1a0fde85-8906-46fb-b721-01a28c978439 and retry the discover_hosts with > the right cell UUID: > > nova-manage cell_v2 delete_cell --cell_uuid > 1a0fde85-8906-46fb-b721-01a28c978439 > nova-manage cell_v2 discover_hosts --cell_uuid > 1522c22f-64d4-4882-8ae8-ed0f9407407c > > Does that work? 
> > > Zitat von Pavlos Basaras : > > > Hello, > > > > yes i have this discover_hosts_in_cells_interval = 300 > > > > > > interestingly when i issued a combination of map_cell_and_hosts and > > update_cell > > > > the output from: nova-manage cell_v2 list_hosts > > +-----------+--------------------------------------+-------------+ > > | Cell Name | Cell UUID | Hostname | > > +-----------+--------------------------------------+-------------+ > > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | computenode | > > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | nrUE | > > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | > > +-----------+--------------------------------------+-------------+ > > > > when the new compute nodes seem to not have a cell mapped > > > > best, > > P. > > > > > > On Wed, Nov 11, 2020 at 3:26 PM Eugen Block wrote: > > > >> Hm, > >> > >> indeed rather strange to me. > >> > >> Do you see anything in the nova-scheduler.log? If you activated the > >> > >> discover_hosts_in_cells_interval = 300 > >> > >> it should query for new hosts every 5 minutes. > >> > >> > >> > >> Zitat von Pavlos Basaras : > >> > >> > Hello, > >> > > >> > thanks very much for your prompt reply. > >> > > >> > > >> > regarding the first command "nova-manage cell_v2 list_hosts" the > output > >> is > >> > the following (openstack is the host of the controller). I dont see > any > >> > other node here, even when i execute the discover_hosts command > >> > > >> > +-----------+--------------------------------------+-----------+ > >> > | Cell Name | Cell UUID | Hostname | > >> > +-----------+--------------------------------------+-----------+ > >> > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | > >> > +-----------+--------------------------------------+-----------+ > >> > > >> > > >> > Also this is the output from my controller when i use the command: > sudo > >> -s > >> > /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not > >> sure > >> > if this helps) > >> > Found 2 cell mappings. > >> > Skipping cell0 since it does not contain hosts. > >> > Getting computes from cell 'cell1': > 1522c22f-64d4-4882-8ae8-ed0f9407407c > >> > Found 0 unmapped computes in cell: > 1522c22f-64d4-4882-8ae8-ed0f9407407c > >> > > >> > any thoughts? > >> > > >> > > >> > best, > >> > Pavlos. > >> > >> > >> > >> > >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- [DEFAULT] #log_dir = /var/log/nova lock_path = /var/lock/nova state_path = /var/lib/nova #pavlos --start #transport_url = rabbit://admin:linux at 192.168.40.184 #transport_url = rabbit://openstack:linux at controller transport_url = rabbit://stackrabbit:linux at controller #transport_url = rabbit://admin:linux at 192.168.40.184:5672/ my_ip = 192.168.111.139 use_neutron = True firewall_driver = nova.virt.firewall.NoopFirewallDriver #pavlos --end # # From nova.conf # # # Availability zone for internal services. # # This option determines the availability zone for the various internal nova # services, such as 'nova-scheduler', 'nova-conductor', etc. # # Possible values: # # * Any string representing an existing availability zone name. # (string value) #internal_service_availability_zone = internal # # Default availability zone for compute services. # # This option determines the default availability zone for 'nova-compute' # services, which will be used if the service(s) do not belong to aggregates # with # availability zone metadata. 
# # Possible values: # # * Any string representing an existing availability zone name. # (string value) #default_availability_zone = nova # # Default availability zone for instances. # # This option determines the default availability zone for instances, which will # be used when a user does not specify one when creating an instance. The # instance(s) will be bound to this availability zone for their lifetime. # # Possible values: # # * Any string representing an existing availability zone name. # * None, which means that the instance can move from one availability zone to # another during its lifetime if it is moved from one compute node to another. # (string value) #default_schedule_zone = # Length of generated instance admin passwords. (integer value) # Minimum value: 0 #password_length = 12 # # Time period to generate instance usages for. It is possible to define optional # offset to given period by appending @ character followed by a number defining # offset. # # Possible values: # # * period, example: ``hour``, ``day``, ``month` or ``year`` # * period with offset, example: ``month at 15`` will result in monthly audits # starting on 15th day of month. # (string value) #instance_usage_audit_period = month # # Start and use a daemon that can run the commands that need to be run with # root privileges. This option is usually enabled on nodes that run nova compute # processes. # (boolean value) #use_rootwrap_daemon = false # # Path to the rootwrap configuration file. # # Goal of the root wrapper is to allow a service-specific unprivileged user to # run a number of actions as the root user in the safest manner possible. # The configuration file used here must match the one defined in the sudoers # entry. # (string value) #rootwrap_config = /etc/nova/rootwrap.conf # Explicitly specify the temporary working directory. (string value) #tempdir = # DEPRECATED: # Determine if monkey patching should be applied. # # Related options: # # * ``monkey_patch_modules``: This must have values set for this option to # have any effect # (boolean value) # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. # Reason: # Monkey patching nova is not tested, not supported, and is a barrier # for interoperability. #monkey_patch = false # DEPRECATED: # List of modules/decorators to monkey patch. # # This option allows you to patch a decorator for all functions in specified # modules. # # Possible values: # # * nova.compute.api:nova.notifications.notify_decorator # * [...] # # Related options: # # * ``monkey_patch``: This must be set to ``True`` for this option to # have any effect # (list value) # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. # Reason: # Monkey patching nova is not tested, not supported, and is a barrier # for interoperability. #monkey_patch_modules = nova.compute.api:nova.notifications.notify_decorator # # Defines which driver to use for controlling virtualization. # # Possible values: # # * ``libvirt.LibvirtDriver`` # * ``xenapi.XenAPIDriver`` # * ``fake.FakeDriver`` # * ``ironic.IronicDriver`` # * ``vmwareapi.VMwareVCDriver`` # * ``hyperv.HyperVDriver`` # * ``powervm.PowerVMDriver`` # (string value) #compute_driver = # # Allow destination machine to match source for resize. Useful when # testing in single-host environments. By default it is not allowed # to resize to the same host. Setting this option to true will add # the same host to the destination options. 
Also set to true # if you allow the ServerGroupAffinityFilter and need to resize. # (boolean value) #allow_resize_to_same_host = false # # Image properties that should not be inherited from the instance # when taking a snapshot. # # This option gives an opportunity to select which image-properties # should not be inherited by newly created snapshots. # # Possible values: # # * A comma-separated list whose item is an image property. Usually only # the image properties that are only needed by base images can be included # here, since the snapshots that are created from the base images don't # need them. # * Default list: cache_in_nova, bittorrent, img_signature_hash_method, # img_signature, img_signature_key_type, # img_signature_certificate_uuid # # (list value) #non_inheritable_image_properties = cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid # DEPRECATED: # When creating multiple instances with a single request using the # os-multiple-create API extension, this template will be used to build # the display name for each instance. The benefit is that the instances # end up with different hostnames. Example display names when creating # two VM's: name-1, name-2. # # Possible values: # # * Valid keys for the template are: name, uuid, count. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # This config changes API behaviour. All changes in API behaviour should be # discoverable. #multi_instance_display_name_template = %(name)s-%(count)d # # Maximum number of devices that will result in a local image being # created on the hypervisor node. # # A negative number means unlimited. Setting max_local_block_devices # to 0 means that any request that attempts to create a local disk # will fail. This option is meant to limit the number of local discs # (so root local disc that is the result of --image being used, and # any other ephemeral and swap disks). 0 does not mean that images # will be automatically converted to volumes and boot instances from # volumes - it just means that all requests that attempt to create a # local disk will fail. # # Possible values: # # * 0: Creating a local disk is not allowed. # * Negative number: Allows unlimited number of local discs. # * Positive number: Allows only these many number of local discs. # (Default value is 3). # (integer value) #max_local_block_devices = 3 # # A comma-separated list of monitors that can be used for getting # compute metrics. You can use the alias/name from the setuptools # entry points for nova.compute.monitors.* namespaces. If no # namespace is supplied, the "cpu." namespace is assumed for # backwards-compatibility. # # NOTE: Only one monitor per namespace (For example: cpu) can be loaded at # a time. # # Possible values: # # * An empty list will disable the feature (Default). # * An example value that would enable both the CPU and NUMA memory # bandwidth monitors that use the virt driver variant: # # compute_monitors = cpu.virt_driver, numa_mem_bw.virt_driver # (list value) #compute_monitors = # # The default format an ephemeral_volume will be formatted with on creation. # # Possible values: # # * ``ext2`` # * ``ext3`` # * ``ext4`` # * ``xfs`` # * ``ntfs`` (only for Windows guests) # (string value) #default_ephemeral_format = # # Determine if instance should boot or fail on VIF plugging timeout. 
# # Nova sends a port update to Neutron after an instance has been scheduled, # providing Neutron with the necessary information to finish setup of the port. # Once completed, Neutron notifies Nova that it has finished setting up the # port, at which point Nova resumes the boot of the instance since network # connectivity is now supposed to be present. A timeout will occur if the reply # is not received after a given interval. # # This option determines what Nova does when the VIF plugging timeout event # happens. When enabled, the instance will error out. When disabled, the # instance will continue to boot on the assumption that the port is ready. # # Possible values: # # * True: Instances should fail after VIF plugging timeout # * False: Instances should continue booting after VIF plugging timeout # (boolean value) #vif_plugging_is_fatal = true # # Timeout for Neutron VIF plugging event message arrival. # # Number of seconds to wait for Neutron vif plugging events to # arrive before continuing or failing (see 'vif_plugging_is_fatal'). # # Related options: # # * vif_plugging_is_fatal - If ``vif_plugging_timeout`` is set to zero and # ``vif_plugging_is_fatal`` is False, events should not be expected to # arrive at all. # (integer value) # Minimum value: 0 #vif_plugging_timeout = 300 # Path to '/etc/network/interfaces' template. # # The path to a template file for the '/etc/network/interfaces'-style file, # which # will be populated by nova and subsequently used by cloudinit. This provides a # method to configure network connectivity in environments without a DHCP # server. # # The template will be rendered using Jinja2 template engine, and receive a # top-level key called ``interfaces``. This key will contain a list of # dictionaries, one for each interface. # # Refer to the cloudinit documentaion for more information: # # https://cloudinit.readthedocs.io/en/latest/topics/datasources.html # # Possible values: # # * A path to a Jinja2-formatted template for a Debian '/etc/network/interfaces' # file. This applies even if using a non Debian-derived guest. # # Related options: # # * ``flat_inject``: This must be set to ``True`` to ensure nova embeds network # configuration information in the metadata provided through the config drive. # (string value) #injected_network_template = $pybasedir/nova/virt/interfaces.template # # The image preallocation mode to use. # # Image preallocation allows storage for instance images to be allocated up # front # when the instance is initially provisioned. This ensures immediate feedback is # given if enough space isn't available. In addition, it should significantly # improve performance on writes to new blocks and may even improve I/O # performance to prewritten blocks due to reduced fragmentation. # # Possible values: # # * "none" => no storage provisioning is done up front # * "space" => storage is fully allocated at instance start # (string value) # Possible values: # none - # space - #preallocate_images = none # # Enable use of copy-on-write (cow) images. # # QEMU/KVM allow the use of qcow2 as backing files. By disabling this, # backing files will not be used. # (boolean value) #use_cow_images = true # # Force conversion of backing images to raw format. # # Possible values: # # * True: Backing image files will be converted to raw image format # * False: Backing image files will not be converted # # Related options: # # * ``compute_driver``: Only the libvirt driver uses this option. 
# (boolean value) #force_raw_images = true # # Name of the mkfs commands for ephemeral device. # # The format is = # (multi valued) #virt_mkfs = # # Enable resizing of filesystems via a block device. # # If enabled, attempt to resize the filesystem by accessing the image over a # block device. This is done by the host and may not be necessary if the image # contains a recent version of cloud-init. Possible mechanisms require the nbd # driver (for qcow and raw), or loop (for raw). # (boolean value) #resize_fs_using_block_device = false # Amount of time, in seconds, to wait for NBD device start up. (integer value) # Minimum value: 0 #timeout_nbd = 10 # # Location of cached images. # # This is NOT the full path - just a folder name relative to '$instances_path'. # For per-compute-host cached images, set to '_base_$my_ip' # (string value) #image_cache_subdirectory_name = _base # Should unused base images be removed? (boolean value) #remove_unused_base_images = true # # Unused unresized base images younger than this will not be removed. # (integer value) #remove_unused_original_minimum_age_seconds = 86400 # # Generic property to specify the pointer type. # # Input devices allow interaction with a graphical framebuffer. For # example to provide a graphic tablet for absolute cursor movement. # # If set, the 'hw_pointer_model' image property takes precedence over # this configuration option. # # Possible values: # # * None: Uses default behavior provided by drivers (mouse on PS2 for # libvirt x86) # * ps2mouse: Uses relative movement. Mouse connected by PS2 # * usbtablet: Uses absolute movement. Tablet connect by USB # # Related options: # # * usbtablet must be configured with VNC enabled or SPICE enabled and SPICE # agent disabled. When used with libvirt the instance mode should be # configured as HVM. # (string value) # Possible values: # - # ps2mouse - # usbtablet - #pointer_model = usbtablet # # Defines which physical CPUs (pCPUs) can be used by instance # virtual CPUs (vCPUs). # # Possible values: # # * A comma-separated list of physical CPU numbers that virtual CPUs can be # allocated to by default. Each element should be either a single CPU number, # a range of CPU numbers, or a caret followed by a CPU number to be # excluded from a previous range. For example: # # vcpu_pin_set = "4-12,^8,15" # (string value) #vcpu_pin_set = # # Number of huge/large memory pages to reserved per NUMA host cell. # # Possible values: # # * A list of valid key=value which reflect NUMA node ID, page size # (Default unit is KiB) and number of pages to be reserved. # # reserved_huge_pages = node:0,size:2048,count:64 # reserved_huge_pages = node:1,size:1GB,count:1 # # In this example we are reserving on NUMA node 0 64 pages of 2MiB # and on NUMA node 1 1 page of 1GiB. # (dict value) #reserved_huge_pages = # # Amount of disk resources in MB to make them always available to host. The # disk usage gets reported back to the scheduler from nova-compute running # on the compute nodes. To prevent the disk resources from being considered # as available, this option can be used to reserve disk space for that host. # # Possible values: # # * Any positive integer representing amount of disk in MB to reserve # for the host. # (integer value) # Minimum value: 0 #reserved_host_disk_mb = 0 # # Amount of memory in MB to reserve for the host so that it is always available # to host processes. The host resources usage is reported back to the scheduler # continuously from nova-compute running on the compute node. 
To prevent the # host # memory from being considered as available, this option is used to reserve # memory for the host. # # Possible values: # # * Any positive integer representing amount of memory in MB to reserve # for the host. # (integer value) # Minimum value: 0 #reserved_host_memory_mb = 512 # # Number of physical CPUs to reserve for the host. The host resources usage is # reported back to the scheduler continuously from nova-compute running on the # compute node. To prevent the host CPU from being considered as available, # this option is used to reserve random pCPU(s) for the host. # # Possible values: # # * Any positive integer representing number of physical CPUs to reserve # for the host. # (integer value) # Minimum value: 0 #reserved_host_cpus = 0 # # This option helps you specify virtual CPU to physical CPU allocation ratio. # # From Ocata (15.0.0) this is used to influence the hosts selected by # the Placement API. Note that when Placement is used, the CoreFilter # is redundant, because the Placement API will have already filtered # out hosts that would have failed the CoreFilter. # # This configuration specifies ratio for CoreFilter which can be set # per compute node. For AggregateCoreFilter, it will fall back to this # configuration value if no per-aggregate setting is found. # # NOTE: This can be set per-compute, or if set to 0.0, the value # set on the scheduler node(s) or compute node(s) will be used # and defaulted to 16.0. Once set to a non-default value, it is not possible # to "unset" the config to get back to the default behavior. If you want # to reset back to the default, explicitly specify 16.0. # # NOTE: As of the 16.0.0 Pike release, this configuration option is ignored # for the ironic.IronicDriver compute driver and is hardcoded to 1.0. # # Possible values: # # * Any valid positive integer or float value # (floating point value) # Minimum value: 0 #cpu_allocation_ratio = 0.0 # # This option helps you specify virtual RAM to physical RAM # allocation ratio. # # From Ocata (15.0.0) this is used to influence the hosts selected by # the Placement API. Note that when Placement is used, the RamFilter # is redundant, because the Placement API will have already filtered # out hosts that would have failed the RamFilter. # # This configuration specifies ratio for RamFilter which can be set # per compute node. For AggregateRamFilter, it will fall back to this # configuration value if no per-aggregate setting found. # # NOTE: This can be set per-compute, or if set to 0.0, the value # set on the scheduler node(s) or compute node(s) will be used and # defaulted to 1.5. Once set to a non-default value, it is not possible # to "unset" the config to get back to the default behavior. If you want # to reset back to the default, explicitly specify 1.5. # # NOTE: As of the 16.0.0 Pike release, this configuration option is ignored # for the ironic.IronicDriver compute driver and is hardcoded to 1.0. # # Possible values: # # * Any valid positive integer or float value # (floating point value) # Minimum value: 0 #ram_allocation_ratio = 0.0 # # This option helps you specify virtual disk to physical disk # allocation ratio. # # From Ocata (15.0.0) this is used to influence the hosts selected by # the Placement API. Note that when Placement is used, the DiskFilter # is redundant, because the Placement API will have already filtered # out hosts that would have failed the DiskFilter. 
# # A ratio greater than 1.0 will result in over-subscription of the # available physical disk, which can be useful for more # efficiently packing instances created with images that do not # use the entire virtual disk, such as sparse or compressed # images. It can be set to a value between 0.0 and 1.0 in order # to preserve a percentage of the disk for uses other than # instances. # # NOTE: This can be set per-compute, or if set to 0.0, the value # set on the scheduler node(s) or compute node(s) will be used and # defaulted to 1.0. Once set to a non-default value, it is not possible # to "unset" the config to get back to the default behavior. If you want # to reset back to the default, explicitly specify 1.0. # # NOTE: As of the 16.0.0 Pike release, this configuration option is ignored # for the ironic.IronicDriver compute driver and is hardcoded to 1.0. # # Possible values: # # * Any valid positive integer or float value # (floating point value) # Minimum value: 0 #disk_allocation_ratio = 0.0 # # Console proxy host to be used to connect to instances on this host. It is the # publicly visible name for the console host. # # Possible values: # # * Current hostname (default) or any string representing hostname. # (string value) #console_host = # # Name of the network to be used to set access IPs for instances. If there are # multiple IPs to choose from, an arbitrary one will be chosen. # # Possible values: # # * None (default) # * Any string representing network name. # (string value) #default_access_ip_network_name = # # Whether to batch up the application of IPTables rules during a host restart # and apply all at the end of the init phase. # (boolean value) #defer_iptables_apply = false # # Specifies where instances are stored on the hypervisor's disk. # It can point to locally attached storage or a directory on NFS. # # Possible values: # # * $state_path/instances where state_path is a config option that specifies # the top-level directory for maintaining nova's state. (default) or # Any string representing directory path. # # Related options: # # * ``[workarounds]/ensure_libvirt_rbd_instance_dir_cleanup`` # (string value) #instances_path = $state_path/instances # # This option enables periodic compute.instance.exists notifications. Each # compute node must be configured to generate system usage data. These # notifications are consumed by OpenStack Telemetry service. # (boolean value) #instance_usage_audit = false # # Maximum number of 1 second retries in live_migration. It specifies number # of retries to iptables when it complains. It happens when an user continuously # sends live-migration request to same host leading to concurrent request # to iptables. # # Possible values: # # * Any positive integer representing retry count. # (integer value) # Minimum value: 0 #live_migration_retry_count = 30 # # This option specifies whether to start guests that were running before the # host rebooted. It ensures that all of the instances on a Nova compute node # resume their state each time the compute node boots or restarts. # (boolean value) #resume_guests_state_on_host_boot = false # # Number of times to retry network allocation. It is required to attempt network # allocation retries if the virtual interface plug fails. # # Possible values: # # * Any positive integer representing retry count. # (integer value) # Minimum value: 0 #network_allocate_retries = 0 # # Limits the maximum number of instance builds to run concurrently by # nova-compute. 
Compute service can attempt to build an infinite number of # instances, if asked to do so. This limit is enforced to avoid building # unlimited instance concurrently on a compute node. This value can be set # per compute node. # # Possible Values: # # * 0 : treated as unlimited. # * Any positive integer representing maximum concurrent builds. # (integer value) # Minimum value: 0 #max_concurrent_builds = 10 # # Maximum number of live migrations to run concurrently. This limit is enforced # to avoid outbound live migrations overwhelming the host/network and causing # failures. It is not recommended that you change this unless you are very sure # that doing so is safe and stable in your environment. # # Possible values: # # * 0 : treated as unlimited. # * Negative value defaults to 0. # * Any positive integer representing maximum number of live migrations # to run concurrently. # (integer value) #max_concurrent_live_migrations = 1 # # Number of times to retry block device allocation on failures. Starting with # Liberty, Cinder can use image volume cache. This may help with block device # allocation performance. Look at the cinder image_volume_cache_enabled # configuration option. # # Possible values: # # * 60 (default) # * If value is 0, then one attempt is made. # * Any negative value is treated as 0. # * For any value > 0, total attempts are (value + 1) # (integer value) #block_device_allocate_retries = 60 # # Number of greenthreads available for use to sync power states. # # This option can be used to reduce the number of concurrent requests # made to the hypervisor or system with real instance power states # for performance reasons, for example, with Ironic. # # Possible values: # # * Any positive integer representing greenthreads count. # (integer value) #sync_power_state_pool_size = 1000 # # Number of seconds to wait between runs of the image cache manager. # # Possible values: # * 0: run at the default rate. # * -1: disable # * Any other value # (integer value) # Minimum value: -1 #image_cache_manager_interval = 2400 # # Interval to pull network bandwidth usage info. # # Not supported on all hypervisors. If a hypervisor doesn't support bandwidth # usage, it will not get the info in the usage events. # # Possible values: # # * 0: Will run at the default periodic interval. # * Any value < 0: Disables the option. # * Any positive integer in seconds. # (integer value) #bandwidth_poll_interval = 600 # # Interval to sync power states between the database and the hypervisor. # # The interval that Nova checks the actual virtual machine power state # and the power state that Nova has in its database. If a user powers # down their VM, Nova updates the API to report the VM has been # powered down. Should something turn on the VM unexpectedly, # Nova will turn the VM back off to keep the system in the expected # state. # # Possible values: # # * 0: Will run at the default periodic interval. # * Any value < 0: Disables the option. # * Any positive integer in seconds. # # Related options: # # * If ``handle_virt_lifecycle_events`` in workarounds_group is # false and this option is negative, then instances that get out # of sync between the hypervisor and the Nova database will have # to be synchronized manually. # (integer value) #sync_power_state_interval = 600 # # Interval between instance network information cache updates. 
# # Number of seconds after which each compute node runs the task of # querying Neutron for all of its instances networking information, # then updates the Nova db with that information. Nova will never # update it's cache if this option is set to 0. If we don't update the # cache, the metadata service and nova-api endpoints will be proxying # incorrect network data about the instance. So, it is not recommended # to set this option to 0. # # Possible values: # # * Any positive integer in seconds. # * Any value <=0 will disable the sync. This is not recommended. # (integer value) #heal_instance_info_cache_interval = 60 # # Interval for reclaiming deleted instances. # # A value greater than 0 will enable SOFT_DELETE of instances. # This option decides whether the server to be deleted will be put into # the SOFT_DELETED state. If this value is greater than 0, the deleted # server will not be deleted immediately, instead it will be put into # a queue until it's too old (deleted time greater than the value of # reclaim_instance_interval). The server can be recovered from the # delete queue by using the restore action. If the deleted server remains # longer than the value of reclaim_instance_interval, it will be # deleted by a periodic task in the compute service automatically. # # Note that this option is read from both the API and compute nodes, and # must be set globally otherwise servers could be put into a soft deleted # state in the API and never actually reclaimed (deleted) on the compute # node. # # Possible values: # # * Any positive integer(in seconds) greater than 0 will enable # this option. # * Any value <=0 will disable the option. # (integer value) #reclaim_instance_interval = 0 # # Interval for gathering volume usages. # # This option updates the volume usage cache for every # volume_usage_poll_interval number of seconds. # # Possible values: # # * Any positive integer(in seconds) greater than 0 will enable # this option. # * Any value <=0 will disable the option. # (integer value) #volume_usage_poll_interval = 0 # # Interval for polling shelved instances to offload. # # The periodic task runs for every shelved_poll_interval number # of seconds and checks if there are any shelved instances. If it # finds a shelved instance, based on the 'shelved_offload_time' config # value it offloads the shelved instances. Check 'shelved_offload_time' # config option description for details. # # Possible values: # # * Any value <= 0: Disables the option. # * Any positive integer in seconds. # # Related options: # # * ``shelved_offload_time`` # (integer value) #shelved_poll_interval = 3600 # # Time before a shelved instance is eligible for removal from a host. # # By default this option is set to 0 and the shelved instance will be # removed from the hypervisor immediately after shelve operation. # Otherwise, the instance will be kept for the value of # shelved_offload_time(in seconds) so that during the time period the # unshelve action will be faster, then the periodic task will remove # the instance from hypervisor after shelved_offload_time passes. # # Possible values: # # * 0: Instance will be immediately offloaded after being # shelved. # * Any value < 0: An instance will never offload. # * Any positive integer in seconds: The instance will exist for # the specified number of seconds before being offloaded. # (integer value) #shelved_offload_time = 0 # # Interval for retrying failed instance file deletes. # # This option depends on 'maximum_instance_delete_attempts'. 
# This option specifies how often to retry deletes whereas # 'maximum_instance_delete_attempts' specifies the maximum number # of retry attempts that can be made. # # Possible values: # # * 0: Will run at the default periodic interval. # * Any value < 0: Disables the option. # * Any positive integer in seconds. # # Related options: # # * ``maximum_instance_delete_attempts`` from instance_cleaning_opts # group. # (integer value) #instance_delete_interval = 300 # # Interval (in seconds) between block device allocation retries on failures. # # This option allows the user to specify the time interval between # consecutive retries. 'block_device_allocate_retries' option specifies # the maximum number of retries. # # Possible values: # # * 0: Disables the option. # * Any positive integer in seconds enables the option. # # Related options: # # * ``block_device_allocate_retries`` in compute_manager_opts group. # (integer value) # Minimum value: 0 #block_device_allocate_retries_interval = 3 # # Interval between sending the scheduler a list of current instance UUIDs to # verify that its view of instances is in sync with nova. # # If the CONF option 'scheduler_tracks_instance_changes' is # False, the sync calls will not be made. So, changing this option will # have no effect. # # If the out of sync situations are not very common, this interval # can be increased to lower the number of RPC messages being sent. # Likewise, if sync issues turn out to be a problem, the interval # can be lowered to check more frequently. # # Possible values: # # * 0: Will run at the default periodic interval. # * Any value < 0: Disables the option. # * Any positive integer in seconds. # # Related options: # # * This option has no impact if ``scheduler_tracks_instance_changes`` # is set to False. # (integer value) #scheduler_instance_sync_interval = 120 # # Interval for updating compute resources. # # This option specifies how often the update_available_resources # periodic task should run. A number less than 0 means to disable the # task completely. Leaving this at the default of 0 will cause this to # run at the default periodic interval. Setting it to any positive # value will cause it to run at approximately that number of seconds. # # Possible values: # # * 0: Will run at the default periodic interval. # * Any value < 0: Disables the option. # * Any positive integer in seconds. # (integer value) #update_resources_interval = 0 # # Time interval after which an instance is hard rebooted automatically. # # When doing a soft reboot, it is possible that a guest kernel is # completely hung in a way that causes the soft reboot task # to not ever finish. Setting this option to a time period in seconds # will automatically hard reboot an instance if it has been stuck # in a rebooting state longer than N seconds. # # Possible values: # # * 0: Disables the option (default). # * Any positive integer in seconds: Enables the option. # (integer value) # Minimum value: 0 #reboot_timeout = 0 # # Maximum time in seconds that an instance can take to build. # # If this timer expires, instance status will be changed to ERROR. # Enabling this option will make sure an instance will not be stuck # in BUILD state for a longer period. # # Possible values: # # * 0: Disables the option (default) # * Any positive integer in seconds: Enables the option. # (integer value) # Minimum value: 0 #instance_build_timeout = 0 # # Interval to wait before un-rescuing an instance stuck in RESCUE. 
# # Possible values: # # * 0: Disables the option (default) # * Any positive integer in seconds: Enables the option. # (integer value) # Minimum value: 0 #rescue_timeout = 0 # # Automatically confirm resizes after N seconds. # # Resize functionality will save the existing server before resizing. # After the resize completes, the user is requested to confirm the resize. # The user has the opportunity to either confirm or revert all # changes. Confirm resize removes the original server and changes # server status from resized to active. Setting this option to a time # period (in seconds) will automatically confirm the resize if the # server is in the resized state for longer than that time. # # Possible values: # # * 0: Disables the option (default) # * Any positive integer in seconds: Enables the option. # (integer value) # Minimum value: 0 #resize_confirm_window = 0 # # Total time to wait in seconds for an instance to perform a clean # shutdown. # # It determines the overall period (in seconds) a VM is allowed to # perform a clean shutdown. While performing stop, rescue, shelve, and # rebuild operations, configuring this option gives the VM a chance # to perform a controlled shutdown before the instance is powered off. # The default timeout is 60 seconds. # # The timeout value can be overridden on a per-image basis by means # of os_shutdown_timeout, which is an image metadata setting allowing # different types of operating systems to specify how much time they # need to shut down cleanly. # # Possible values: # # * Any positive integer in seconds (default value is 60). # (integer value) # Minimum value: 1 #shutdown_timeout = 60 # # The compute service periodically checks for instances that have been # deleted in the database but remain running on the compute node. The # above option enables action to be taken when such instances are # identified. # # Possible values: # # * reap: Powers down the instances and deletes them (default) # * log: Logs a warning message about deletion of the resource # * shutdown: Powers down instances and marks them as non- # bootable which can be later used for debugging/analysis # * noop: Takes no action # # Related options: # # * running_deleted_instance_poll_interval # * running_deleted_instance_timeout # (string value) # Possible values: # noop - # log - # shutdown - # reap - #running_deleted_instance_action = reap # # Time interval in seconds to wait between runs for the clean up action. # If set to 0, the above check will be disabled. If "running_deleted_instance # _action" is set to "log" or "reap", a value greater than 0 must be set. # # Possible values: # # * Any positive integer in seconds enables the option. # * 0: Disables the option. # * 1800: Default value. # # Related options: # # * running_deleted_instance_action # (integer value) #running_deleted_instance_poll_interval = 1800 # # Time interval in seconds to wait for the instances that have # been marked as deleted in the database to be eligible for cleanup. # # Possible values: # # * Any positive integer in seconds (default is 0). # # Related options: # # * "running_deleted_instance_action" # (integer value) #running_deleted_instance_timeout = 0 # # The number of times to attempt to reap an instance's files. # # This option specifies the maximum number of retry attempts # that can be made. # # Possible values: # # * Any positive integer defines how many attempts are made. # * Any value <=0 means no delete attempts occur, but you should use # ``instance_delete_interval`` to disable the delete attempts.
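#
# Example (illustrative only): retrying failed instance file deletes every
# five minutes and giving up after five attempts could be expressed as:
#
#   instance_delete_interval = 300
#   maximum_instance_delete_attempts = 5
#
# These values happen to match the documented defaults; they are shown here
# only to illustrate how the two options work together.
#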
# # Related options: # * ``instance_delete_interval`` in interval_opts group can be used to disable # this option. # (integer value) #maximum_instance_delete_attempts = 5 # # Sets the scope of the check for unique instance names. # # The default doesn't check for unique names. If a scope for the name check is # set, a launch of a new instance or an update of an existing instance with a # duplicate name will result in an 'InstanceExists' error. The uniqueness is # case-insensitive. Setting this option can increase the usability for end # users as they don't have to distinguish among instances with the same name # by their IDs. # # Possible values: # # * '': An empty value means that no uniqueness check is done and duplicate # names are possible. # * "project": The instance name check is done only for instances within the # same project. # * "global": The instance name check is done for all instances regardless of # the project. # (string value) # Possible values: # '' - # project - # global - #osapi_compute_unique_server_name_scope = # # Enable new nova-compute services on this host automatically. # # When a new nova-compute service starts up, it gets # registered in the database as an enabled service. Sometimes it can be useful # to register new compute services in a disabled state and then enable them at a # later point in time. This option only sets this behavior for nova-compute # services; it does not auto-disable other services like nova-conductor, # nova-scheduler, nova-consoleauth, or nova-osapi_compute. # # Possible values: # # * ``True``: Each new compute service is enabled as soon as it registers # itself. # * ``False``: Compute services must be enabled via an os-services REST API call # or with the CLI with ``nova service-enable <hostname> <binary>``, otherwise # they are not ready to use. # (boolean value) #enable_new_services = true # # Template string to be used to generate instance names. # # This template controls the creation of the database name of an instance. This # is *not* the display name you enter when creating an instance (via Horizon # or CLI). For a new deployment it is advisable to change the default value # (which uses the database autoincrement) to another value which makes use # of the attributes of an instance, like ``instance-%(uuid)s``. If you # already have instances in your deployment when you change this, your # deployment will break. # # Possible values: # # * A string which either uses the instance database ID (like the # default) # * A string with a list of named database columns, for example ``%(id)d`` # or ``%(uuid)s`` or ``%(hostname)s``. # # Related options: # # * not to be confused with: ``multi_instance_display_name_template`` # (string value) #instance_name_template = instance-%08x # # Number of times to retry live-migration before failing. # # Possible values: # # * If == -1, try until out of hosts (default) # * If == 0, only try once, no retries # * Integer greater than 0 # (integer value) # Minimum value: -1 #migrate_max_retries = -1 # # Configuration drive format # # Configuration drive format that will contain metadata attached to the # instance when it boots. # # Possible values: # # * iso9660: A file system image standard that is widely supported across # operating systems. NOTE: Mind the libvirt bug # (https://bugs.launchpad.net/nova/+bug/1246201) - If your hypervisor # driver is libvirt, and you want live migration to work without shared storage, # then use VFAT.
# * vfat: For legacy reasons, you can configure the configuration drive to # use VFAT format instead of ISO 9660. # # Related options: # # * This option is meaningful when one of the following alternatives occurs: # 1. force_config_drive option set to 'true' # 2. the REST API call to create the instance contains an enable flag for # config drive option # 3. the image used to create the instance requires a config drive, # this is defined by the img_config_drive property for that image. # * A compute node running the Hyper-V hypervisor can be configured to attach the # configuration drive as a CD drive. To attach the configuration drive as a CD # drive, set the config_drive_cdrom option in the hyperv section to true. # (string value) # Possible values: # iso9660 - # vfat - #config_drive_format = iso9660 # # Force injection to take place on a config drive # # When this option is set to true, configuration drive functionality will be # force-enabled by default; otherwise the user can still enable configuration # drives via the REST API or image metadata properties. # # Possible values: # # * True: Force the use of a configuration drive regardless of the user's input in the # REST API call. # * False: Do not force the use of a configuration drive. Config drives can still be # enabled via the REST API or image metadata properties. # # Related options: # # * Use the 'mkisofs_cmd' flag to set the path where you install the # genisoimage program. If genisoimage is in the same path as the # nova-compute service, you do not need to set this flag. # * To use configuration drive with Hyper-V, you must set the # 'mkisofs_cmd' value to the full path to an mkisofs.exe installation. # Additionally, you must set the qemu_img_cmd value in the hyperv # configuration section to the full path to a qemu-img command # installation. # (boolean value) #force_config_drive = false # # Name or path of the tool used for ISO image creation # # Use the mkisofs_cmd flag to set the path where you install the genisoimage # program. If genisoimage is on the system path, you do not need to change # the default value. # # To use configuration drive with Hyper-V, you must set the mkisofs_cmd value # to the full path to an mkisofs.exe installation. Additionally, you must set # the qemu_img_cmd value in the hyperv configuration section to the full path # to a qemu-img command installation. # # Possible values: # # * Name of the ISO image creator program, in case it is in the same directory # as the nova-compute service # * Path to ISO image creator program # # Related options: # # * This option is meaningful when config drives are enabled. # * To use configuration drive with Hyper-V, you must set the qemu_img_cmd # value in the hyperv configuration section to the full path to a qemu-img # command installation. # (string value) #mkisofs_cmd = genisoimage # DEPRECATED: The driver to use for database access (string value) # This option is deprecated for removal since 13.0.0. # Its value may be silently ignored in the future. #db_driver = nova.db # DEPRECATED: # Default flavor to use for the EC2 API only. # The Nova API does not support a default flavor. # (string value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: The EC2 API is deprecated. #default_flavor = m1.small # # The IP address which the host is using to connect to the management network. # # Possible values: # # * String with valid IP address. Default is IPv4 address of this host.
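#
# Example (illustrative only; the address is a placeholder from the
# documentation range): on a host whose management interface carries
# 192.0.2.10, the auto-detected value could be pinned explicitly with:
#
#   my_ip = 192.0.2.10
#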
# # Related options: # # * metadata_host # * my_block_storage_ip # * routing_source_ip # * vpn_ip # (string value) #my_ip = # # The IP address which is used to connect to the block storage network. # # Possible values: # # * String with valid IP address. Default is IP address of this host. # # Related options: # # * my_ip - if my_block_storage_ip is not set, then my_ip value is used. # (string value) #my_block_storage_ip = $my_ip # # Hostname, FQDN or IP address of this host. # # Used as: # # * the oslo.messaging queue name for nova-compute worker # * we use this value for the binding_host sent to neutron. This means if you # use # a neutron agent, it should have the same value for host. # * cinder host attachment information # # Must be valid within AMQP key. # # Possible values: # # * String with hostname, FQDN or IP address. Default is hostname of this host. # (string value) #host = # DEPRECATED: # This option is a list of full paths to one or more configuration files for # dhcpbridge. In most cases the default path of '/etc/nova/nova-dhcpbridge.conf' # should be sufficient, but if you have special needs for configuring # dhcpbridge, # you can change or add to this list. # # Possible values # # * A list of strings, where each string is the full path to a dhcpbridge # configuration file. # (multi valued) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dhcpbridge_flagfile = /etc/nova/nova-dhcpbridge.conf # DEPRECATED: # The location where the network configuration files will be kept. The default # is # the 'networks' directory off of the location where nova's Python module is # installed. # # Possible values # # * A string containing the full path to the desired configuration directory # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #networks_path = $state_path/networks # DEPRECATED: # This is the name of the network interface for public IP addresses. The default # is 'eth0'. # # Possible values: # # * Any string representing a network interface name # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #public_interface = eth0 # DEPRECATED: # The location of the binary nova-dhcpbridge. By default it is the binary named # 'nova-dhcpbridge' that is installed with all the other nova binaries. # # Possible values: # # * Any string representing the full path to the binary for dhcpbridge # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dhcpbridge = $bindir/nova-dhcpbridge # DEPRECATED: # The public IP address of the network host. # # This is used when creating an SNAT rule. # # Possible values: # # * Any valid IP address # # Related options: # # * ``force_snat_range`` # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #routing_source_ip = $my_ip # DEPRECATED: # The lifetime of a DHCP lease, in seconds. The default is 86400 (one day). 
# # Possible values: # # * Any positive integer value. # (integer value) # Minimum value: 1 # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dhcp_lease_time = 86400 # DEPRECATED: # Despite the singular form of the name of this option, it is actually a list of # zero or more server addresses that dnsmasq will use for DNS nameservers. If # this is not empty, dnsmasq will not read /etc/resolv.conf, but will only use # the servers specified in this option. If the option use_network_dns_servers is # True, the dns1 and dns2 servers from the network will be appended to this # list, # and will be used as DNS servers, too. # # Possible values: # # * A list of strings, where each string is either an IP address or a FQDN. # # Related options: # # * ``use_network_dns_servers`` # (multi valued) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dns_server = # DEPRECATED: # When this option is set to True, the dns1 and dns2 servers for the network # specified by the user on boot will be used for DNS, as well as any specified # in # the `dns_server` option. # # Related options: # # * ``dns_server`` # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #use_network_dns_servers = false # DEPRECATED: # This option is a list of zero or more IP address ranges in your network's DMZ # that should be accepted. # # Possible values: # # * A list of strings, each of which should be a valid CIDR. # (list value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dmz_cidr = # DEPRECATED: # This is a list of zero or more IP ranges that traffic from the # `routing_source_ip` will be SNATted to. If the list is empty, then no SNAT # rules are created. # # Possible values: # # * A list of strings, each of which should be a valid CIDR. # # Related options: # # * ``routing_source_ip`` # (multi valued) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #force_snat_range = # DEPRECATED: # The path to the custom dnsmasq configuration file, if any. # # Possible values: # # * The full path to the configuration file, or an empty string if there is no # custom dnsmasq configuration file. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dnsmasq_config_file = # DEPRECATED: # This is the class used as the ethernet device driver for linuxnet bridge # operations. The default value should be all you need for most cases, but if # you # wish to use a customized class, set this option to the full dot-separated # import path for that class. # # Possible values: # # * Any string representing a dot-separated class path that Nova can import. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. 
# Reason: # nova-network is deprecated, as are any related configuration options. #linuxnet_interface_driver = nova.network.linux_net.LinuxBridgeInterfaceDriver # DEPRECATED: # The name of the Open vSwitch bridge that is used with linuxnet when connecting # with Open vSwitch. # # Possible values: # # * Any string representing a valid bridge name. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #linuxnet_ovs_integration_bridge = br-int # # When True, when a device starts up, and upon binding floating IP addresses, # arp # messages will be sent to ensure that the arp caches on the compute hosts are # up-to-date. # # Related options: # # * ``send_arp_for_ha_count`` # (boolean value) #send_arp_for_ha = false # # When arp messages are configured to be sent, they will be sent with the count # set to the value of this option. Of course, if this is set to zero, no arp # messages will be sent. # # Possible values: # # * Any integer greater than or equal to 0 # # Related options: # # * ``send_arp_for_ha`` # (integer value) #send_arp_for_ha_count = 3 # DEPRECATED: # When set to True, only the first nic of a VM will get its default gateway from # the DHCP server. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #use_single_default_gateway = false # DEPRECATED: # One or more interfaces that bridges can forward traffic to. If any of the # items # in this list is the special keyword 'all', then all traffic will be forwarded. # # Possible values: # # * A list of zero or more interface names, or the word 'all'. # (multi valued) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #forward_bridge_interface = all # # This option determines the IP address for the network metadata API server. # # This is really the client side of the metadata host equation that allows # nova-network to find the metadata server when doing default multi-host # networking. # # Possible values: # # * Any valid IP address. The default is the address of the Nova API server. # # Related options: # # * ``metadata_port`` # (string value) #metadata_host = $my_ip # DEPRECATED: # This option determines the port used for the metadata API server. # # Related options: # # * ``metadata_host`` # (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #metadata_port = 8775 # DEPRECATED: # This expression, if defined, will select any matching iptables rules and place # them at the top when applying metadata changes to the rules. # # Possible values: # # * Any string representing a valid regular expression, or an empty string # # Related options: # # * ``iptables_bottom_regex`` # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options.
#iptables_top_regex = # DEPRECATED: # This expression, if defined, will select any matching iptables rules and place # them at the bottom when applying metadata changes to the rules. # # Possible values: # # * Any string representing a valid regular expression, or an empty string # # Related options: # # * iptables_top_regex # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #iptables_bottom_regex = # DEPRECATED: # By default, packets that do not pass the firewall are DROPped. In many cases, # though, an operator may find it more useful to change this from DROP to # REJECT, # so that the user issuing those packets may have a better idea as to what's # going on, or LOGDROP in order to record the blocked traffic before DROPping. # # Possible values: # # * A string representing an iptables chain. The default is DROP. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #iptables_drop_action = DROP # DEPRECATED: # This option represents the period of time, in seconds, that the ovs_vsctl # calls # will wait for a response from the database before timing out. A setting of 0 # means that the utility should wait forever for a response. # # Possible values: # # * Any positive integer if a limited timeout is desired, or zero if the calls # should wait forever for a response. # (integer value) # Minimum value: 0 # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ovs_vsctl_timeout = 120 # DEPRECATED: # This option is used mainly in testing to avoid calls to the underlying network # utilities. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #fake_network = false # DEPRECATED: # This option determines the number of times to retry ebtables commands before # giving up. The minimum number of retries is 1. # # Possible values: # # * Any positive integer # # Related options: # # * ``ebtables_retry_interval`` # (integer value) # Minimum value: 1 # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ebtables_exec_attempts = 3 # DEPRECATED: # This option determines the time, in seconds, that the system will sleep in # between ebtables retries. Note that each successive retry waits a multiple of # this value, so for example, if this is set to the default of 1.0 seconds, and # ebtables_exec_attempts is 4, after the first failure, the system will sleep # for # 1 * 1.0 seconds, after the second failure it will sleep 2 * 1.0 seconds, and # after the third failure it will sleep 3 * 1.0 seconds. # # Possible values: # # * Any non-negative float or integer. Setting this to zero will result in no # waiting between attempts. # # Related options: # # * ebtables_exec_attempts # (floating point value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. 
#ebtables_retry_interval = 1.0 # DEPRECATED: # Enable neutron as the backend for networking. # # Determine whether to use Neutron or Nova Network as the back end. Set to true # to use neutron. # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #use_neutron = true # # This option determines whether the network setup information is injected into # the VM before it is booted. While it was originally designed to be used only # by nova-network, it is also used by the vmware and xenapi virt drivers to # control whether network information is injected into a VM. The libvirt virt # driver also uses it, when config_drive is used to configure the network, to control # whether network information is injected into a VM. # (boolean value) #flat_injected = false # DEPRECATED: # This option determines the bridge used for simple network interfaces when no # bridge is specified in the VM creation request. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any string representing a valid network bridge, such as 'br100' # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #flat_network_bridge = # DEPRECATED: # This is the address of the DNS server for a simple network. If this option is # not specified, the default of '8.8.4.4' is used. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any valid IP address. # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #flat_network_dns = 8.8.4.4 # DEPRECATED: # This option is the name of the virtual interface of the VM on which the bridge # will be built. While it was originally designed to be used only by # nova-network, it is also used by libvirt for the bridge interface name. # # Possible values: # # * Any valid virtual interface name, such as 'eth0' # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #flat_interface = # DEPRECATED: # This is the VLAN number used for private networks. Note that when creating # the networks, if the specified number has already been assigned, nova-network # will increment this number until it finds an available VLAN. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. It also will be ignored if the configuration # option # for `network_manager` is not set to the default of # 'nova.network.manager.VlanManager'. # # Possible values: # # * Any integer between 1 and 4094. Values outside of that range will raise a # ValueError exception. # # Related options: # # * ``network_manager`` # * ``use_neutron`` # (integer value) # Minimum value: 1 # Maximum value: 4094 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future.
# Reason: # nova-network is deprecated, as are any related configuration options. #vlan_start = 100 # DEPRECATED: # This option is the name of the virtual interface of the VM on which the VLAN # bridge will be built. While it was originally designed to be used only by # nova-network, it is also used by libvirt and xenapi for the bridge interface # name. # # Please note that this setting will be ignored in nova-network if the # configuration option for `network_manager` is not set to the default of # 'nova.network.manager.VlanManager'. # # Possible values: # # * Any valid virtual interface name, such as 'eth0' # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. While # this option has an effect when using neutron, it incorrectly overrides the # value # provided by neutron and should therefore not be used. #vlan_interface = # DEPRECATED: # This option represents the number of networks to create if not explicitly # specified when the network is created. The only time this is used is if a CIDR # is specified, but an explicit network_size is not. In that case, the subnets # are created by dividing the IP address space of the CIDR by num_networks. The # resulting subnet sizes cannot be larger than the configuration option # `network_size`; in that event, they are reduced to `network_size`, and a # warning is logged. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any positive integer is technically valid, although there are practical # limits based upon available IP address space and virtual interfaces. # # Related options: # # * ``use_neutron`` # * ``network_size`` # (integer value) # Minimum value: 1 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #num_networks = 1 # DEPRECATED: # This option is no longer used since the /os-cloudpipe API was removed in the # 16.0.0 Pike release. This is the public IP address for the cloudpipe VPN # servers. It defaults to the IP address of the host. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. It also will be ignored if the configuration # option # for `network_manager` is not set to the default of # 'nova.network.manager.VlanManager'. # # Possible values: # # * Any valid IP address. The default is ``$my_ip``, the IP address of the VM. # # Related options: # # * ``network_manager`` # * ``use_neutron`` # * ``vpn_start`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #vpn_ip = $my_ip # DEPRECATED: # This is the port number to use as the first VPN port for private networks. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. It also will be ignored if the configuration # option # for `network_manager` is not set to the default of # 'nova.network.manager.VlanManager', or if you specify a value for the 'vpn_start' # parameter when creating a network. # # Possible values: # # * Any integer representing a valid port number. The default is 1000.
# # Related options: # # * ``use_neutron`` # * ``vpn_ip`` # * ``network_manager`` # (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #vpn_start = 1000 # DEPRECATED: # This option determines the number of addresses in each private subnet. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any positive integer that is less than or equal to the available network # size. Note that if you are creating multiple networks, they must all fit in # the available IP address space. The default is 256. # # Related options: # # * ``use_neutron`` # * ``num_networks`` # (integer value) # Minimum value: 1 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #network_size = 256 # DEPRECATED: # This option determines the fixed IPv6 address block when creating a network. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any valid IPv6 CIDR # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #fixed_range_v6 = fd00::/48 # DEPRECATED: # This is the default IPv4 gateway. It is used only in the testing suite. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any valid IP address. # # Related options: # # * ``use_neutron`` # * ``gateway_v6`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #gateway = # DEPRECATED: # This is the default IPv6 gateway. It is used only in the testing suite. # # Please note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Possible values: # # * Any valid IP address. # # Related options: # # * ``use_neutron`` # * ``gateway`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #gateway_v6 = # DEPRECATED: # This option represents the number of IP addresses to reserve at the top of the # address range for VPN clients. It also will be ignored if the configuration # option for `network_manager` is not set to the default of # 'nova.network.manager.VlanManager'. # # Possible values: # # * Any integer, 0 or greater. # # Related options: # # * ``use_neutron`` # * ``network_manager`` # (integer value) # Minimum value: 0 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #cnt_vpn_clients = 0 # DEPRECATED: # This is the number of seconds to wait before disassociating a deallocated # fixed # IP address. This is only used with the nova-network service, and has no effect # when using neutron for networking. 
# # Possible values: # # * Any integer, zero or greater. # # Related options: # # * ``use_neutron`` # (integer value) # Minimum value: 0 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #fixed_ip_disassociate_timeout = 600 # DEPRECATED: # This option determines how many times nova-network will attempt to create a # unique MAC address before giving up and raising a # `VirtualInterfaceMacAddressException` error. # # Possible values: # # * Any positive integer. The default is 5. # # Related options: # # * ``use_neutron`` # (integer value) # Minimum value: 1 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #create_unique_mac_address_attempts = 5 # DEPRECATED: # Determines whether unused gateway devices, both VLAN and bridge, are deleted # if # the network is in nova-network VLAN mode and is multi-hosted. # # Related options: # # * ``use_neutron`` # * ``vpn_ip`` # * ``fake_network`` # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #teardown_unused_network_gateway = false # DEPRECATED: # When this option is True, a call is made to release the DHCP for the instance # when that instance is terminated. # # Related options: # # * ``use_neutron`` # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #force_dhcp_release = true # DEPRECATED: # When this option is True, whenever a DNS entry must be updated, a fanout cast # message is sent to all network hosts to update their DNS entries in multi-host # mode. # # Related options: # # * ``use_neutron`` # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #update_dns_entries = false # DEPRECATED: # This option determines the time, in seconds, to wait between refreshing DNS # entries for the network. # # Possible values: # # * A positive integer # * -1 to disable updates # # Related options: # # * ``use_neutron`` # (integer value) # Minimum value: -1 # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dns_update_periodic_interval = -1 # DEPRECATED: # This option allows you to specify the domain for the DHCP server. # # Possible values: # # * Any string that is a valid domain name. # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #dhcp_domain = novalocal # DEPRECATED: # This option allows you to specify the L3 management library to be used. # # Possible values: # # * Any dot-separated string that represents the import path to an L3 networking # library. # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. 
# Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #l3_lib = nova.network.l3.LinuxNetL3 # DEPRECATED: # THIS VALUE SHOULD BE SET WHEN CREATING THE NETWORK. # # If True in multi_host mode, all compute hosts share the same dhcp address. The # same IP address used for DHCP will be added on each nova-network node which is # only visible to the VMs on the same host. # # The use of this configuration has been deprecated and may be removed in any # release after Mitaka. It is recommended that instead of relying on this # option, # an explicit value should be passed to 'create_networks()' as a keyword # argument # with the name 'share_address'. # (boolean value) # This option is deprecated for removal since 2014.2. # Its value may be silently ignored in the future. #share_dhcp_address = false # DEPRECATED: # URL for LDAP server which will store DNS entries # # Possible values: # # * A valid LDAP URL representing the server # (uri value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_url = ldap://ldap.example.com:389 # DEPRECATED: Bind user for LDAP server (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_user = uid=admin,ou=people,dc=example,dc=org # DEPRECATED: Bind user's password for LDAP server (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_password = password # DEPRECATED: # Hostmaster for LDAP DNS driver Start of Authority # # Possible values: # # * Any valid string representing LDAP DNS hostmaster. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_soa_hostmaster = hostmaster@example.org # DEPRECATED: # DNS Servers for LDAP DNS driver # # Possible values: # # * A valid URL representing a DNS server # (multi valued) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_servers = dns.example.org # DEPRECATED: # Base distinguished name for the LDAP search query # # This option helps to decide where to look up the host in LDAP. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_base_dn = ou=hosts,dc=example,dc=org # DEPRECATED: # Refresh interval (in seconds) for LDAP DNS driver Start of Authority # # The time interval a secondary/slave DNS server waits before requesting the # primary DNS server's current SOA record. If the records are different, the # secondary DNS server will request a zone transfer from the primary. # # NOTE: Lower values would cause more traffic. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options.
#ldap_dns_soa_refresh = 1800 # DEPRECATED: # Retry interval (in seconds) for LDAP DNS driver Start of Authority # # The time interval a secondary/slave DNS server should wait if an # attempt to transfer the zone failed during the previous refresh interval. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_soa_retry = 3600 # DEPRECATED: # Expiry interval (in seconds) for LDAP DNS driver Start of Authority # # The time interval a secondary/slave DNS server holds the information # before it is no longer considered authoritative. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_soa_expiry = 86400 # DEPRECATED: # Minimum interval (in seconds) for LDAP DNS driver Start of Authority # # It is the minimum time-to-live that applies to all resource records in the # zone file. This value tells other servers how long they # should keep the data in cache. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ldap_dns_soa_minimum = 7200 # DEPRECATED: # Default value for multi_host in networks. # # The nova-network service can operate in a multi-host or single-host mode. # In multi-host mode each compute node runs a copy of nova-network and the # instances on that compute node use the compute node as a gateway to the # Internet. Whereas in single-host mode, a central server runs the nova-network # service. All compute nodes forward traffic from the instances to the # cloud controller which then forwards traffic to the Internet. # # If this option is set to true, some rpc network calls will be sent directly # to the host. # # Note that this option is only used when using nova-network instead of # Neutron in your deployment. # # Related options: # # * ``use_neutron`` # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #multi_host = false # DEPRECATED: # Driver to use for network creation. # # The network driver initializes (creates bridges and so on) only when the # first VM lands on a host node. All network managers configure the # network using network drivers. The driver is not tied to any particular # network manager. # # The default Linux driver implements vlans, bridges, and iptables rules # using linux utilities. # # Note that this option is only used when using nova-network instead # of Neutron in your deployment. # # Related options: # # * ``use_neutron`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #network_driver = nova.network.linux_net # DEPRECATED: # Firewall driver to use with the ``nova-network`` service. # # This option only applies when using the ``nova-network`` service. When using # other networking services, such as Neutron, this should be set to # ``nova.virt.firewall.NoopFirewallDriver``.
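#
# Example (illustrative only): a deployment that uses Neutron for networking
# and lets Neutron enforce security groups would typically carry:
#
#   use_neutron = true
#   firewall_driver = nova.virt.firewall.NoopFirewallDriver
#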
# # Possible values: # # * ``nova.virt.firewall.IptablesFirewallDriver`` # * ``nova.virt.firewall.NoopFirewallDriver`` # * ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` # * [...] # # Related options: # # * ``use_neutron``: This must be set to ``False`` to enable ``nova-network`` # networking # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #firewall_driver = nova.virt.firewall.NoopFirewallDriver # DEPRECATED: # Determine whether to allow network traffic from the same network. # # When set to true, hosts on the same subnet are not filtered and are allowed # to pass all types of traffic between them. On a flat network, this allows # unfiltered communication between instances from all projects. With VLAN # networking, this allows access between instances within the same project. # # This option only applies when using the ``nova-network`` service. When using # other networking services, such as Neutron, security groups or other # approaches should be used. # # Possible values: # # * True: Network traffic should be allowed to pass between all instances on the # same network, regardless of their tenant and security policies # * False: Network traffic should not be allowed to pass between instances unless # it is unblocked in a security group # # Related options: # # * ``use_neutron``: This must be set to ``False`` to enable ``nova-network`` # networking # * ``firewall_driver``: This must be set to # ``nova.virt.libvirt.firewall.IptablesFirewallDriver`` to ensure the # libvirt firewall driver is enabled. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #allow_same_net_traffic = true # DEPRECATED: # Default pool for floating IPs. # # This option specifies the default floating IP pool for allocating floating # IPs. # # While allocating a floating ip, users can optionally pass in the name of the # pool they want to allocate from, otherwise it will be pulled from the # default pool. # # If this option is not set, then 'nova' is used as the default floating pool. # # Possible values: # # * Any string representing a floating IP pool name # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # This option was used for two purposes: to set the floating IP pool name for # nova-network and to do the same for neutron. nova-network is deprecated, as # are # any related configuration options. Users of neutron, meanwhile, should use the # 'default_floating_pool' option in the '[neutron]' group. #default_floating_pool = nova # DEPRECATED: # Autoassigning floating IP to VM # # When set to True, a floating IP is automatically allocated and associated # with the VM upon creation. # # Related options: # # * use_neutron: this option only works with nova-network. # (boolean value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #auto_assign_floating_ip = false # DEPRECATED: # Full class name for the DNS Manager for floating IPs. # # This option specifies the class of the driver that provides functionality # to manage DNS entries associated with floating IPs.
# # When a user adds a DNS entry for a specified domain to a floating IP, # nova will add a DNS entry using the specified floating DNS driver. # When a floating IP is deallocated, its DNS entry will automatically be # deleted. # # Possible values: # # * Full Python path to the class to be used # # Related options: # # * use_neutron: this option only works with nova-network. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #floating_ip_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver # DEPRECATED: # Full class name for the DNS Manager for instance IPs. # # This option specifies the class of the driver that provides functionality # to manage DNS entries for instances. # # On instance creation, nova will add DNS entries for the instance name and # id, using the specified instance DNS driver and domain. On instance deletion, # nova will remove the DNS entries. # # Possible values: # # * Full Python path to the class to be used # # Related options: # # * use_neutron: this option only works with nova-network. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #instance_dns_manager = nova.network.noop_dns_driver.NoopDNSDriver # DEPRECATED: # If specified, Nova checks if the availability_zone of every instance matches # what the database says the availability_zone should be for the specified # dns_domain. # # Related options: # # * use_neutron: this option only works with nova-network. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #instance_dns_domain = # DEPRECATED: # Assign IPv6 and IPv4 addresses when creating instances. # # Related options: # # * use_neutron: this only works with nova-network. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #use_ipv6 = false # DEPRECATED: # Abstracts out IPv6 address generation to pluggable backends. # # nova-network can be put into dual-stack mode, so that it uses # both IPv4 and IPv6 addresses. In dual-stack mode, by default, instances # acquire IPv6 global unicast addresses with the help of the stateless address # auto-configuration mechanism. # # Related options: # # * use_neutron: this option only works with nova-network. # * use_ipv6: this option only works if ipv6 is enabled for nova-network. # (string value) # Possible values: # rfc2462 - # account_identifier - # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #ipv6_backend = rfc2462 # DEPRECATED: # This option is used to enable or disable quota checking for tenant networks. # # Related options: # # * quota_networks # (boolean value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: # CRUD operations on tenant networks are only available when using nova-network # and nova-network is itself deprecated.
#enable_network_quota = false # DEPRECATED: # This option controls the number of private networks that can be created per # project (or per tenant). # # Related options: # # * enable_network_quota # (integer value) # Minimum value: 0 # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: # CRUD operations on tenant networks are only available when using nova-network # and nova-network is itself deprecated. #quota_networks = 3 # # Filename that will be used for storing websocket frames received # and sent by a proxy service (like VNC, spice, serial) running on this host. # If this is not set, no recording will be done. # (string value) #record = # Run as a background process. (boolean value) #daemon = false # Disallow non-encrypted connections. (boolean value) #ssl_only = false # Set to True if source host is addressed with IPv6. (boolean value) #source_is_ipv6 = false # Path to SSL certificate file. (string value) #cert = self.pem # SSL key file (if separate from cert). (string value) #key = # # Path to directory with content which will be served by a web server. # (string value) #web = /usr/share/spice-html5 # # The directory where the Nova python modules are installed. # # This directory is used to store template files for networking and remote # console access. It is also the default path for other config options which # need to persist Nova internal data. It is very unlikely that you need to # change this option from its default value. # # Possible values: # # * The full path to a directory. # # Related options: # # * ``state_path`` # (string value) #pybasedir = /build/nova-wLnpHi/nova-17.0.13 # # The directory where the Nova binaries are installed. # # This option is only relevant if the networking capabilities from Nova are # used (see services below). Nova's networking capabilities are targeted to # be fully replaced by Neutron in the future. It is very unlikely that you need # to change this option from its default value. # # Possible values: # # * The full path to a directory. # (string value) #bindir = /usr/local/bin # # The top-level directory for maintaining Nova's state. # # This directory is used to store Nova's internal state. It is used by a # variety of other config options which derive from this. In some scenarios # (for example migrations) it makes sense to use a storage location which is # shared between multiple compute hosts (for example via NFS). Unless the # option ``instances_path`` gets overwritten, this directory can grow very # large. # # Possible values: # # * The full path to a directory. Defaults to value provided in ``pybasedir``. # (string value) #state_path = $pybasedir # # Number of seconds indicating how frequently the state of services on a # given hypervisor is reported. Nova needs to know this to determine the # overall health of the deployment. # # Related Options: # # * service_down_time # report_interval should be less than service_down_time. If service_down_time # is less than report_interval, services will routinely be considered down, # because they report in too rarely. # (integer value) #report_interval = 10 # # Maximum time in seconds since last check-in for up service # # Each compute node periodically updates their database status based on the # specified report interval. If the compute node hasn't updated the status # for more than service_down_time, then the compute node is considered down. 
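#
# Example (illustrative only; these are the documented defaults): with
# services reporting in every 10 seconds, a node is only marked down after
# missing several consecutive reports:
#
#   report_interval = 10
#   service_down_time = 60
#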
# # Related Options: # # * report_interval (service_down_time should not be less than report_interval) # (integer value) #service_down_time = 60 # # Enable periodic tasks. # # If set to true, this option allows services to periodically run tasks # on the manager. # # In case of running multiple schedulers or conductors you may want to run # periodic tasks on only one host - in this case disable this option for all # hosts but one. # (boolean value) #periodic_enable = true # # Number of seconds to randomly delay when starting the periodic task # scheduler to reduce stampeding. # # When compute workers are restarted in unison across a cluster, # they all end up running the periodic tasks at the same time # causing problems for the external services. To mitigate this # behavior, periodic_fuzzy_delay option allows you to introduce a # random initial delay when starting the periodic task scheduler. # # Possible Values: # # * Any positive integer (in seconds) # * 0 : disable the random delay # (integer value) # Minimum value: 0 #periodic_fuzzy_delay = 60 # List of APIs to be enabled by default. (list value) #enabled_apis = osapi_compute,metadata # # List of APIs with enabled SSL. # # Nova provides SSL support for the API servers. enabled_ssl_apis option # allows configuring the SSL support. # (list value) #enabled_ssl_apis = # # IP address on which the OpenStack API will listen. # # The OpenStack API service listens on this IP address for incoming # requests. # (string value) #osapi_compute_listen = 0.0.0.0 # # Port on which the OpenStack API will listen. # # The OpenStack API service listens on this port number for incoming # requests. # (port value) # Minimum value: 0 # Maximum value: 65535 #osapi_compute_listen_port = 8774 # # Number of workers for OpenStack API service. The default will be the number # of CPUs available. # # OpenStack API services can be configured to run as multi-process (workers). # This overcomes the problem of reduction in throughput when API request # concurrency increases. OpenStack API service will run in the specified # number of processes. # # Possible Values: # # * Any positive integer # * None (default value) # (integer value) # Minimum value: 1 #osapi_compute_workers = # # IP address on which the metadata API will listen. # # The metadata API service listens on this IP address for incoming # requests. # (string value) #metadata_listen = 0.0.0.0 # # Port on which the metadata API will listen. # # The metadata API service listens on this port number for incoming # requests. # (port value) # Minimum value: 0 # Maximum value: 65535 #metadata_listen_port = 8775 # # Number of workers for metadata service. If not specified the number of # available CPUs will be used. # # The metadata service can be configured to run as multi-process (workers). # This overcomes the problem of reduction in throughput when API request # concurrency increases. The metadata service will run in the specified # number of processes. # # Possible Values: # # * Any positive integer # * None (default value) # (integer value) # Minimum value: 1 #metadata_workers = # Full class name for the Manager for network (string value) # Possible values: # nova.network.manager.FlatManager - # nova.network.manager.FlatDHCPManager - # nova.network.manager.VlanManager - #network_manager = nova.network.manager.VlanManager # # This option specifies the driver to be used for the servicegroup service. # # ServiceGroup API in nova enables checking status of a compute node. 
When a # compute worker running the nova-compute daemon starts, it calls the join API # to join the compute group. Services like nova scheduler can query the # ServiceGroup API to check if a node is alive. Internally, the ServiceGroup # client driver automatically updates the compute worker status. There are # multiple backend implementations for this service: Database ServiceGroup # driver # and Memcache ServiceGroup driver. # # Possible Values: # # * db : Database ServiceGroup driver # * mc : Memcache ServiceGroup driver # # Related Options: # # * service_down_time (maximum time since last check-in for up service) # (string value) # Possible values: # db - # mc - #servicegroup_driver = db # # From oslo.log # # If set to true, the logging level will be set to DEBUG instead of the default # INFO level. (boolean value) # Note: This option can be changed without restarting. #debug = false # The name of a logging configuration file. This file is appended to any # existing logging configuration files. For details about logging configuration # files, see the Python logging module documentation. Note that when logging # configuration files are used then all logging configuration is set in the # configuration file and other logging configuration options are ignored (for # example, logging_context_format_string). (string value) # Note: This option can be changed without restarting. # Deprecated group/name - [DEFAULT]/log_config #log_config_append = # Defines the format string for %%(asctime)s in log records. Default: # %(default)s . This option is ignored if log_config_append is set. (string # value) #log_date_format = %Y-%m-%d %H:%M:%S # (Optional) Name of log file to send logging output to. If no default is set, # logging will go to stderr as defined by use_stderr. This option is ignored if # log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logfile #log_file = # (Optional) The base directory used for relative log_file paths. This option # is ignored if log_config_append is set. (string value) # Deprecated group/name - [DEFAULT]/logdir #log_dir = # Uses logging handler designed to watch file system. When log file is moved or # removed this handler will open a new log file with specified path # instantaneously. It makes sense only if log_file option is specified and Linux # platform is used. This option is ignored if log_config_append is set. (boolean # value) #watch_log_file = false # Use syslog for logging. Existing syslog format is DEPRECATED and will be # changed later to honor RFC5424. This option is ignored if log_config_append is # set. (boolean value) #use_syslog = false # Enable journald for logging. If running in a systemd environment you may wish # to enable journal support. Doing so will use the journal native protocol which # includes structured metadata in addition to log messages.This option is # ignored if log_config_append is set. (boolean value) #use_journal = false # Syslog facility to receive log lines. This option is ignored if # log_config_append is set. (string value) #syslog_log_facility = LOG_USER # Use JSON formatting for logging. This option is ignored if log_config_append # is set. (boolean value) #use_json = false # Log output to standard error. This option is ignored if log_config_append is # set. (boolean value) #use_stderr = false # Format string to use for log messages with context. 
(string value) #logging_context_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(request_id)s %(user_identity)s] %(instance)s%(message)s # Format string to use for log messages when context is undefined. (string # value) #logging_default_format_string = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s # Additional data to append to log message when logging level for the message is # DEBUG. (string value) #logging_debug_format_suffix = %(funcName)s %(pathname)s:%(lineno)d # Prefix each line of exception output with this format. (string value) #logging_exception_prefix = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s # Defines the format string for %(user_identity)s that is used in # logging_context_format_string. (string value) #logging_user_identity_format = %(user)s %(tenant)s %(domain)s %(user_domain)s %(project_domain)s # List of package logging levels in logger=LEVEL pairs. This option is ignored # if log_config_append is set. (list value) #default_log_levels = amqp=WARN,amqplib=WARN,boto=WARN,qpid=WARN,sqlalchemy=WARN,suds=INFO,oslo.messaging=INFO,oslo_messaging=INFO,iso8601=WARN,requests.packages.urllib3.connectionpool=WARN,urllib3.connectionpool=WARN,websocket=WARN,requests.packages.urllib3.util.retry=WARN,urllib3.util.retry=WARN,keystonemiddleware=WARN,routes.middleware=WARN,stevedore=WARN,taskflow=WARN,keystoneauth=WARN,oslo.cache=INFO,dogpile.core.dogpile=INFO # Enables or disables publication of error events. (boolean value) #publish_errors = false # The format for an instance that is passed with the log message. (string value) #instance_format = "[instance: %(uuid)s] " # The format for an instance UUID that is passed with the log message. (string # value) #instance_uuid_format = "[instance: %(uuid)s] " # Interval, number of seconds, of log rate limiting. (integer value) #rate_limit_interval = 0 # Maximum number of logged messages per rate_limit_interval. (integer value) #rate_limit_burst = 0 # Log level name used by rate limiting: CRITICAL, ERROR, INFO, WARNING, DEBUG or # empty string. Logs with level greater or equal to rate_limit_except_level are # not filtered. An empty string means that all levels are filtered. (string # value) #rate_limit_except_level = CRITICAL # Enables or disables fatal status of deprecations. (boolean value) #fatal_deprecations = false # # From oslo.messaging # # Size of RPC connection pool. (integer value) #rpc_conn_pool_size = 30 # The pool size limit for connections expiration policy (integer value) #conn_pool_min_size = 2 # The time-to-live in sec of idle connections in the pool (integer value) #conn_pool_ttl = 1200 # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. # The "host" option should point or resolve to this address. (string value) #rpc_zmq_bind_address = * # MatchMaker driver. (string value) # Possible values: # redis - # sentinel - # dummy - #rpc_zmq_matchmaker = redis # Number of ZeroMQ contexts, defaults to 1. (integer value) #rpc_zmq_contexts = 1 # Maximum number of ingress messages to locally buffer per topic. Default is # unlimited. (integer value) #rpc_zmq_topic_backlog = # Directory for holding IPC sockets. (string value) #rpc_zmq_ipc_dir = /var/run/openstack # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match # "host" option, if running Nova. (string value) #rpc_zmq_host = localhost # Number of seconds to wait before all pending messages will be sent after # closing a socket. 
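#
# Illustrative example (not from this deployment): sending logs to a directory
# and raising the level of a single library while debugging. Note that setting
# default_log_levels replaces the whole default list, so any package not named
# here falls back to the global level. The values shown are placeholders.
#
#debug = true
#log_dir = /var/log/nova
#default_log_levels = amqp=WARN,amqplib=WARN,oslo.messaging=DEBUG,iso8601=WARN
#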
The default value of -1 specifies an infinite linger period. # The value of 0 specifies no linger period. Pending messages shall be discarded # immediately when the socket is closed. Positive values specify an upper bound # for the linger period. (integer value) # Deprecated group/name - [DEFAULT]/rpc_cast_timeout #zmq_linger = -1 # The default number of seconds that poll should wait. Poll raises timeout # exception when timeout expired. (integer value) #rpc_poll_timeout = 1 # Expiration timeout in seconds of a name service record about existing target ( # < 0 means no timeout). (integer value) #zmq_target_expire = 300 # Update period in seconds of a name service record about existing target. # (integer value) #zmq_target_update = 180 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean # value) #use_pub_sub = false # Use ROUTER remote proxy. (boolean value) #use_router_proxy = false # This option makes direct connections dynamic or static. It makes sense only # with use_router_proxy=False which means to use direct connections for direct # message types (ignored otherwise). (boolean value) #use_dynamic_connections = false # How many additional connections to a host will be made for failover reasons. # This option is actual only in dynamic connections mode. (integer value) #zmq_failover_connections = 2 # Minimal port number for random ports range. (port value) # Minimum value: 0 # Maximum value: 65535 #rpc_zmq_min_port = 49153 # Maximal port number for random ports range. (integer value) # Minimum value: 1 # Maximum value: 65536 #rpc_zmq_max_port = 65536 # Number of retries to find free port number before fail with ZMQBindError. # (integer value) #rpc_zmq_bind_port_retries = 100 # Default serialization mechanism for serializing/deserializing # outgoing/incoming messages (string value) # Possible values: # json - # msgpack - #rpc_zmq_serialization = json # This option configures round-robin mode in zmq socket. True means not keeping # a queue when server side disconnects. False means to keep queue and messages # even if server is disconnected, when the server appears we send all # accumulated messages to it. (boolean value) #zmq_immediate = true # Enable/disable TCP keepalive (KA) mechanism. The default value of -1 (or any # other negative value) means to skip any overrides and leave it to OS default; # 0 and 1 (or any other positive value) mean to disable and enable the option # respectively. (integer value) #zmq_tcp_keepalive = -1 # The duration between two keepalive transmissions in idle condition. The unit # is platform dependent, for example, seconds in Linux, milliseconds in Windows # etc. The default value of -1 (or any other negative value and 0) means to skip # any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_idle = -1 # The number of retransmissions to be carried out before declaring that remote # end is not available. The default value of -1 (or any other negative value and # 0) means to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_cnt = -1 # The duration between two successive keepalive retransmissions, if # acknowledgement to the previous keepalive transmission is not received. The # unit is platform dependent, for example, seconds in Linux, milliseconds in # Windows etc. The default value of -1 (or any other negative value and 0) means # to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_intvl = -1 # Maximum number of (green) threads to work concurrently. 
(integer value) #rpc_thread_pool_size = 100 # Expiration timeout in seconds of a sent/received message after which it is not # tracked anymore by a client/server. (integer value) #rpc_message_ttl = 300 # Wait for message acknowledgements from receivers. This mechanism works only # via proxy without PUB/SUB. (boolean value) #rpc_use_acks = false # Number of seconds to wait for an ack from a cast/call. After each retry # attempt this timeout is multiplied by some specified multiplier. (integer # value) #rpc_ack_timeout_base = 15 # Number to multiply base ack timeout by after each retry attempt. (integer # value) #rpc_ack_timeout_multiplier = 2 # Default number of message sending attempts in case of any problems occurred: # positive value N means at most N retries, 0 means no retries, None or -1 (or # any other negative values) mean to retry forever. This option is used only if # acknowledgments are enabled. (integer value) #rpc_retry_attempts = 3 # List of publisher hosts SubConsumer can subscribe on. This option has higher # priority then the default publishers list taken from the matchmaker. (list # value) #subscribe_on = # Size of executor thread pool when executor is threading or eventlet. (integer # value) # Deprecated group/name - [DEFAULT]/rpc_thread_pool_size #executor_thread_pool_size = 64 # Seconds to wait for a response from a call. (integer value) #rpc_response_timeout = 60 # The network address and optional user credentials for connecting to the # messaging backend, in URL format. The expected format is: # # driver://[user:pass@]host:port[,[userN:passN@]hostN:portN]/virtual_host?query # # Example: rabbit://rabbitmq:password at 127.0.0.1:5672// # # For full details on the fields in the URL see the documentation of # oslo_messaging.TransportURL at # https://docs.openstack.org/oslo.messaging/latest/reference/transport.html # (string value) #transport_url = # DEPRECATED: The messaging driver to use, defaults to rabbit. Other drivers # include amqp and zmq. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rpc_backend = rabbit # The default exchange under which topics are scoped. May be overridden by an # exchange name specified in the transport_url option. (string value) #control_exchange = openstack # # From oslo.service.periodic_task # # Some periodic tasks can be run in a separate process. Should we run them here? # (boolean value) #run_external_periodic_tasks = true # # From oslo.service.service # # Enable eventlet backdoor. Acceptable values are 0, , and :, # where 0 results in listening on a random tcp port number; results in # listening on the specified port number (and not enabling backdoor if that port # is in use); and : results in listening on the smallest unused port # number within the specified range of port numbers. The chosen port is # displayed in the service's log file. (string value) #backdoor_port = # Enable eventlet backdoor, using the provided path as a unix socket that can # receive connections. This option is mutually exclusive with 'backdoor_port' in # that only one should be provided. If both are provided then the existence of # this option overrides the usage of that option. (string value) #backdoor_socket = # Enables or disables logging values of all registered options when starting a # service (at DEBUG level). (boolean value) #log_options = true # Specify a timeout after which a gracefully shutdown server will exit. Zero # value means endless wait. 
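#
# Illustrative example (not from this deployment): a transport_url for a
# two-node RabbitMQ cluster. The "user:pass" part is joined to each host with
# "@"; hosts and credentials below are placeholders.
#
#transport_url = rabbit://openstack:SECRET@rabbit1.example.com:5672,openstack:SECRET@rabbit2.example.com:5672/
#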
(integer value) #graceful_shutdown_timeout = 60 [api] # # Options under this group are used to define Nova API. #pavlos auth_strategy = keystone # # From nova.conf # # # This determines the strategy to use for authentication: keystone or noauth2. # 'noauth2' is designed for testing only, as it does no actual credential # checking. 'noauth2' provides administrative credentials only if 'admin' is # specified as the username. # (string value) # Possible values: # keystone - # noauth2 - #auth_strategy = keystone # # When True, the 'X-Forwarded-For' header is treated as the canonical remote # address. When False (the default), the 'remote_address' header is used. # # You should only enable this if you have an HTML sanitizing proxy. # (boolean value) #use_forwarded_for = false # # When gathering the existing metadata for a config drive, the EC2-style # metadata is returned for all versions that don't appear in this option. # As of the Liberty release, the available versions are: # # * 1.0 # * 2007-01-19 # * 2007-03-01 # * 2007-08-29 # * 2007-10-10 # * 2007-12-15 # * 2008-02-01 # * 2008-09-01 # * 2009-04-04 # # The option is in the format of a single string, with each version separated # by a space. # # Possible values: # # * Any string that represents zero or more versions, separated by spaces. # (string value) #config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 # # A list of vendordata providers. # # vendordata providers are how deployers can provide metadata via configdrive # and metadata that is specific to their deployment. There are currently two # supported providers: StaticJSON and DynamicJSON. # # StaticJSON reads a JSON file configured by the flag vendordata_jsonfile_path # and places the JSON from that file into vendor_data.json and # vendor_data2.json. # # DynamicJSON is configured via the vendordata_dynamic_targets flag, which is # documented separately. For each of the endpoints specified in that flag, a # section is added to the vendor_data2.json. # # For more information on the requirements for implementing a vendordata # dynamic endpoint, please see the vendordata.rst file in the nova developer # reference. # # Possible values: # # * A list of vendordata providers, with StaticJSON and DynamicJSON being # current options. # # Related options: # # * vendordata_dynamic_targets # * vendordata_dynamic_ssl_certfile # * vendordata_dynamic_connect_timeout # * vendordata_dynamic_read_timeout # * vendordata_dynamic_failure_fatal # (list value) #vendordata_providers = StaticJSON # # A list of targets for the dynamic vendordata provider. These targets are of # the form @. # # The dynamic vendordata provider collects metadata by contacting external REST # services and querying them for information about the instance. This behaviour # is documented in the vendordata.rst file in the nova developer reference. # (list value) #vendordata_dynamic_targets = # # Path to an optional certificate file or CA bundle to verify dynamic # vendordata REST services ssl certificates against. # # Possible values: # # * An empty string, or a path to a valid certificate file # # Related options: # # * vendordata_providers # * vendordata_dynamic_targets # * vendordata_dynamic_connect_timeout # * vendordata_dynamic_read_timeout # * vendordata_dynamic_failure_fatal # (string value) #vendordata_dynamic_ssl_certfile = # # Maximum wait time for an external REST service to connect. 
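#
# Illustrative example (not from this deployment): enabling the DynamicJSON
# vendordata provider next to StaticJSON. Each dynamic target is written as
# name@url; the target name, endpoint and timeouts below are placeholders.
#
#vendordata_providers = StaticJSON,DynamicJSON
#vendordata_dynamic_targets = join@http://vendordata.example.com:8090/
#vendordata_dynamic_connect_timeout = 5
#vendordata_dynamic_read_timeout = 5
#vendordata_jsonfile_path = /etc/nova/vendor_data.json
#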
# # Possible values: # # * Any integer with a value greater than three (the TCP packet retransmission # timeout). Note that instance start may be blocked during this wait time, # so this value should be kept small. # # Related options: # # * vendordata_providers # * vendordata_dynamic_targets # * vendordata_dynamic_ssl_certfile # * vendordata_dynamic_read_timeout # * vendordata_dynamic_failure_fatal # (integer value) # Minimum value: 3 #vendordata_dynamic_connect_timeout = 5 # # Maximum wait time for an external REST service to return data once connected. # # Possible values: # # * Any integer. Note that instance start is blocked during this wait time, # so this value should be kept small. # # Related options: # # * vendordata_providers # * vendordata_dynamic_targets # * vendordata_dynamic_ssl_certfile # * vendordata_dynamic_connect_timeout # * vendordata_dynamic_failure_fatal # (integer value) # Minimum value: 0 #vendordata_dynamic_read_timeout = 5 # # Should failures to fetch dynamic vendordata be fatal to instance boot? # # Related options: # # * vendordata_providers # * vendordata_dynamic_targets # * vendordata_dynamic_ssl_certfile # * vendordata_dynamic_connect_timeout # * vendordata_dynamic_read_timeout # (boolean value) #vendordata_dynamic_failure_fatal = false # # This option is the time (in seconds) to cache metadata. When set to 0, # metadata caching is disabled entirely; this is generally not recommended for # performance reasons. Increasing this setting should improve response times # of the metadata API when under heavy load. Higher values may increase memory # usage, and result in longer times for host metadata changes to take effect. # (integer value) # Minimum value: 0 #metadata_cache_expiration = 15 # # Cloud providers may store custom data in vendor data file that will then be # available to the instances via the metadata service, and to the rendering of # config-drive. The default class for this, JsonFileVendorData, loads this # information from a JSON file, whose path is configured by this option. If # there is no path set by this option, the class returns an empty dictionary. # # Possible values: # # * Any string representing the path to the data file, or an empty string # (default). # (string value) #vendordata_jsonfile_path = # # As a query can potentially return many thousands of items, you can limit the # maximum number of items in a single response by setting this option. # (integer value) # Minimum value: 0 # Deprecated group/name - [DEFAULT]/osapi_max_limit #max_limit = 1000 # # This string is prepended to the normal URL that is returned in links to the # OpenStack Compute API. If it is empty (the default), the URLs are returned # unchanged. # # Possible values: # # * Any string, including an empty string (the default). # (string value) # Deprecated group/name - [DEFAULT]/osapi_compute_link_prefix #compute_link_prefix = # # This string is prepended to the normal URL that is returned in links to # Glance resources. If it is empty (the default), the URLs are returned # unchanged. # # Possible values: # # * Any string, including an empty string (the default). # (string value) # Deprecated group/name - [DEFAULT]/osapi_glance_link_prefix #glance_link_prefix = # DEPRECATED: # Operators can turn off the ability for a user to take snapshots of their # instances by setting this option to False. When disabled, any attempt to # take a snapshot will result in a HTTP 400 response ("Bad Request"). # (boolean value) # This option is deprecated for removal since 16.0.0. 
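#
# Illustrative example (not from this deployment): returning API links through
# a TLS-terminating proxy while keeping the default paging limit. The URLs are
# placeholders.
#
#compute_link_prefix = https://compute.example.com
#glance_link_prefix = https://image.example.com
#max_limit = 1000
#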
# Its value may be silently ignored in the future. # Reason: This option disables the createImage server action API in a non- # discoverable way and is thus a barrier to interoperability. Also, it is not # used for other APIs that create snapshots like shelve or createBackup. # Disabling snapshots should be done via policy if so desired. #allow_instance_snapshots = true # DEPRECATED: # This option is a list of all instance states for which network address # information should not be returned from the API. # # Possible values: # # A list of strings, where each string is a valid VM state, as defined in # nova/compute/vm_states.py. As of the Newton release, they are: # # * "active" # * "building" # * "paused" # * "suspended" # * "stopped" # * "rescued" # * "resized" # * "soft-delete" # * "deleted" # * "error" # * "shelved" # * "shelved_offloaded" # (list value) # Deprecated group/name - [DEFAULT]/osapi_hide_server_address_states # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. # Reason: This option hide the server address in server representation for # configured server states. Which makes GET server API controlled by this config # options. Due to this config options, user would not be able to discover the # API behavior on different clouds which leads to the interop issue. #hide_server_address_states = building # The full path to the fping binary. (string value) #fping_path = /usr/sbin/fping # # When True, the TenantNetworkController will query the Neutron API to get the # default networks to use. # # Related options: # # * neutron_default_tenant_id # (boolean value) #use_neutron_default_nets = false # # Tenant ID for getting the default network from Neutron API (also referred in # some places as the 'project ID') to use. # # Related options: # # * use_neutron_default_nets # (string value) #neutron_default_tenant_id = default # # Enables returning of the instance password by the relevant server API calls # such as create, rebuild, evacuate, or rescue. If the hypervisor does not # support password injection, then the password returned will not be correct, # so if your hypervisor does not support password injection, set this to False. # (boolean value) #enable_instance_password = true [api_database] connection = sqlite:////var/lib/nova/nova_api.sqlite # # The *Nova API Database* is a separate database which is used for information # which is used across *cells*. This database is mandatory since the Mitaka # release (13.0.0). # # From nova.conf # # The SQLAlchemy connection string to use to connect to the database. (string # value) #connection = # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set by # the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. (integer value) # Deprecated group/name - [api_database]/idle_timeout #connection_recycle_time = 3600 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0 # indicates no limit. 
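#
# Illustrative example (not from this deployment): in a production install the
# API database normally points at MySQL/MariaDB rather than the SQLite file
# configured above; host and credentials below are placeholders.
#
#connection = mysql+pymysql://nova:NOVA_DBPASS@controller.example.com/nova_api
#max_pool_size = 10
#max_overflow = 20
#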
(integer value) #max_pool_size = # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) #max_overflow = # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) #connection_debug = 0 # Add Python stack traces to SQL as comment strings. (boolean value) #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. (integer value) #pool_timeout = [barbican] # # From nova.conf # # Use this endpoint to connect to Barbican, for example: # "http://localhost:9311/" (string value) #barbican_endpoint = # Version of the Barbican API, for example: "v1" (string value) #barbican_api_version = # Use this endpoint to connect to Keystone (string value) # Deprecated group/name - [key_manager]/auth_url #auth_endpoint = http://localhost/identity/v3 # Number of seconds to wait before retrying poll for key creation completion # (integer value) #retry_delay = 1 # Number of times to retry poll for key creation completion (integer value) #number_of_retries = 60 # Specifies if insecure TLS (https) requests. If False, the server's certificate # will not be validated (boolean value) #verify_ssl = true [cache] # # From nova.conf # # Prefix for building the configuration dictionary for the cache region. This # should not need to be changed unless there is another dogpile.cache region # with the same configuration name. (string value) #config_prefix = cache.oslo # Default TTL, in seconds, for any cached item in the dogpile.cache region. This # applies to any cached method that doesn't have an explicit cache expiration # time defined for it. (integer value) #expiration_time = 600 # Cache backend module. For eventlet-based or environments with hundreds of # threaded servers, Memcache with pooling (oslo_cache.memcache_pool) is # recommended. For environments with less than 100 threaded servers, Memcached # (dogpile.cache.memcached) or Redis (dogpile.cache.redis) is recommended. Test # environments with a single instance of the server can use the # dogpile.cache.memory backend. (string value) # Possible values: # oslo_cache.memcache_pool - # oslo_cache.dict - # oslo_cache.mongo - # oslo_cache.etcd3gw - # dogpile.cache.memcached - # dogpile.cache.pylibmc - # dogpile.cache.bmemcached - # dogpile.cache.dbm - # dogpile.cache.redis - # dogpile.cache.memory - # dogpile.cache.memory_pickle - # dogpile.cache.null - #backend = dogpile.cache.null # Arguments supplied to the backend module. Specify this option once per # argument to be passed to the dogpile.cache backend. Example format: # ":". (multi valued) #backend_argument = # Proxy classes to import that will affect the way the dogpile.cache backend # functions. See the dogpile.cache documentation on changing-backend-behavior. # (list value) #proxies = # Global toggle for caching. (boolean value) #enabled = false # Extra debugging from the cache backend (cache keys, get/set/delete/etc calls). # This is only really useful if you need to see the specific cache-backend # get/set/delete calls with the keys/values. Typically this should be left set # to false. (boolean value) #debug_cache_backend = false # Memcache servers in the format of "host:port". (dogpile.cache.memcache and # oslo_cache.memcache_pool backends only). 
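#
# Illustrative example (not from this deployment): enabling the pooled
# memcached backend instead of the null backend; the server address is a
# placeholder.
#
#enabled = true
#backend = oslo_cache.memcache_pool
#memcache_servers = controller.example.com:11211
#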
(list value) #memcache_servers = localhost:11211 # Number of seconds memcached server is considered dead before it is tried # again. (dogpile.cache.memcache and oslo_cache.memcache_pool backends only). # (integer value) #memcache_dead_retry = 300 # Timeout in seconds for every call to a server. (dogpile.cache.memcache and # oslo_cache.memcache_pool backends only). (integer value) #memcache_socket_timeout = 3 # Max total number of open connections to every memcached server. # (oslo_cache.memcache_pool backend only). (integer value) #memcache_pool_maxsize = 10 # Number of seconds a connection to memcached is held unused in the pool before # it is closed. (oslo_cache.memcache_pool backend only). (integer value) #memcache_pool_unused_timeout = 60 # Number of seconds that an operation will wait to get a memcache client # connection. (integer value) #memcache_pool_connection_get_timeout = 10 [cells] enable = False # # DEPRECATED: Cells options allow you to use cells v1 functionality in an # OpenStack deployment. # # Note that the options in this group are only for cells v1 functionality, which # is considered experimental and not recommended for new deployments. Cells v1 # is being replaced with cells v2, which starting in the 15.0.0 Ocata release is # required and all Nova deployments will be at least a cells v2 cell of one. # # # From nova.conf # # DEPRECATED: # Enable cell v1 functionality. # # Note that cells v1 is considered experimental and not recommended for new # Nova deployments. Cells v1 is being replaced by cells v2 which starting in # the 15.0.0 Ocata release, all Nova deployments are at least a cells v2 cell # of one. Setting this option, or any other options in the [cells] group, is # not required for cells v2. # # When this functionality is enabled, it lets you to scale an OpenStack # Compute cloud in a more distributed fashion without having to use # complicated technologies like database and message queue clustering. # Cells are configured as a tree. The top-level cell should have a host # that runs a nova-api service, but no nova-compute services. Each # child cell should run all of the typical nova-* services in a regular # Compute cloud except for nova-api. You can think of cells as a normal # Compute deployment in that each cell has its own database server and # message queue broker. # # Related options: # # * name: A unique cell name must be given when this functionality # is enabled. # * cell_type: Cell type should be defined for all cells. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #enable = false # DEPRECATED: # Name of the current cell. # # This value must be unique for each cell. Name of a cell is used as # its id, leaving this option unset or setting the same name for # two or more cells may cause unexpected behaviour. # # Related options: # # * enabled: This option is meaningful only when cells service # is enabled # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #name = nova # DEPRECATED: # Cell capabilities. # # List of arbitrary key=value pairs defining capabilities of the # current cell to be sent to the parent cells. These capabilities # are intended to be used in cells scheduler filters/weighers. 
# # Possible values: # # * key=value pairs list for example; # ``hypervisor=xenserver;kvm,os=linux;windows`` # (list value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #capabilities = hypervisor=xenserver;kvm,os=linux;windows # DEPRECATED: # Call timeout. # # Cell messaging module waits for response(s) to be put into the # eventlet queue. This option defines the seconds waited for # response from a call to a cell. # # Possible values: # # * An integer, corresponding to the interval time in seconds. # (integer value) # Minimum value: 0 # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #call_timeout = 60 # DEPRECATED: # Reserve percentage # # Percentage of cell capacity to hold in reserve, so the minimum # amount of free resource is considered to be; # # min_free = total * (reserve_percent / 100.0) # # This option affects both memory and disk utilization. # # The primary purpose of this reserve is to ensure some space is # available for users who want to resize their instance to be larger. # Note that currently once the capacity expands into this reserve # space this option is ignored. # # Possible values: # # * An integer or float, corresponding to the percentage of cell capacity to # be held in reserve. # (floating point value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #reserve_percent = 10.0 # DEPRECATED: # Type of cell. # # When cells feature is enabled the hosts in the OpenStack Compute # cloud are partitioned into groups. Cells are configured as a tree. # The top-level cell's cell_type must be set to ``api``. All other # cells are defined as a ``compute cell`` by default. # # Related option: # # * quota_driver: Disable quota checking for the child cells. # (nova.quota.NoopQuotaDriver) # (string value) # Possible values: # api - # compute - # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #cell_type = compute # DEPRECATED: # Mute child interval. # # Number of seconds after which a lack of capability and capacity # update the child cell is to be treated as a mute cell. Then the # child cell will be weighed as recommend highly that it be skipped. # # Possible values: # # * An integer, corresponding to the interval time in seconds. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #mute_child_interval = 300 # DEPRECATED: # Bandwidth update interval. # # Seconds between bandwidth usage cache updates for cells. # # Possible values: # # * An integer, corresponding to the interval time in seconds. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #bandwidth_update_interval = 600 # DEPRECATED: # Instance update sync database limit. # # Number of instances to pull from the database at one time for # a sync. If there are more instances to update the results will # be paged through. # # Possible values: # # * An integer, corresponding to a number of instances. 
# (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #instance_update_sync_database_limit = 100 # DEPRECATED: # Mute weight multiplier. # # Multiplier used to weigh mute children. Mute children cells are # recommended to be skipped so their weight is multiplied by this # negative value. # # Possible values: # # * Negative numeric number # (floating point value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #mute_weight_multiplier = -10000.0 # DEPRECATED: # Ram weight multiplier. # # Multiplier used for weighing ram. Negative numbers indicate that # Compute should stack VMs on one host instead of spreading out new # VMs to more hosts in the cell. # # Possible values: # # * Numeric multiplier # (floating point value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #ram_weight_multiplier = 10.0 # DEPRECATED: # Offset weight multiplier # # Multiplier used to weigh offset weigher. Cells with higher # weight_offsets in the DB will be preferred. The weight_offset # is a property of a cell stored in the database. It can be used # by a deployer to have scheduling decisions favor or disfavor # cells based on the setting. # # Possible values: # # * Numeric multiplier # (floating point value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #offset_weight_multiplier = 1.0 # DEPRECATED: # Instance updated at threshold # # Number of seconds after an instance was updated or deleted to # continue to update cells. This option lets cells manager to only # attempt to sync instances that have been updated recently. # i.e., a threshold of 3600 means to only update instances that # have modified in the last hour. # # Possible values: # # * Threshold in seconds # # Related options: # # * This value is used with the ``instance_update_num_instances`` # value in a periodic task run. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #instance_updated_at_threshold = 3600 # DEPRECATED: # Instance update num instances # # On every run of the periodic task, nova cells manager will attempt to # sync instance_updated_at_threshold number of instances. When the # manager gets the list of instances, it shuffles them so that multiple # nova-cells services do not attempt to sync the same instances in # lockstep. # # Possible values: # # * Positive integer number # # Related options: # # * This value is used with the ``instance_updated_at_threshold`` # value in a periodic task run. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #instance_update_num_instances = 1 # DEPRECATED: # Maximum hop count # # When processing a targeted message, if the local cell is not the # target, a route is defined between neighbouring cells. And the # message is processed across the whole routing path. This option # defines the maximum hop counts until reaching the target. 
# # Possible values: # # * Positive integer value # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #max_hop_count = 10 # DEPRECATED: # Cells scheduler. # # The class of the driver used by the cells scheduler. This should be # the full Python path to the class to be used. If nothing is specified # in this option, the CellsScheduler is used. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #scheduler = nova.cells.scheduler.CellsScheduler # DEPRECATED: # RPC driver queue base. # # When sending a message to another cell by JSON-ifying the message # and making an RPC cast to 'process_message', a base queue is used. # This option defines the base queue name to be used when communicating # between cells. Various topics by message type will be appended to this. # # Possible values: # # * The base queue name to be used when communicating between cells. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #rpc_driver_queue_base = cells.intercell # DEPRECATED: # Scheduler filter classes. # # Filter classes the cells scheduler should use. An entry of # "nova.cells.filters.all_filters" maps to all cells filters # included with nova. As of the Mitaka release the following # filter classes are available: # # Different cell filter: A scheduler hint of 'different_cell' # with a value of a full cell name may be specified to route # a build away from a particular cell. # # Image properties filter: Image metadata named # 'hypervisor_version_requires' with a version specification # may be specified to ensure the build goes to a cell which # has hypervisors of the required version. If either the version # requirement on the image or the hypervisor capability of the # cell is not present, this filter returns without filtering out # the cells. # # Target cell filter: A scheduler hint of 'target_cell' with a # value of a full cell name may be specified to route a build to # a particular cell. No error handling is done as there's no way # to know whether the full path is a valid. # # As an admin user, you can also add a filter that directs builds # to a particular cell. # # (list value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #scheduler_filter_classes = nova.cells.filters.all_filters # DEPRECATED: # Scheduler weight classes. # # Weigher classes the cells scheduler should use. An entry of # "nova.cells.weights.all_weighers" maps to all cell weighers # included with nova. As of the Mitaka release the following # weight classes are available: # # mute_child: Downgrades the likelihood of child cells being # chosen for scheduling requests, which haven't sent capacity # or capability updates in a while. Options include # mute_weight_multiplier (multiplier for mute children; value # should be negative). # # ram_by_instance_type: Select cells with the most RAM capacity # for the instance type being requested. Because higher weights # win, Compute returns the number of available units for the # instance type requested. The ram_weight_multiplier option defaults # to 10.0 that adds to the weight by a factor of 10. 
Use a negative # number to stack VMs on one host instead of spreading out new VMs # to more hosts in the cell. # # weight_offset: Allows modifying the database to weight a particular # cell. The highest weight will be the first cell to be scheduled for # launching an instance. When the weight_offset of a cell is set to 0, # it is unlikely to be picked but it could be picked if other cells # have a lower weight, like if they're full. And when the weight_offset # is set to a very high value (for example, '999999999999999'), it is # likely to be picked if another cell do not have a higher weight. # (list value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #scheduler_weight_classes = nova.cells.weights.all_weighers # DEPRECATED: # Scheduler retries. # # How many retries when no cells are available. Specifies how many # times the scheduler tries to launch a new instance when no cells # are available. # # Possible values: # # * Positive integer value # # Related options: # # * This value is used with the ``scheduler_retry_delay`` value # while retrying to find a suitable cell. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #scheduler_retries = 10 # DEPRECATED: # Scheduler retry delay. # # Specifies the delay (in seconds) between scheduling retries when no # cell can be found to place the new instance on. When the instance # could not be scheduled to a cell after ``scheduler_retries`` in # combination with ``scheduler_retry_delay``, then the scheduling # of the instance failed. # # Possible values: # # * Time in seconds. # # Related options: # # * This value is used with the ``scheduler_retries`` value # while retrying to find a suitable cell. # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #scheduler_retry_delay = 2 # DEPRECATED: # DB check interval. # # Cell state manager updates cell status for all cells from the DB # only after this particular interval time is passed. Otherwise cached # status are used. If this value is 0 or negative all cell status are # updated from the DB whenever a state is needed. # # Possible values: # # * Interval time, in seconds. # # (integer value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #db_check_interval = 60 # DEPRECATED: # Optional cells configuration. # # Configuration file from which to read cells configuration. If given, # overrides reading cells from the database. # # Cells store all inter-cell communication data, including user names # and passwords, in the database. Because the cells data is not updated # very frequently, use this option to specify a JSON file to store # cells data. With this configuration, the database is no longer # consulted when reloading the cells data. The file must have columns # present in the Cell model (excluding common database fields and the # id column). You must specify the queue connection information through # a transport_url field, instead of username, password, and so on. 
# # The transport_url has the following form: # rabbit://USERNAME:PASSWORD at HOSTNAME:PORT/VIRTUAL_HOST # # Possible values: # # The scheme can be either qpid or rabbit, the following sample shows # this optional configuration: # # { # "parent": { # "name": "parent", # "api_url": "http://api.example.com:8774", # "transport_url": "rabbit://rabbit.example.com", # "weight_offset": 0.0, # "weight_scale": 1.0, # "is_parent": true # }, # "cell1": { # "name": "cell1", # "api_url": "http://api.example.com:8774", # "transport_url": "rabbit://rabbit1.example.com", # "weight_offset": 0.0, # "weight_scale": 1.0, # "is_parent": false # }, # "cell2": { # "name": "cell2", # "api_url": "http://api.example.com:8774", # "transport_url": "rabbit://rabbit2.example.com", # "weight_offset": 0.0, # "weight_scale": 1.0, # "is_parent": false # } # } # # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: Cells v1 is being replaced with Cells v2. #cells_config = [cinder] # # From nova.conf # # # Info to match when looking for cinder in the service catalog. # # Possible values: # # * Format is separated values of the form: # :: # # Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens # release. # # Related options: # # * endpoint_template - Setting this option will override catalog_info # (string value) #catalog_info = volumev3:cinderv3:publicURL # # If this option is set then it will override service catalog lookup with # this template for cinder endpoint # # Possible values: # # * URL for cinder endpoint API # e.g. http://localhost:8776/v3/%(project_id)s # # Note: Nova does not support the Cinder v2 API since the Nova 17.0.0 Queens # release. # # Related options: # # * catalog_info - If endpoint_template is not set, catalog_info will be used. # (string value) #endpoint_template = # # Region name of this node. This is used when picking the URL in the service # catalog. # # Possible values: # # * Any string representing region name # (string value) #os_region_name = # # Number of times cinderclient should retry on any failed http call. # 0 means connection is attempted only once. Setting it to any positive integer # means that on failure connection is retried that many times e.g. setting it # to 3 means total attempts to connect will be 4. # # Possible values: # # * Any integer value. 0 means connection is attempted only once # (integer value) # Minimum value: 0 #http_retries = 3 # # Allow attach between instance and volume in different availability zones. # # If False, volumes attached to an instance must be in the same availability # zone in Cinder as the instance availability zone in Nova. # This also means care should be taken when booting an instance from a volume # where source is not "volume" because Nova will attempt to create a volume # using # the same availability zone as what is assigned to the instance. # If that AZ is not in Cinder (or allow_availability_zone_fallback=False in # cinder.conf), the volume create request will fail and the instance will fail # the build request. # By default there is no availability zone restriction on volume attach. # (boolean value) #cross_az_attach = true # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. 
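#
# Illustrative example (not from this deployment): the catalog entry is spelled
# service_type:service_name:endpoint_type; the region below is a placeholder,
# and cross_az_attach is simply left at its default.
#
#catalog_info = volumev3:cinderv3:publicURL
#os_region_name = RegionOne
#cross_az_attach = true
#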
(boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [cinder]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string # value) #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. # (string value) #default_domain_name = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [cinder]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # Tenant ID (string value) #tenant_id = # Tenant Name (string value) #tenant_name = [compute] # # From nova.conf # # # Enables reporting of build failures to the scheduler. # # Any nonzero value will enable sending build failure statistics to the # scheduler for use by the BuildFailureWeigher. # # Possible values: # # * Any positive integer enables reporting build failures. # * Zero to disable reporting build failures. # # Related options: # # * [filter_scheduler]/build_failure_weight_multiplier # # (integer value) #consecutive_build_service_disable_threshold = 10 # # Interval for updating nova-compute-side cache of the compute node resource # provider's aggregates and traits info. # # This option specifies the number of seconds between attempts to update a # provider's aggregates and traits information in the local cache of the compute # node. # # Possible values: # # * Any positive integer in seconds. # (integer value) # Minimum value: 1 #resource_provider_association_refresh = 300 # # Determine if the source compute host should wait for a ``network-vif-plugged`` # event from the (neutron) networking service before starting the actual # transfer # of the guest to the destination compute host. # # If you set this option the same on all of your compute hosts, which you should # do if you use the same networking backend universally, you do not have to # worry about this. # # Before starting the transfer of the guest, some setup occurs on the # destination # compute host, including plugging virtual interfaces. Depending on the # networking backend **on the destination host**, a ``network-vif-plugged`` # event may be triggered and then received on the source compute host and the # source compute can wait for that event to ensure networking is set up on the # destination host before starting the guest transfer in the hypervisor. # # By default, this is False for two reasons: # # 1. Backward compatibility: deployments should test this out and ensure it # works # for them before enabling it. # # 2. 
The compute service cannot reliably determine which types of virtual # interfaces (``port.binding:vif_type``) will send ``network-vif-plugged`` # events without an accompanying port ``binding:host_id`` change. # Open vSwitch and linuxbridge should be OK, but OpenDaylight is at least # one known backend that will not currently work in this case, see bug # https://launchpad.net/bugs/1755890 for more details. # # Possible values: # # * True: wait for ``network-vif-plugged`` events before starting guest transfer # * False: do not wait for ``network-vif-plugged`` events before starting guest # transfer (this is how things have always worked before this option # was introduced) # # Related options: # # * [DEFAULT]/vif_plugging_is_fatal: if ``live_migration_wait_for_vif_plug`` is # True and ``vif_plugging_timeout`` is greater than 0, and a timeout is # reached, the live migration process will fail with an error but the guest # transfer will not have started to the destination host # * [DEFAULT]/vif_plugging_timeout: if ``live_migration_wait_for_vif_plug`` is # True, this controls the amount of time to wait before timing out and either # failing if ``vif_plugging_is_fatal`` is True, or simply continuing with the # live migration # (boolean value) #live_migration_wait_for_vif_plug = false [conductor] # # Options under this group are used to define Conductor's communication, # which manager should be act as a proxy between computes and database, # and finally, how many worker processes will be used. # # From nova.conf # # DEPRECATED: # Topic exchange name on which conductor nodes listen. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # There is no need to let users choose the RPC topic for all services - there # is little gain from this. Furthermore, it makes it really easy to break Nova # by using this option. #topic = conductor # # Number of workers for OpenStack Conductor service. The default will be the # number of CPUs available. # (integer value) #workers = [console] # # Options under this group allow to tune the configuration of the console proxy # service. # # Note: in configuration of every compute is a ``console_host`` option, # which allows to select the console proxy service to connect to. # # From nova.conf # # # Adds list of allowed origins to the console websocket proxy to allow # connections from other origin hostnames. # Websocket proxy matches the host header with the origin header to # prevent cross-site requests. This list specifies if any there are # values other than host are allowed in the origin header. # # Possible values: # # * A list where each element is an allowed origin hostnames, else an empty list # (list value) # Deprecated group/name - [DEFAULT]/console_allowed_origins #allowed_origins = [consoleauth] # # From nova.conf # # # The lifetime of a console auth token (in seconds). # # A console auth token is used in authorizing console access for a user. # Once the auth token time to live count has elapsed, the token is # considered expired. Expired tokens are then deleted. # (integer value) # Minimum value: 0 # Deprecated group/name - [DEFAULT]/console_token_ttl #token_ttl = 600 [cors] # # From oslo.middleware # # Indicate whether this resource may be shared with the domain received in the # requests "origin" header. Format: "://[:]", no trailing # slash. 
Example: https://horizon.example.com (list value) #allowed_origin = # Indicate that the actual request can include user credentials (boolean value) #allow_credentials = true # Indicate which headers are safe to expose to the API. Defaults to HTTP Simple # Headers. (list value) #expose_headers = X-Auth-Token,X-Openstack-Request-Id,X-Subject-Token,X-Service-Token # Maximum cache age of CORS preflight requests. (integer value) #max_age = 3600 # Indicate which methods can be used during the actual request. (list value) #allow_methods = GET,PUT,POST,DELETE,PATCH # Indicate which header field names may be used during the actual request. (list # value) #allow_headers = X-Auth-Token,X-Openstack-Request-Id,X-Identity-Status,X-Roles,X-Service-Catalog,X-User-Id,X-Tenant-Id [crypto] # # From nova.conf # # # Filename of root CA (Certificate Authority). This is a container format # and includes root certificates. # # Possible values: # # * Any file name containing root CA, cacert.pem is default # # Related options: # # * ca_path # (string value) #ca_file = cacert.pem # # Filename of a private key. # # Related options: # # * keys_path # (string value) #key_file = private/cakey.pem # # Filename of root Certificate Revocation List (CRL). This is a list of # certificates that have been revoked, and therefore, entities presenting # those (revoked) certificates should no longer be trusted. # # Related options: # # * ca_path # (string value) #crl_file = crl.pem # # Directory path where keys are located. # # Related options: # # * key_file # (string value) #keys_path = $state_path/keys # # Directory path where root CA is located. # # Related options: # # * ca_file # (string value) #ca_path = $state_path/CA # Option to enable/disable use of CA for each project. (boolean value) #use_project_ca = false # # Subject for certificate for users, %s for # project, user, timestamp # (string value) #user_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=%.16s-%.16s-%s # # Subject for certificate for projects, %s for # project, timestamp # (string value) #project_cert_subject = /C=US/ST=California/O=OpenStack/OU=NovaDev/CN=project-ca-%.16s-%s [database] connection = sqlite:////var/lib/nova/nova.sqlite # # From oslo.db # # If True, SQLite uses synchronous mode. (boolean value) #sqlite_synchronous = true # The back end to use for the database. (string value) # Deprecated group/name - [DEFAULT]/db_backend #backend = sqlalchemy # The SQLAlchemy connection string to use to connect to the database. (string # value) # Deprecated group/name - [DEFAULT]/sql_connection # Deprecated group/name - [DATABASE]/sql_connection # Deprecated group/name - [sql]/connection #connection = # The SQLAlchemy connection string to use to connect to the slave database. # (string value) #slave_connection = # The SQL mode to be used for MySQL sessions. This option, including the # default, overrides any server-set SQL mode. To use whatever SQL mode is set by # the server configuration, set this to no value. Example: mysql_sql_mode= # (string value) #mysql_sql_mode = TRADITIONAL # If True, transparently enables support for handling MySQL Cluster (NDB). # (boolean value) #mysql_enable_ndb = false # Connections which have been present in the connection pool longer than this # number of seconds will be replaced with a new one the next time they are # checked out from the pool. 
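#
# Illustrative example (not from this deployment): the main nova database also
# normally lives in MySQL/MariaDB in production rather than the SQLite file
# configured above; host and credentials below are placeholders.
#
#connection = mysql+pymysql://nova:NOVA_DBPASS@controller.example.com/nova
#max_pool_size = 10
#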
(integer value) # Deprecated group/name - [DATABASE]/idle_timeout # Deprecated group/name - [database]/idle_timeout # Deprecated group/name - [DEFAULT]/sql_idle_timeout # Deprecated group/name - [DATABASE]/sql_idle_timeout # Deprecated group/name - [sql]/idle_timeout #connection_recycle_time = 3600 # Minimum number of SQL connections to keep open in a pool. (integer value) # Deprecated group/name - [DEFAULT]/sql_min_pool_size # Deprecated group/name - [DATABASE]/sql_min_pool_size #min_pool_size = 1 # Maximum number of SQL connections to keep open in a pool. Setting a value of 0 # indicates no limit. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_pool_size # Deprecated group/name - [DATABASE]/sql_max_pool_size #max_pool_size = 5 # Maximum number of database connection retries during startup. Set to -1 to # specify an infinite retry count. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_retries # Deprecated group/name - [DATABASE]/sql_max_retries #max_retries = 10 # Interval between retries of opening a SQL connection. (integer value) # Deprecated group/name - [DEFAULT]/sql_retry_interval # Deprecated group/name - [DATABASE]/reconnect_interval #retry_interval = 10 # If set, use this value for max_overflow with SQLAlchemy. (integer value) # Deprecated group/name - [DEFAULT]/sql_max_overflow # Deprecated group/name - [DATABASE]/sqlalchemy_max_overflow #max_overflow = 50 # Verbosity of SQL debugging information: 0=None, 100=Everything. (integer # value) # Minimum value: 0 # Maximum value: 100 # Deprecated group/name - [DEFAULT]/sql_connection_debug #connection_debug = 0 # Add Python stack traces to SQL as comment strings. (boolean value) # Deprecated group/name - [DEFAULT]/sql_connection_trace #connection_trace = false # If set, use this value for pool_timeout with SQLAlchemy. (integer value) # Deprecated group/name - [DATABASE]/sqlalchemy_pool_timeout #pool_timeout = # Enable the experimental use of database reconnect on connection lost. (boolean # value) #use_db_reconnect = false # Seconds between retries of a database transaction. (integer value) #db_retry_interval = 1 # If True, increases the interval between retries of a database operation up to # db_max_retry_interval. (boolean value) #db_inc_retry_interval = true # If db_inc_retry_interval is set, the maximum seconds between retries of a # database operation. (integer value) #db_max_retry_interval = 10 # Maximum retries in case of connection error or deadlock error before error is # raised. Set to -1 to specify an infinite retry count. (integer value) #db_max_retries = 20 # # From oslo.db.concurrency # # Enable the experimental use of thread pooling for all DB API calls (boolean # value) # Deprecated group/name - [DEFAULT]/dbapi_use_tpool #use_tpool = false [devices] # # From nova.conf # # # A list of the vGPU types enabled in the compute node. # # Some pGPUs (e.g. NVIDIA GRID K1) support different vGPU types. User can use # this option to specify a list of enabled vGPU types that may be assigned to a # guest instance. But please note that Nova only supports a single type in the # Queens release. If more than one vGPU type is specified (as a comma-separated # list), only the first one will be used. An example is as the following: # [devices] # enabled_vgpu_types = GRID K100,Intel GVT-g,MxGPU.2,nvidia-11 # (list value) #enabled_vgpu_types = [ephemeral_storage_encryption] # # From nova.conf # # # Enables/disables LVM ephemeral storage encryption. # (boolean value) #enabled = false # # Cipher-mode string to be used. 
# # The cipher and mode to be used to encrypt ephemeral storage. The set of # cipher-mode combinations available depends on kernel support. According # to the dm-crypt documentation, the cipher is expected to be in the format: # "--". # # Possible values: # # * Any crypto option listed in ``/proc/crypto``. # (string value) #cipher = aes-xts-plain64 # # Encryption key length in bits. # # The bit length of the encryption key to be used to encrypt ephemeral storage. # In XTS mode only half of the bits are used for encryption key. # (integer value) # Minimum value: 1 #key_size = 512 [filter_scheduler] # # From nova.conf # # # Size of subset of best hosts selected by scheduler. # # New instances will be scheduled on a host chosen randomly from a subset of the # N best hosts, where N is the value set by this option. # # Setting this to a value greater than 1 will reduce the chance that multiple # scheduler processes handling similar requests will select the same host, # creating a potential race condition. By selecting a host randomly from the N # hosts that best fit the request, the chance of a conflict is reduced. However, # the higher you set this value, the less optimal the chosen host may be for a # given request. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * An integer, where the integer corresponds to the size of a host subset. Any # integer is valid, although any value less than 1 will be treated as 1 # (integer value) # Minimum value: 1 # Deprecated group/name - [DEFAULT]/scheduler_host_subset_size #host_subset_size = 1 # # The number of instances that can be actively performing IO on a host. # # Instances performing IO includes those in the following states: build, resize, # snapshot, migrate, rescue, unshelve. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'io_ops_filter' filter is enabled. # # Possible values: # # * An integer, where the integer corresponds to the max number of instances # that can be actively performing IO on any given host. # (integer value) #max_io_ops_per_host = 8 # # Maximum number of instances that be active on a host. # # If you need to limit the number of instances on any given host, set this # option # to the maximum number of instances you want to allow. The num_instances_filter # will reject any host that has at least as many instances as this option's # value. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'num_instances_filter' filter is enabled. # # Possible values: # # * An integer, where the integer corresponds to the max instances that can be # scheduled on a host. # (integer value) # Minimum value: 1 #max_instances_per_host = 50 # # Enable querying of individual hosts for instance information. # # The scheduler may need information about the instances on a host in order to # evaluate its filters and weighers. The most common need for this information # is # for the (anti-)affinity filters, which need to choose a host based on the # instances already running on a host. # # If the configured filters and weighers do not need this information, disabling # this option will improve performance. 
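#
# To make the trade-offs above concrete: a cloud that runs several scheduler
# workers and does not rely on the (anti-)affinity filters might combine a
# slightly larger host subset with host tracking turned off, for example
# (values are purely illustrative, not recommendations):
#
#   host_subset_size = 3
#   track_instance_changes = false
#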
It may also be disabled when the # tracking # overhead proves too heavy, although this will cause classes requiring host # usage data to query the database on each request instead. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # NOTE: In a multi-cell (v2) setup where the cell MQ is separated from the # top-level, computes cannot directly communicate with the scheduler. Thus, # this option cannot be enabled in that scenario. See also the # [workarounds]/disable_group_policy_check_upcall option. # (boolean value) # Deprecated group/name - [DEFAULT]/scheduler_tracks_instance_changes #track_instance_changes = true # # Filters that the scheduler can use. # # An unordered list of the filter classes the nova scheduler may apply. Only # the # filters specified in the 'enabled_filters' option will be used, but # any filter appearing in that option must also be included in this list. # # By default, this is set to all filters that are included with nova. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * A list of zero or more strings, where each string corresponds to the name of # a filter that may be used for selecting a host # # Related options: # # * enabled_filters # (multi valued) # Deprecated group/name - [DEFAULT]/scheduler_available_filters #available_filters = nova.scheduler.filters.all_filters # # Filters that the scheduler will use. # # An ordered list of filter class names that will be used for filtering # hosts. These filters will be applied in the order they are listed so # place your most restrictive filters first to make the filtering process more # efficient. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * A list of zero or more strings, where each string corresponds to the name of # a filter to be used for selecting a host # # Related options: # # * All of the filters in this option *must* be present in the # 'scheduler_available_filters' option, or a SchedulerHostFilterNotFound # exception will be raised. # (list value) # Deprecated group/name - [DEFAULT]/scheduler_default_filters #enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter # DEPRECATED: # Filters used for filtering baremetal hosts. # # Filters are applied in order, so place your most restrictive filters first to # make the filtering process more efficient. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * A list of zero or more strings, where each string corresponds to the name of # a filter to be used for selecting a baremetal host # # Related options: # # * If the 'scheduler_use_baremetal_filters' option is False, this option has # no effect. # (list value) # Deprecated group/name - [DEFAULT]/baremetal_scheduler_default_filters # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # These filters were used to overcome some of the baremetal scheduling # limitations in Nova prior to the use of the Placement API. Now scheduling will # use the custom resource class defined for each baremetal node to make its # selection. 
#baremetal_enabled_filters = RetryFilter,AvailabilityZoneFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ExactRamFilter,ExactDiskFilter,ExactCoreFilter # DEPRECATED: # Enable baremetal filters. # # Set this to True to tell the nova scheduler that it should use the filters # specified in the 'baremetal_enabled_filters' option. If you are not # scheduling baremetal nodes, leave this at the default setting of False. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Related options: # # * If this option is set to True, then the filters specified in the # 'baremetal_enabled_filters' are used instead of the filters # specified in 'enabled_filters'. # (boolean value) # Deprecated group/name - [DEFAULT]/scheduler_use_baremetal_filters # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: # These filters were used to overcome some of the baremetal scheduling # limitations in Nova prior to the use of the Placement API. Now scheduling will # use the custom resource class defined for each baremetal node to make its # selection. #use_baremetal_filters = false # # Weighers that the scheduler will use. # # Only hosts which pass the filters are weighed. The weight for any host starts # at 0, and the weighers order these hosts by adding to or subtracting from the # weight assigned by the previous weigher. Weights may become negative. An # instance will be scheduled to one of the N most-weighted hosts, where N is # 'scheduler_host_subset_size'. # # By default, this is set to all weighers that are included with Nova. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * A list of zero or more strings, where each string corresponds to the name of # a weigher that will be used for selecting a host # (list value) # Deprecated group/name - [DEFAULT]/scheduler_weight_classes #weight_classes = nova.scheduler.weights.all_weighers # # Ram weight multipler ratio. # # This option determines how hosts with more or less available RAM are weighed. # A # positive value will result in the scheduler preferring hosts with more # available RAM, and a negative number will result in the scheduler preferring # hosts with less available RAM. Another way to look at it is that positive # values for this option will tend to spread instances across many hosts, while # negative values will tend to fill up (stack) hosts as much as possible before # scheduling to a less-used host. The absolute value, whether positive or # negative, controls how strong the RAM weigher is relative to other weighers. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'ram' weigher is enabled. # # Possible values: # # * An integer or float value, where the value corresponds to the multipler # ratio for this weigher. # (floating point value) #ram_weight_multiplier = 1.0 # # Disk weight multipler ratio. # # Multiplier used for weighing free disk space. Negative numbers mean to # stack vs spread. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'disk' weigher is enabled. 
# # Possible values: # # * An integer or float value, where the value corresponds to the multipler # ratio for this weigher. # (floating point value) #disk_weight_multiplier = 1.0 # # IO operations weight multipler ratio. # # This option determines how hosts with differing workloads are weighed. # Negative # values, such as the default, will result in the scheduler preferring hosts # with # lighter workloads whereas positive values will prefer hosts with heavier # workloads. Another way to look at it is that positive values for this option # will tend to schedule instances onto hosts that are already busy, while # negative values will tend to distribute the workload across more hosts. The # absolute value, whether positive or negative, controls how strong the io_ops # weigher is relative to other weighers. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'io_ops' weigher is enabled. # # Possible values: # # * An integer or float value, where the value corresponds to the multipler # ratio for this weigher. # (floating point value) #io_ops_weight_multiplier = -1.0 # # PCI device affinity weight multiplier. # # The PCI device affinity weighter computes a weighting based on the number of # PCI devices on the host and the number of PCI devices requested by the # instance. The ``NUMATopologyFilter`` filter must be enabled for this to have # any significance. For more information, refer to the filter documentation: # # https://docs.openstack.org/nova/latest/user/filter-scheduler.html # # Possible values: # # * A positive integer or float value, where the value corresponds to the # multiplier ratio for this weigher. # (floating point value) # Minimum value: 0 #pci_weight_multiplier = 1.0 # # Multiplier used for weighing hosts for group soft-affinity. # # Possible values: # # * An integer or float value, where the value corresponds to weight multiplier # for hosts with group soft affinity. Only a positive value are meaningful, as # negative values would make this behave as a soft anti-affinity weigher. # (floating point value) #soft_affinity_weight_multiplier = 1.0 # # Multiplier used for weighing hosts for group soft-anti-affinity. # # Possible values: # # * An integer or float value, where the value corresponds to weight multiplier # for hosts with group soft anti-affinity. Only a positive value are # meaningful, as negative values would make this behave as a soft affinity # weigher. # (floating point value) #soft_anti_affinity_weight_multiplier = 1.0 # # Multiplier used for weighing hosts that have had recent build failures. # # This option determines how much weight is placed on a compute node with # recent build failures. Build failures may indicate a failing, misconfigured, # or otherwise ailing compute node, and avoiding it during scheduling may be # beneficial. The weight is inversely proportional to the number of recent # build failures the compute node has experienced. This value should be # set to some high value to offset weight given by other enabled weighers # due to available resources. To disable weighing compute hosts by the # number of recent failures, set this to zero. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * An integer or float value, where the value corresponds to the multiplier # ratio for this weigher. 
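#
# As an illustration of the stack-versus-spread behaviour described for the
# RAM and disk weighers above, a deployment that prefers to pack instances
# onto as few hosts as possible before touching an empty one might flip both
# multipliers negative (example values only):
#
#   ram_weight_multiplier = -1.0
#   disk_weight_multiplier = -1.0
#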
# # Related options: # # * [compute]/consecutive_build_service_disable_threshold - Must be nonzero # for a compute to report data considered by this weigher. # (floating point value) #build_failure_weight_multiplier = 1000000.0 # # Enable spreading the instances between hosts with the same best weight. # # Enabling it is beneficial for cases when host_subset_size is 1 # (default), but there is a large number of hosts with same maximal weight. # This scenario is common in Ironic deployments where there are typically many # baremetal nodes with identical weights returned to the scheduler. # In such case enabling this option will reduce contention and chances for # rescheduling events. # At the same time it will make the instance packing (even in unweighed case) # less dense. # (boolean value) #shuffle_best_same_weighed_hosts = false # # The default architecture to be used when using the image properties filter. # # When using the ImagePropertiesFilter, it is possible that you want to define # a default architecture to make the user experience easier and avoid having # something like x86_64 images landing on aarch64 compute nodes because the # user did not specify the 'hw_architecture' property in Glance. # # Possible values: # # * CPU Architectures such as x86_64, aarch64, s390x. # (string value) # Possible values: # alpha - # armv6 - # armv7l - # armv7b - # aarch64 - # cris - # i686 - # ia64 - # lm32 - # m68k - # microblaze - # microblazeel - # mips - # mipsel - # mips64 - # mips64el - # openrisc - # parisc - # parisc64 - # ppc - # ppcle - # ppc64 - # ppc64le - # ppcemb - # s390 - # s390x - # sh4 - # sh4eb - # sparc - # sparc64 - # unicore32 - # x86_64 - # xtensa - # xtensaeb - #image_properties_default_architecture = # # List of UUIDs for images that can only be run on certain hosts. # # If there is a need to restrict some images to only run on certain designated # hosts, list those image UUIDs here. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. # # Possible values: # # * A list of UUID strings, where each string corresponds to the UUID of an # image # # Related options: # # * scheduler/isolated_hosts # * scheduler/restrict_isolated_hosts_to_isolated_images # (list value) #isolated_images = # # List of hosts that can only run certain images. # # If there is a need to restrict some images to only run on certain designated # hosts, list those host names here. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. # # Possible values: # # * A list of strings, where each string corresponds to the name of a host # # Related options: # # * scheduler/isolated_images # * scheduler/restrict_isolated_hosts_to_isolated_images # (list value) #isolated_hosts = # # Prevent non-isolated images from being built on isolated hosts. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'IsolatedHostsFilter' filter is enabled. Even # then, this option doesn't affect the behavior of requests for isolated images, # which will *always* be restricted to isolated hosts. 
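#
# Tying the three IsolatedHostsFilter options above together, and assuming
# that filter is enabled, a hypothetical pairing of one restricted image with
# two dedicated hosts could look like this (the UUID and host names are made
# up for illustration):
#
#   isolated_images = 11111111-2222-3333-4444-555555555555
#   isolated_hosts = isolated-host-1,isolated-host-2
#   restrict_isolated_hosts_to_isolated_images = true
#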
# # Related options: # # * scheduler/isolated_images # * scheduler/isolated_hosts # (boolean value) #restrict_isolated_hosts_to_isolated_images = true # # Image property namespace for use in the host aggregate. # # Images and hosts can be configured so that certain images can only be # scheduled # to hosts in a particular aggregate. This is done with metadata values set on # the host aggregate that are identified by beginning with the value of this # option. If the host is part of an aggregate with such a metadata key, the # image # in the request spec must have the value of that metadata in its properties in # order for the scheduler to consider the host as acceptable. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'aggregate_image_properties_isolation' filter # is # enabled. # # Possible values: # # * A string, where the string corresponds to an image property namespace # # Related options: # # * aggregate_image_properties_isolation_separator # (string value) #aggregate_image_properties_isolation_namespace = # # Separator character(s) for image property namespace and name. # # When using the aggregate_image_properties_isolation filter, the relevant # metadata keys are prefixed with the namespace defined in the # aggregate_image_properties_isolation_namespace configuration option plus a # separator. This option defines the separator to be used. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. Also note that this setting # only affects scheduling if the 'aggregate_image_properties_isolation' filter # is enabled. # # Possible values: # # * A string, where the string corresponds to an image property namespace # separator character # # Related options: # # * aggregate_image_properties_isolation_namespace # (string value) #aggregate_image_properties_isolation_separator = . [glance] # Configuration options for the Image service #pavlos api_servers = http://controller:9292 # # From nova.conf # # # List of glance api servers endpoints available to nova. # # https is used for ssl-based glance api servers. # # NOTE: The preferred mechanism for endpoint discovery is via keystoneauth1 # loading options. Only use api_servers if you need multiple endpoints and are # unable to use a load balancer for some reason. # # Possible values: # # * A list of any fully qualified url of the form # "scheme://hostname:port[/path]" # (i.e. "http://10.0.1.0:9292" or "https://my.glance.server/image"). # (list value) #api_servers = # # Enable glance operation retries. # # Specifies the number of retries when uploading / downloading # an image to / from glance. 0 means no retries. # (integer value) # Minimum value: 0 #num_retries = 0 # DEPRECATED: # List of url schemes that can be directly accessed. # # This option specifies a list of url schemes that can be downloaded # directly via the direct_url. This direct_URL can be fetched from # Image metadata which can be used by nova to get the # image more efficiently. nova-compute could benefit from this by # invoking a copy when it has access to the same file system as glance. # # Possible values: # # * [file], Empty list (default) # (list value) # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. 
# Reason: # This was originally added for the 'nova.image.download.file' FileTransfer # extension which was removed in the 16.0.0 Pike release. The # 'nova.image.download.modules' extension point is not maintained # and there is no indication of its use in production clouds. #allowed_direct_url_schemes = # # Enable image signature verification. # # nova uses the image signature metadata from glance and verifies the signature # of a signed image while downloading that image. If the image signature cannot # be verified or if the image signature metadata is either incomplete or # unavailable, then nova will not boot the image and instead will place the # instance into an error state. This provides end users with stronger assurances # of the integrity of the image data they are using to create servers. # # Related options: # # * The options in the `key_manager` group, as the key_manager is used # for the signature validation. # * Both enable_certificate_validation and default_trusted_certificate_ids # below depend on this option being enabled. # (boolean value) #verify_glance_signatures = false # DEPRECATED: # Enable certificate validation for image signature verification. # # During image signature verification nova will first verify the validity of the # image's signing certificate using the set of trusted certificates associated # with the instance. If certificate validation fails, signature verification # will not be performed and the image will be placed into an error state. This # provides end users with stronger assurances that the image data is unmodified # and trustworthy. If left disabled, image signature verification can still # occur but the end user will not have any assurance that the signing # certificate used to generate the image signature is still trustworthy. # # Related options: # # * This option only takes effect if verify_glance_signatures is enabled. # * The value of default_trusted_certificate_ids may be used when this option # is enabled. # (boolean value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. # Reason: # This option is intended to ease the transition for deployments leveraging # image signature verification. The intended state long-term is for signature # verification and certificate validation to always happen together. #enable_certificate_validation = false # # List of certificate IDs for certificates that should be trusted. # # May be used as a default list of trusted certificate IDs for certificate # validation. The value of this option will be ignored if the user provides a # list of trusted certificate IDs with an instance API request. The value of # this option will be persisted with the instance data if signature verification # and certificate validation are enabled and if the user did not provide an # alternative list. If left empty when certificate validation is enabled the # user must provide a list of trusted certificate IDs otherwise certificate # validation will fail. # # Related options: # # * The value of this option may be used if both verify_glance_signatures and # enable_certificate_validation are enabled. # (list value) #default_trusted_certificate_ids = # Enable or disable debug logging with glanceclient. (boolean value) #debug = false # PEM encoded Certificate Authority to use when verifying HTTPs connections. 
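#
# Following the note above that keystoneauth1 endpoint discovery is preferred
# and api_servers is only needed for multiple endpoints without a load
# balancer, a minimal discovery-based [glance] block might be no more than
# the following (the region name is a placeholder, not a value taken from
# this file):
#
#   [glance]
#   service_type = image
#   valid_interfaces = internal,public
#   region_name = RegionOne
#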
# (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. (boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # The default service_type for endpoint URL discovery. (string value) #service_type = image # The default service_name for endpoint URL discovery. (string value) #service_name = # List of interfaces, in order of preference, for endpoint URL. (list value) #valid_interfaces = internal,public # The default region_name for endpoint URL discovery. (string value) #region_name = # Always use this endpoint URL for requests for this client. NOTE: The # unversioned endpoint should be specified here; to request a particular API # version, use the `version`, `min-version`, and/or `max-version` options. # (string value) #endpoint_override = [guestfs] # # libguestfs is a set of tools for accessing and modifying virtual # machine (VM) disk images. You can use this for viewing and editing # files inside guests, scripting changes to VMs, monitoring disk # used/free statistics, creating guests, P2V, V2V, performing backups, # cloning VMs, building VMs, formatting disks and resizing disks. # # From nova.conf # # # Enable/disables guestfs logging. # # This configures guestfs to debug messages and push them to OpenStack # logging system. When set to True, it traces libguestfs API calls and # enable verbose debug messages. In order to use the above feature, # "libguestfs" package must be installed. # # Related options: # Since libguestfs access and modifies VM's managed by libvirt, below options # should be set to give access to those VM's. # * libvirt.inject_key # * libvirt.inject_partition # * libvirt.inject_password # (boolean value) #debug = false [healthcheck] # # From oslo.middleware # # DEPRECATED: The path to respond to healtcheck requests on. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #path = /healthcheck # Show more detailed information as part of the response (boolean value) #detailed = false # Additional backends that can perform health checks and report that information # back as part of a request. (list value) #backends = # Check the presence of a file to determine if an application is running on a # port. Used by DisableByFileHealthcheck plugin. (string value) #disable_by_file_path = # Check the presence of a file based on a port to determine if an application is # running on a port. Expects a "port:path" list of strings. Used by # DisableByFilesPortsHealthcheck plugin. (list value) #disable_by_file_paths = [hyperv] # # The hyperv feature allows you to configure the Hyper-V hypervisor # driver to be used within an OpenStack deployment. # # From nova.conf # # # Dynamic memory ratio # # Enables dynamic memory allocation (ballooning) when set to a value # greater than 1. The value expresses the ratio between the total RAM # assigned to an instance and its startup RAM amount. For example a # ratio of 2.0 for an instance with 1024MB of RAM implies 512MB of # RAM allocated at startup. # # Possible values: # # * 1.0: Disables dynamic memory allocation (Default). # * Float values greater than 1.0: Enables allocation of total implied # RAM divided by this value for startup. # (floating point value) #dynamic_memory_ratio = 1.0 # # Enable instance metrics collection # # Enables metrics collections for an instance by using Hyper-V's # metric APIs. 
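#
# Restating the ratio semantics above as a concrete, illustrative setting:
# with the ratio below, an instance whose flavor grants 1024MB of RAM is
# started with 512MB allocated and can balloon from there.
#
#   dynamic_memory_ratio = 2.0
#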
Collected data can be retrieved by other apps and # services, e.g.: Ceilometer. # (boolean value) #enable_instance_metrics_collection = false # # Instances path share # # The name of a Windows share mapped to the "instances_path" dir # and used by the resize feature to copy files to the target host. # If left blank, an administrative share (hidden network share) will # be used, looking for the same "instances_path" used locally. # # Possible values: # # * "": An administrative share will be used (Default). # * Name of a Windows share. # # Related options: # # * "instances_path": The directory which will be used if this option # here is left blank. # (string value) #instances_path_share = # # Limit CPU features # # This flag is needed to support live migration to hosts with # different CPU features and checked during instance creation # in order to limit the CPU features used by the instance. # (boolean value) #limit_cpu_features = false # # Mounted disk query retry count # # The number of times to retry checking for a mounted disk. # The query runs until the device can be found or the retry # count is reached. # # Possible values: # # * Positive integer values. Values greater than 1 is recommended # (Default: 10). # # Related options: # # * Time interval between disk mount retries is declared with # "mounted_disk_query_retry_interval" option. # (integer value) # Minimum value: 0 #mounted_disk_query_retry_count = 10 # # Mounted disk query retry interval # # Interval between checks for a mounted disk, in seconds. # # Possible values: # # * Time in seconds (Default: 5). # # Related options: # # * This option is meaningful when the mounted_disk_query_retry_count # is greater than 1. # * The retry loop runs with mounted_disk_query_retry_count and # mounted_disk_query_retry_interval configuration options. # (integer value) # Minimum value: 0 #mounted_disk_query_retry_interval = 5 # # Power state check timeframe # # The timeframe to be checked for instance power state changes. # This option is used to fetch the state of the instance from Hyper-V # through the WMI interface, within the specified timeframe. # # Possible values: # # * Timeframe in seconds (Default: 60). # (integer value) # Minimum value: 0 #power_state_check_timeframe = 60 # # Power state event polling interval # # Instance power state change event polling frequency. Sets the # listener interval for power state events to the given value. # This option enhances the internal lifecycle notifications of # instances that reboot themselves. It is unlikely that an operator # has to change this value. # # Possible values: # # * Time in seconds (Default: 2). # (integer value) # Minimum value: 0 #power_state_event_polling_interval = 2 # # qemu-img command # # qemu-img is required for some of the image related operations # like converting between different image types. You can get it # from here: (http://qemu.weilnetz.de/) or you can install the # Cloudbase OpenStack Hyper-V Compute Driver # (https://cloudbase.it/openstack-hyperv-driver/) which automatically # sets the proper path for this config option. You can either give the # full path of qemu-img.exe or set its path in the PATH environment # variable and leave this option to the default value. # # Possible values: # # * Name of the qemu-img executable, in case it is in the same # directory as the nova-compute service or its path is in the # PATH environment variable (Default). # * Path of qemu-img command (DRIVELETTER:\PATH\TO\QEMU-IMG\COMMAND). 
# # Related options: # # * If the config_drive_cdrom option is False, qemu-img will be used to # convert the ISO to a VHD, otherwise the configuration drive will # remain an ISO. To use configuration drive with Hyper-V, you must # set the mkisofs_cmd value to the full path to an mkisofs.exe # installation. # (string value) #qemu_img_cmd = qemu-img.exe # # External virtual switch name # # The Hyper-V Virtual Switch is a software-based layer-2 Ethernet # network switch that is available with the installation of the # Hyper-V server role. The switch includes programmatically managed # and extensible capabilities to connect virtual machines to both # virtual networks and the physical network. In addition, Hyper-V # Virtual Switch provides policy enforcement for security, isolation, # and service levels. The vSwitch represented by this config option # must be an external one (not internal or private). # # Possible values: # # * If not provided, the first of a list of available vswitches # is used. This list is queried using WQL. # * Virtual switch name. # (string value) #vswitch_name = # # Wait soft reboot seconds # # Number of seconds to wait for instance to shut down after soft # reboot request is made. We fall back to hard reboot if instance # does not shutdown within this window. # # Possible values: # # * Time in seconds (Default: 60). # (integer value) # Minimum value: 0 #wait_soft_reboot_seconds = 60 # # Configuration drive cdrom # # OpenStack can be configured to write instance metadata to # a configuration drive, which is then attached to the # instance before it boots. The configuration drive can be # attached as a disk drive (default) or as a CD drive. # # Possible values: # # * True: Attach the configuration drive image as a CD drive. # * False: Attach the configuration drive image as a disk drive (Default). # # Related options: # # * This option is meaningful with force_config_drive option set to 'True' # or when the REST API call to create an instance will have # '--config-drive=True' flag. # * config_drive_format option must be set to 'iso9660' in order to use # CD drive as the configuration drive image. # * To use configuration drive with Hyper-V, you must set the # mkisofs_cmd value to the full path to an mkisofs.exe installation. # Additionally, you must set the qemu_img_cmd value to the full path # to an qemu-img command installation. # * You can configure the Compute service to always create a configuration # drive by setting the force_config_drive option to 'True'. # (boolean value) #config_drive_cdrom = false # # Configuration drive inject password # # Enables setting the admin password in the configuration drive image. # # Related options: # # * This option is meaningful when used with other options that enable # configuration drive usage with Hyper-V, such as force_config_drive. # * Currently, the only accepted config_drive_format is 'iso9660'. # (boolean value) #config_drive_inject_password = false # # Volume attach retry count # # The number of times to retry attaching a volume. Volume attachment # is retried until success or the given retry count is reached. # # Possible values: # # * Positive integer values (Default: 10). # # Related options: # # * Time interval between attachment attempts is declared with # volume_attach_retry_interval option. # (integer value) # Minimum value: 0 #volume_attach_retry_count = 10 # # Volume attach retry interval # # Interval between volume attachment attempts, in seconds. # # Possible values: # # * Time in seconds (Default: 5). 
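#
# Pulling the configuration drive notes above together: attaching the config
# drive as a CD on Hyper-V means enabling config_drive_cdrom here and, as the
# text above explains, setting config_drive_format to iso9660 and full paths
# for mkisofs_cmd and qemu_img_cmd elsewhere in this file. A sketch of the
# [hyperv] side only (the path is a placeholder):
#
#   config_drive_cdrom = true
#   qemu_img_cmd = C:\qemu-img\qemu-img.exe
#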
# # Related options: # # * This options is meaningful when volume_attach_retry_count # is greater than 1. # * The retry loop runs with volume_attach_retry_count and # volume_attach_retry_interval configuration options. # (integer value) # Minimum value: 0 #volume_attach_retry_interval = 5 # # Enable RemoteFX feature # # This requires at least one DirectX 11 capable graphics adapter for # Windows / Hyper-V Server 2012 R2 or newer and RDS-Virtualization # feature has to be enabled. # # Instances with RemoteFX can be requested with the following flavor # extra specs: # # **os:resolution**. Guest VM screen resolution size. Acceptable values:: # # 1024x768, 1280x1024, 1600x1200, 1920x1200, 2560x1600, 3840x2160 # # ``3840x2160`` is only available on Windows / Hyper-V Server 2016. # # **os:monitors**. Guest VM number of monitors. Acceptable values:: # # [1, 4] - Windows / Hyper-V Server 2012 R2 # [1, 8] - Windows / Hyper-V Server 2016 # # **os:vram**. Guest VM VRAM amount. Only available on # Windows / Hyper-V Server 2016. Acceptable values:: # # 64, 128, 256, 512, 1024 # (boolean value) #enable_remotefx = false # # Use multipath connections when attaching iSCSI or FC disks. # # This requires the Multipath IO Windows feature to be enabled. MPIO must be # configured to claim such devices. # (boolean value) #use_multipath_io = false # # List of iSCSI initiators that will be used for estabilishing iSCSI sessions. # # If none are specified, the Microsoft iSCSI initiator service will choose the # initiator. # (list value) #iscsi_initiator_list = [ironic] # # Configuration options for Ironic driver (Bare Metal). # If using the Ironic driver following options must be set: # * auth_type # * auth_url # * project_name # * username # * password # * project_domain_id or project_domain_name # * user_domain_id or user_domain_name # # From nova.conf # # DEPRECATED: URL override for the Ironic API endpoint. (uri value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Endpoint lookup uses the service catalog via common keystoneauth1 # Adapter configuration options. In the current release, api_endpoint will # override this behavior, but will be ignored and/or removed in a future # release. To achieve the same result, use the endpoint_override option instead. #api_endpoint = http://ironic.example.org:6385/ # # The number of times to retry when a request conflicts. # If set to 0, only try once, no retries. # # Related options: # # * api_retry_interval # (integer value) # Minimum value: 0 #api_max_retries = 60 # # The number of seconds to wait before retrying the request. # # Related options: # # * api_max_retries # (integer value) # Minimum value: 0 #api_retry_interval = 2 # Timeout (seconds) to wait for node serial console state changed. Set to 0 to # disable timeout. (integer value) # Minimum value: 0 #serial_console_state_timeout = 10 # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. 
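#
# As a worked example of the required options listed at the start of this
# [ironic] section, a password-based service account might be wired up
# roughly as follows (the URL, account names and IRONIC_PASS are
# placeholders, not values taken from this file):
#
#   auth_type = password
#   auth_url = http://controller:5000/v3
#   project_name = service
#   project_domain_name = default
#   username = ironic
#   user_domain_name = default
#   password = IRONIC_PASS
#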
(boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [ironic]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [ironic]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # The default service_type for endpoint URL discovery. (string value) #service_type = baremetal # The default service_name for endpoint URL discovery. (string value) #service_name = # List of interfaces, in order of preference, for endpoint URL. (list value) #valid_interfaces = internal,public # The default region_name for endpoint URL discovery. (string value) #region_name = # Always use this endpoint URL for requests for this client. NOTE: The # unversioned endpoint should be specified here; to request a particular API # version, use the `version`, `min-version`, and/or `max-version` options. # (string value) # Deprecated group/name - [ironic]/api_endpoint #endpoint_override = [key_manager] # # From nova.conf # # # Fixed key returned by key manager, specified in hex. # # Possible values: # # * Empty string or a key in hex value # (string value) #fixed_key = # Specify the key manager implementation. Options are "barbican" and "vault". # Default is "barbican". Will support the values earlier set using # [key_manager]/api_class for some time. (string value) # Deprecated group/name - [key_manager]/api_class #backend = barbican # The type of authentication credential to create. Possible values are 'token', # 'password', 'keystone_token', and 'keystone_password'. Required if no context # is passed to the credential factory. (string value) #auth_type = # Token for authentication. Required for 'token' and 'keystone_token' auth_type # if no context is passed to the credential factory. (string value) #token = # Username for authentication. Required for 'password' auth_type. Optional for # the 'keystone_password' auth_type. (string value) #username = # Password for authentication. Required for 'password' and 'keystone_password' # auth_type. (string value) #password = # Use this endpoint to connect to Keystone. (string value) #auth_url = # User ID for authentication. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #user_id = # User's domain ID for authentication. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #user_domain_id = # User's domain name for authentication. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #user_domain_name = # Trust ID for trust scoping. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #trust_id = # Domain ID for domain scoping. 
Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #domain_id = # Domain name for domain scoping. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #domain_name = # Project ID for project scoping. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #project_id = # Project name for project scoping. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #project_name = # Project's domain ID for project. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #project_domain_id = # Project's domain name for project. Optional for 'keystone_token' and # 'keystone_password' auth_type. (string value) #project_domain_name = # Allow fetching a new token if the current one is going to expire. Optional for # 'keystone_token' and 'keystone_password' auth_type. (boolean value) #reauthenticate = true [keystone] # Configuration options for the identity service # # From nova.conf # # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. (boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # The default service_type for endpoint URL discovery. (string value) #service_type = identity # The default service_name for endpoint URL discovery. (string value) #service_name = # List of interfaces, in order of preference, for endpoint URL. (list value) #valid_interfaces = internal,public # The default region_name for endpoint URL discovery. (string value) #region_name = # Always use this endpoint URL for requests for this client. NOTE: The # unversioned endpoint should be specified here; to request a particular API # version, use the `version`, `min-version`, and/or `max-version` options. # (string value) #endpoint_override = [keystone_authtoken] #pavlos --start auth_url = http://controller/identity/v3 #auth_url = http://controller:5000/v3 #auth_url = http://192.168.40.184/identity memcached_servers = controller:11211 auth_type = password project_domain_name = default user_domain_name = default project_name = service username = nova password = linux #pavlos --end # # From keystonemiddleware.auth_token # # Complete "public" Identity API endpoint. This endpoint should not be an # "admin" endpoint, as it should be accessible by all end users. Unauthenticated # clients are redirected to this endpoint to authenticate. Although this # endpoint should ideally be unversioned, client support in the wild varies. If # you're using a versioned v2 endpoint here, then this should *not* be the same # endpoint the service user utilizes for validating tokens, because normal end # users may not be able to reach that endpoint. (string value) # Deprecated group/name - [keystone_authtoken]/auth_uri #www_authenticate_uri = # DEPRECATED: Complete "public" Identity API endpoint. This endpoint should not # be an "admin" endpoint, as it should be accessible by all end users. # Unauthenticated clients are redirected to this endpoint to authenticate. # Although this endpoint should ideally be unversioned, client support in the # wild varies. If you're using a versioned v2 endpoint here, then this should # *not* be the same endpoint the service user utilizes for validating tokens, # because normal end users may not be able to reach that endpoint. 
This option # is deprecated in favor of www_authenticate_uri and will be removed in the S # release. (string value) # This option is deprecated for removal since Queens. # Its value may be silently ignored in the future. # Reason: The auth_uri option is deprecated in favor of www_authenticate_uri and # will be removed in the S release. #auth_uri = # API version of the admin Identity API endpoint. (string value) #auth_version = # Do not handle authorization requests within the middleware, but delegate the # authorization decision to downstream WSGI components. (boolean value) #delay_auth_decision = false # Request timeout value for communicating with Identity API server. (integer # value) #http_connect_timeout = # How many times are we trying to reconnect when communicating with Identity API # Server. (integer value) #http_request_max_retries = 3 # Request environment key where the Swift cache object is stored. When # auth_token middleware is deployed with a Swift cache, use this option to have # the middleware share a caching backend with swift. Otherwise, use the # ``memcached_servers`` option instead. (string value) #cache = # Required if identity server requires client certificate (string value) #certfile = # Required if identity server requires client certificate (string value) #keyfile = # A PEM encoded Certificate Authority to use when verifying HTTPs connections. # Defaults to system CAs. (string value) #cafile = # Verify HTTPS connections. (boolean value) #insecure = false # The region in which the identity server can be found. (string value) #region_name = # DEPRECATED: Directory used to cache files related to PKI tokens. This option # has been deprecated in the Ocata release and will be removed in the P release. # (string value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #signing_dir = # Optionally specify a list of memcached server(s) to use for caching. If left # undefined, tokens will instead be cached in-process. (list value) # Deprecated group/name - [keystone_authtoken]/memcache_servers #memcached_servers = # In order to prevent excessive effort spent validating tokens, the middleware # caches previously-seen tokens for a configurable duration (in seconds). Set to # -1 to disable caching completely. (integer value) #token_cache_time = 300 # DEPRECATED: Determines the frequency at which the list of revoked tokens is # retrieved from the Identity service (in seconds). A high number of revocation # events combined with a low cache duration may significantly reduce # performance. Only valid for PKI tokens. This option has been deprecated in the # Ocata release and will be removed in the P release. (integer value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #revocation_cache_time = 10 # (Optional) If defined, indicate whether token data should be authenticated or # authenticated and encrypted. If MAC, token data is authenticated (with HMAC) # in the cache. If ENCRYPT, token data is encrypted and authenticated in the # cache. If the value is not one of these options or empty, auth_token will # raise an exception on initialization. (string value) # Possible values: # None - # MAC - # ENCRYPT - #memcache_security_strategy = None # (Optional, mandatory if memcache_security_strategy is defined) This string is # used for key derivation. 
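#
# Tying the caching options above back to the memcached_servers value set
# earlier in this section, authenticated-and-encrypted token caching would be
# enabled roughly like this (the secret is a placeholder):
#
#   memcache_security_strategy = ENCRYPT
#   memcache_secret_key = SOME_RANDOM_SECRET
#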
(string value) #memcache_secret_key = # (Optional) Number of seconds memcached server is considered dead before it is # tried again. (integer value) #memcache_pool_dead_retry = 300 # (Optional) Maximum total number of open connections to every memcached server. # (integer value) #memcache_pool_maxsize = 10 # (Optional) Socket timeout in seconds for communicating with a memcached # server. (integer value) #memcache_pool_socket_timeout = 3 # (Optional) Number of seconds a connection to memcached is held unused in the # pool before it is closed. (integer value) #memcache_pool_unused_timeout = 60 # (Optional) Number of seconds that an operation will wait to get a memcached # client connection from the pool. (integer value) #memcache_pool_conn_get_timeout = 10 # (Optional) Use the advanced (eventlet safe) memcached client pool. The # advanced pool will only work under python 2.x. (boolean value) #memcache_use_advanced_pool = false # (Optional) Indicate whether to set the X-Service-Catalog header. If False, # middleware will not ask for service catalog on token validation and will not # set the X-Service-Catalog header. (boolean value) #include_service_catalog = true # Used to control the use and type of token binding. Can be set to: "disabled" # to not check token binding. "permissive" (default) to validate binding # information if the bind type is of a form known to the server and ignore it if # not. "strict" like "permissive" but if the bind type is unknown the token will # be rejected. "required" any form of token binding is needed to be allowed. # Finally the name of a binding method that must be present in tokens. (string # value) #enforce_token_bind = permissive # DEPRECATED: If true, the revocation list will be checked for cached tokens. # This requires that PKI tokens are configured on the identity server. (boolean # value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #check_revocations_for_cached = false # DEPRECATED: Hash algorithms to use for hashing PKI tokens. This may be a # single algorithm or multiple. The algorithms are those supported by Python # standard hashlib.new(). The hashes will be tried in the order given, so put # the preferred one first for performance. The result of the first hash will be # stored in the cache. This will typically be set to multiple values only while # migrating from a less secure algorithm to a more secure one. Once all the old # tokens are expired this option should be set to a single value for better # performance. (list value) # This option is deprecated for removal since Ocata. # Its value may be silently ignored in the future. # Reason: PKI token format is no longer supported. #hash_algorithms = md5 # A choice of roles that must be present in a service token. Service tokens are # allowed to request that an expired token can be used and so this check should # tightly control that only actual services should be sending this token. Roles # here are applied as an ANY check so any role in this list must be present. For # backwards compatibility reasons this currently only affects the allow_expired # check. (list value) #service_token_roles = service # For backwards compatibility reasons we must let valid service tokens pass that # don't pass the service_token_roles check as valid. Setting this true will # become the default in a future release and should be enabled if possible. 
# (boolean value) #service_token_roles_required = false # Prefix to prepend at the beginning of the path. Deprecated, use identity_uri. # (string value) #auth_admin_prefix = # Host providing the admin Identity API endpoint. Deprecated, use identity_uri. # (string value) #auth_host = 127.0.0.1 # Port of the admin Identity API endpoint. Deprecated, use identity_uri. # (integer value) #auth_port = 35357 # Protocol of the admin Identity API endpoint. Deprecated, use identity_uri. # (string value) # Possible values: # http - # https - #auth_protocol = https # Complete admin Identity API endpoint. This should specify the unversioned root # endpoint e.g. https://localhost:35357/ (string value) #identity_uri = # This option is deprecated and may be removed in a future release. Single # shared secret with the Keystone configuration used for bootstrapping a # Keystone installation, or otherwise bypassing the normal authentication # process. This option should not be used, use `admin_user` and `admin_password` # instead. (string value) #admin_token = # Service username. (string value) #admin_user = # Service user password. (string value) #admin_password = # Service tenant name. (string value) #admin_tenant_name = admin # Authentication type to load (string value) # Deprecated group/name - [keystone_authtoken]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = [libvirt] # # Libvirt options allows cloud administrator to configure related # libvirt hypervisor driver to be used within an OpenStack deployment. # # Almost all of the libvirt config options are influence by ``virt_type`` config # which describes the virtualization type (or so called domain type) libvirt # should use for specific features such as live migration, snapshot. # # From nova.conf # # # The ID of the image to boot from to rescue data from a corrupted instance. # # If the rescue REST API operation doesn't provide an ID of an image to # use, the image which is referenced by this ID is used. If this # option is not set, the image from the instance is used. # # Possible values: # # * An ID of an image or nothing. If it points to an *Amazon Machine # Image* (AMI), consider to set the config options ``rescue_kernel_id`` # and ``rescue_ramdisk_id`` too. If nothing is set, the image of the instance # is used. # # Related options: # # * ``rescue_kernel_id``: If the chosen rescue image allows the separate # definition of its kernel disk, the value of this option is used, # if specified. This is the case when *Amazon*'s AMI/AKI/ARI image # format is used for the rescue image. # * ``rescue_ramdisk_id``: If the chosen rescue image allows the separate # definition of its RAM disk, the value of this option is used if, # specified. This is the case when *Amazon*'s AMI/AKI/ARI image # format is used for the rescue image. # (string value) #rescue_image_id = # # The ID of the kernel (AKI) image to use with the rescue image. # # If the chosen rescue image allows the separate definition of its kernel # disk, the value of this option is used, if specified. This is the case # when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. # # Possible values: # # * An ID of an kernel image or nothing. If nothing is specified, the kernel # disk from the instance is used if it was launched with one. # # Related options: # # * ``rescue_image_id``: If that option points to an image in *Amazon*'s # AMI/AKI/ARI image format, it's useful to use ``rescue_kernel_id`` too. 
# (string value) #rescue_kernel_id = # # The ID of the RAM disk (ARI) image to use with the rescue image. # # If the chosen rescue image allows the separate definition of its RAM # disk, the value of this option is used, if specified. This is the case # when *Amazon*'s AMI/AKI/ARI image format is used for the rescue image. # # Possible values: # # * An ID of a RAM disk image or nothing. If nothing is specified, the RAM # disk from the instance is used if it was launched with one. # # Related options: # # * ``rescue_image_id``: If that option points to an image in *Amazon*'s # AMI/AKI/ARI image format, it's useful to use ``rescue_ramdisk_id`` too. # (string value) #rescue_ramdisk_id = # # Describes the virtualization type (or so called domain type) libvirt should # use. # # The choice of this type must match the underlying virtualization strategy # you have chosen for this host. # # Possible values: # # * See the predefined set of case-sensitive values. # # Related options: # # * ``connection_uri``: depends on this # * ``disk_prefix``: depends on this # * ``cpu_mode``: depends on this # * ``cpu_model``: depends on this # (string value) # Possible values: # kvm - # lxc - # qemu - # uml - # xen - # parallels - #virt_type = kvm # # Overrides the default libvirt URI of the chosen virtualization type. # # If set, Nova will use this URI to connect to libvirt. # # Possible values: # # * An URI like ``qemu:///system`` or ``xen+ssh://oirase/`` for example. # This is only necessary if the URI differs to the commonly known URIs # for the chosen virtualization type. # # Related options: # # * ``virt_type``: Influences what is used as default value here. # (string value) #connection_uri = # # Allow the injection of an admin password for instance only at ``create`` and # ``rebuild`` process. # # There is no agent needed within the image to do this. If *libguestfs* is # available on the host, it will be used. Otherwise *nbd* is used. The file # system of the image will be mounted and the admin password, which is provided # in the REST API call will be injected as password for the root user. If no # root user is available, the instance won't be launched and an error is thrown. # Be aware that the injection is *not* possible when the instance gets launched # from a volume. # # Possible values: # # * True: Allows the injection. # * False (default): Disallows the injection. Any via the REST API provided # admin password will be silently ignored. # # Related options: # # * ``inject_partition``: That option will decide about the discovery and usage # of the file system. It also can disable the injection at all. # (boolean value) #inject_password = false # # Allow the injection of an SSH key at boot time. # # There is no agent needed within the image to do this. If *libguestfs* is # available on the host, it will be used. Otherwise *nbd* is used. The file # system of the image will be mounted and the SSH key, which is provided # in the REST API call will be injected as SSH key for the root user and # appended to the ``authorized_keys`` of that user. The SELinux context will # be set if necessary. Be aware that the injection is *not* possible when the # instance gets launched from a volume. # # This config option will enable directly modifying the instance disk and does # not affect what cloud-init may do using data from config_drive option or the # metadata service. # # Related options: # # * ``inject_partition``: That option will decide about the discovery and usage # of the file system. 
It also can disable the injection at all. # (boolean value) #inject_key = false # # Determines the way how the file system is chosen to inject data into it. # # *libguestfs* will be used a first solution to inject data. If that's not # available on the host, the image will be locally mounted on the host as a # fallback solution. If libguestfs is not able to determine the root partition # (because there are more or less than one root partition) or cannot mount the # file system it will result in an error and the instance won't be boot. # # Possible values: # # * -2 => disable the injection of data. # * -1 => find the root partition with the file system to mount with libguestfs # * 0 => The image is not partitioned # * >0 => The number of the partition to use for the injection # # Related options: # # * ``inject_key``: If this option allows the injection of a SSH key it depends # on value greater or equal to -1 for ``inject_partition``. # * ``inject_password``: If this option allows the injection of an admin # password # it depends on value greater or equal to -1 for ``inject_partition``. # * ``guestfs`` You can enable the debug log level of libguestfs with this # config option. A more verbose output will help in debugging issues. # * ``virt_type``: If you use ``lxc`` as virt_type it will be treated as a # single partition image # (integer value) # Minimum value: -2 #inject_partition = -2 # DEPRECATED: # Enable a mouse cursor within a graphical VNC or SPICE sessions. # # This will only be taken into account if the VM is fully virtualized and VNC # and/or SPICE is enabled. If the node doesn't support a graphical framebuffer, # then it is valid to set this to False. # # Related options: # * ``[vnc]enabled``: If VNC is enabled, ``use_usb_tablet`` will have an effect. # * ``[spice]enabled`` + ``[spice].agent_enabled``: If SPICE is enabled and the # spice agent is disabled, the config value of ``use_usb_tablet`` will have # an effect. # (boolean value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: This option is being replaced by the 'pointer_model' option. #use_usb_tablet = true # # The IP address or hostname to be used as the target for live migration # traffic. # # If this option is set to None, the hostname of the migration target compute # node will be used. # # This option is useful in environments where the live-migration traffic can # impact the network plane significantly. A separate network for live-migration # traffic can then use this config option and avoids the impact on the # management network. # # Possible values: # # * A valid IP address or hostname, else None. # # Related options: # # * ``live_migration_tunnelled``: The live_migration_inbound_addr value is # ignored if tunneling is enabled. # (string value) #live_migration_inbound_addr = # DEPRECATED: # Live migration target URI to use. # # Override the default libvirt live migration target URI (which is dependent # on virt_type). Any included "%s" is replaced with the migration target # hostname. 
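#
# Example (illustrative; the address is an assumption for a deployment with a
# dedicated migration network): file injection left disabled and live-migration
# traffic pinned to a separate interface:
#
#   [libvirt]
#   inject_password = false
#   inject_key = false
#   inject_partition = -2
#   live_migration_inbound_addr = 192.0.2.21
#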
# # If this option is set to None (which is the default), Nova will automatically # generate the `live_migration_uri` value based on only 4 supported `virt_type` # in following list: # # * 'kvm': 'qemu+tcp://%s/system' # * 'qemu': 'qemu+tcp://%s/system' # * 'xen': 'xenmigr://%s/system' # * 'parallels': 'parallels+tcp://%s/system' # # Related options: # # * ``live_migration_inbound_addr``: If ``live_migration_inbound_addr`` value # is not None and ``live_migration_tunnelled`` is False, the ip/hostname # address of target compute node is used instead of ``live_migration_uri`` as # the uri for live migration. # * ``live_migration_scheme``: If ``live_migration_uri`` is not set, the scheme # used for live migration is taken from ``live_migration_scheme`` instead. # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # live_migration_uri is deprecated for removal in favor of two other options # that # allow to change live migration scheme and target URI: # ``live_migration_scheme`` # and ``live_migration_inbound_addr`` respectively. #live_migration_uri = # # URI scheme used for live migration. # # Override the default libvirt live migration scheme (which is dependent on # virt_type). If this option is set to None, nova will automatically choose a # sensible default based on the hypervisor. It is not recommended that you # change # this unless you are very sure that hypervisor supports a particular scheme. # # Related options: # # * ``virt_type``: This option is meaningful only when ``virt_type`` is set to # `kvm` or `qemu`. # * ``live_migration_uri``: If ``live_migration_uri`` value is not None, the # scheme used for live migration is taken from ``live_migration_uri`` instead. # (string value) #live_migration_scheme = # # Enable tunnelled migration. # # This option enables the tunnelled migration feature, where migration data is # transported over the libvirtd connection. If enabled, we use the # VIR_MIGRATE_TUNNELLED migration flag, avoiding the need to configure # the network to allow direct hypervisor to hypervisor communication. # If False, use the native transport. If not set, Nova will choose a # sensible default based on, for example the availability of native # encryption support in the hypervisor. Enabling this option will definitely # impact performance massively. # # Note that this option is NOT compatible with use of block migration. # # Related options: # # * ``live_migration_inbound_addr``: The live_migration_inbound_addr value is # ignored if tunneling is enabled. # (boolean value) #live_migration_tunnelled = false # # Maximum bandwidth(in MiB/s) to be used during migration. # # If set to 0, the hypervisor will choose a suitable default. Some hypervisors # do not support this feature and will return an error if bandwidth is not 0. # Please refer to the libvirt documentation for further details. # (integer value) #live_migration_bandwidth = 0 # # Maximum permitted downtime, in milliseconds, for live migration # switchover. # # Will be rounded up to a minimum of 100ms. You can increase this value # if you want to allow live-migrations to complete faster, or avoid # live-migration timeout errors by allowing the guest to be paused for # longer during the live-migration switch over. # # Related options: # # * live_migration_completion_timeout # (integer value) # Minimum value: 100 #live_migration_downtime = 500 # # Number of incremental steps to reach max downtime value. 
# # Will be rounded up to a minimum of 3 steps. # (integer value) # Minimum value: 3 #live_migration_downtime_steps = 10 # # Time to wait, in seconds, between each step increase of the migration # downtime. # # Minimum delay is 3 seconds. Value is per GiB of guest RAM + disk to be # transferred, with lower bound of a minimum of 2 GiB per device. # (integer value) # Minimum value: 3 #live_migration_downtime_delay = 75 # # Time to wait, in seconds, for migration to successfully complete transferring # data before aborting the operation. # # Value is per GiB of guest RAM + disk to be transferred, with lower bound of # a minimum of 2 GiB. Should usually be larger than downtime delay * downtime # steps. Set to 0 to disable timeouts. # # Related options: # # * live_migration_downtime # * live_migration_downtime_steps # * live_migration_downtime_delay # (integer value) # Note: This option can be changed without restarting. #live_migration_completion_timeout = 800 # DEPRECATED: # Time to wait, in seconds, for migration to make forward progress in # transferring data before aborting the operation. # # Set to 0 to disable timeouts. # # This is deprecated, and now disabled by default because we have found serious # bugs in this feature that caused false live-migration timeout failures. This # feature will be removed or replaced in a future release. # (integer value) # Note: This option can be changed without restarting. # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Serious bugs found in this feature. #live_migration_progress_timeout = 0 # # This option allows nova to switch an on-going live migration to post-copy # mode, i.e., switch the active VM to the one on the destination node before the # migration is complete, therefore ensuring an upper bound on the memory that # needs to be transferred. Post-copy requires libvirt>=1.3.3 and QEMU>=2.5.0. # # When permitted, post-copy mode will be automatically activated if a # live-migration memory copy iteration does not make percentage increase of at # least 10% over the last iteration. # # The live-migration force complete API also uses post-copy when permitted. If # post-copy mode is not available, force complete falls back to pausing the VM # to ensure the live-migration operation will complete. # # When using post-copy mode, if the source and destination hosts loose network # connectivity, the VM being live-migrated will need to be rebooted. For more # details, please see the Administration guide. # # Related options: # # * live_migration_permit_auto_converge # (boolean value) #live_migration_permit_post_copy = false # # This option allows nova to start live migration with auto converge on. # # Auto converge throttles down CPU if a progress of on-going live migration # is slow. Auto converge will only be used if this flag is set to True and # post copy is not permitted or post copy is unavailable due to the version # of libvirt and QEMU in use. # # Related options: # # * live_migration_permit_post_copy # (boolean value) #live_migration_permit_auto_converge = false # # Determine the snapshot image format when sending to the image service. # # If set, this decides what format is used when sending the snapshot to the # image service. # If not set, defaults to same type as source image. 
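#
# Example (illustrative; the numbers are assumptions, not recommendations from
# this file): a deployment that prefers busy migrations to converge rather than
# time out could combine the options above like this:
#
#   [libvirt]
#   live_migration_downtime = 500
#   live_migration_downtime_steps = 10
#   live_migration_downtime_delay = 75
#   live_migration_completion_timeout = 800
#   live_migration_permit_auto_converge = true
#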
# # Possible values: # # * ``raw``: RAW disk format # * ``qcow2``: KVM default disk format # * ``vmdk``: VMWare default disk format # * ``vdi``: VirtualBox default disk format # * If not set, defaults to same type as source image. # (string value) # Possible values: # raw - # qcow2 - # vmdk - # vdi - #snapshot_image_format = # # Override the default disk prefix for the devices attached to an instance. # # If set, this is used to identify a free disk device name for a bus. # # Possible values: # # * Any prefix which will result in a valid disk device name like 'sda' or 'hda' # for example. This is only necessary if the device names differ to the # commonly known device name prefixes for a virtualization type such as: sd, # xvd, uvd, vd. # # Related options: # # * ``virt_type``: Influences which device type is used, which determines # the default disk prefix. # (string value) #disk_prefix = # Number of seconds to wait for instance to shut down after soft reboot request # is made. We fall back to hard reboot if instance does not shutdown within this # window. (integer value) #wait_soft_reboot_seconds = 120 # # Is used to set the CPU mode an instance should have. # # If virt_type="kvm|qemu", it will default to "host-model", otherwise it will # default to "none". # # Possible values: # # * ``host-model``: Clones the host CPU feature flags # * ``host-passthrough``: Use the host CPU model exactly # * ``custom``: Use a named CPU model # * ``none``: Don't set a specific CPU model. For instances with # ``virt_type`` as KVM/QEMU, the default CPU model from QEMU will be used, # which provides a basic set of CPU features that are compatible with most # hosts. # # Related options: # # * ``cpu_model``: This should be set ONLY when ``cpu_mode`` is set to # ``custom``. Otherwise, it would result in an error and the instance # launch will fail. # # (string value) # Possible values: # host-model - # host-passthrough - # custom - # none - #cpu_mode = # # Set the name of the libvirt CPU model the instance should use. # # Possible values: # # * The named CPU models listed in ``/usr/share/libvirt/cpu_map.xml`` # # Related options: # # * ``cpu_mode``: This should be set to ``custom`` ONLY when you want to # configure (via ``cpu_model``) a specific named CPU model. Otherwise, it # would result in an error and the instance launch will fail. # # * ``virt_type``: Only the virtualization types ``kvm`` and ``qemu`` use this. # (string value) #cpu_model = # # This allows specifying granular CPU feature flags when specifying CPU # models. For example, to explicitly specify the ``pcid`` # (Process-Context ID, an Intel processor feature) flag to the "IvyBridge" # virtual CPU model:: # # [libvirt] # cpu_mode = custom # cpu_model = IvyBridge # cpu_model_extra_flags = pcid # # Currently, the choice is restricted to a few options: ``pcid``, # ``ssbd``, ``virt-ssbd``, ``amd-ssbd``, and ``amd-no-ssb`` (the options # are case-insensitive, so ``PCID`` is also valid, for example). These # flags are now required to address the guest performance degradation as # a result of applying the "Meltdown" CVE fixes (``pcid``) and exposure # mitigation (``ssbd`` and related options) on affected CPU models. # # Note that when using this config attribute to set the 'PCID' and # related CPU flags, not all virtual (i.e. libvirt / QEMU) CPU models # need it: # # * The only virtual CPU models that include the 'PCID' capability are # Intel "Haswell", "Broadwell", and "Skylake" variants. 
# # * The libvirt / QEMU CPU models "Nehalem", "Westmere", "SandyBridge", # and "IvyBridge" will _not_ expose the 'PCID' capability by default, # even if the host CPUs by the same name include it. I.e. 'PCID' needs # to be explicitly specified when using the said virtual CPU models. # # For more information about ``ssbd`` and related options, # please refer to the following security updates: # # https://www.us-cert.gov/ncas/alerts/TA18-141A # # https://www.redhat.com/archives/libvir-list/2018-May/msg01562.html # # https://www.redhat.com/archives/libvir-list/2018-June/msg01111.html # # For now, the ``cpu_model_extra_flags`` config attribute is valid only in # combination with ``cpu_mode`` + ``cpu_model`` options. # # Besides ``custom``, the libvirt driver has two other CPU modes: The # default, ``host-model``, tells it to do the right thing with respect to # handling 'PCID' CPU flag for the guest -- *assuming* you are running # updated processor microcode, host and guest kernel, libvirt, and QEMU. # The other mode, ``host-passthrough``, checks if 'PCID' is available in # the hardware, and if so directly passes it through to the Nova guests. # Thus, in context of 'PCID', with either of these CPU modes # (``host-model`` or ``host-passthrough``), there is no need to use the # ``cpu_model_extra_flags``. # # Related options: # # * cpu_mode # * cpu_model # (list value) #cpu_model_extra_flags = # Location where libvirt driver will store snapshots before uploading them to # image service (string value) #snapshots_directory = $instances_path/snapshots # Location where the Xen hvmloader is kept (string value) #xen_hvmloader_path = /usr/lib/xen/boot/hvmloader # # Specific cache modes to use for different disk types. # # For example: file=directsync,block=none,network=writeback # # For local or direct-attached storage, it is recommended that you use # writethrough (default) mode, as it ensures data integrity and has acceptable # I/O performance for applications running in the guest, especially for read # operations. However, caching mode none is recommended for remote NFS storage, # because direct I/O operations (O_DIRECT) perform better than synchronous I/O # operations (with O_SYNC). Caching mode none effectively turns all guest I/O # operations into direct I/O operations on the host, which is the NFS client in # this environment. # # Possible cache modes: # # * default: Same as writethrough. # * none: With caching mode set to none, the host page cache is disabled, but # the disk write cache is enabled for the guest. In this mode, the write # performance in the guest is optimal because write operations bypass the host # page cache and go directly to the disk write cache. If the disk write cache # is battery-backed, or if the applications or storage stack in the guest # transfer data properly (either through fsync operations or file system # barriers), then data integrity can be ensured. However, because the host # page cache is disabled, the read performance in the guest would not be as # good as in the modes where the host page cache is enabled, such as # writethrough mode. Shareable disk devices, like for a multi-attachable block # storage volume, will have their cache mode set to 'none' regardless of # configuration. # * writethrough: writethrough mode is the default caching mode. With # caching set to writethrough mode, the host page cache is enabled, but the # disk write cache is disabled for the guest. 
Consequently, this caching mode # ensures data integrity even if the applications and storage stack in the # guest do not transfer data to permanent storage properly (either through # fsync operations or file system barriers). Because the host page cache is # enabled in this mode, the read performance for applications running in the # guest is generally better. However, the write performance might be reduced # because the disk write cache is disabled. # * writeback: With caching set to writeback mode, both the host page cache # and the disk write cache are enabled for the guest. Because of this, the # I/O performance for applications running in the guest is good, but the data # is not protected in a power failure. As a result, this caching mode is # recommended only for temporary data where potential data loss is not a # concern. # * directsync: Like "writethrough", but it bypasses the host page cache. # * unsafe: Caching mode of unsafe ignores cache transfer operations # completely. As its name implies, this caching mode should be used only for # temporary data where data loss is not a concern. This mode can be useful for # speeding up guest installations, but you should switch to another caching # mode in production environments. # (list value) #disk_cachemodes = # A path to a device that will be used as source of entropy on the host. # Permitted options are: /dev/random or /dev/hwrng (string value) #rng_dev_path = # For qemu or KVM guests, set this option to specify a default machine type per # host architecture. You can find a list of supported machine types in your # environment by checking the output of the "virsh capabilities"command. The # format of the value for this config option is host-arch=machine-type. For # example: x86_64=machinetype1,armv7l=machinetype2 (list value) #hw_machine_type = # The data source used to the populate the host "serial" UUID exposed to guest # in the virtual BIOS. (string value) # Possible values: # none - # os - # hardware - # auto - #sysinfo_serial = auto # A number of seconds to memory usage statistics period. Zero or negative value # mean to disable memory usage statistics. (integer value) #mem_stats_period_seconds = 10 # List of uid targets and ranges.Syntax is guest-uid:host-uid:countMaximum of 5 # allowed. (list value) #uid_maps = # List of guid targets and ranges.Syntax is guest-gid:host-gid:countMaximum of 5 # allowed. (list value) #gid_maps = # In a realtime host context vCPUs for guest will run in that scheduling # priority. Priority depends on the host kernel (usually 1-99) (integer value) #realtime_scheduler_priority = 1 # # This is a performance event list which could be used as monitor. These events # will be passed to libvirt domain xml while creating a new instances. # Then event statistics data can be collected from libvirt. The minimum # libvirt version is 2.0.0. For more information about `Performance monitoring # events`, refer https://libvirt.org/formatdomain.html#elementsPerf . # # Possible values: # * A string list. For example: ``enabled_perf_events = cmt, mbml, mbmt`` # The supported events list can be found in # https://libvirt.org/html/libvirt-libvirt-domain.html , # which you may need to search key words ``VIR_PERF_PARAM_*`` # (list value) #enabled_perf_events = # # VM Images format. # # If default is specified, then use_cow_images flag is used instead of this # one. 
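#
# Example (illustrative; the machine type is an assumption and must exist in
# "virsh capabilities" on the host): exposing an entropy source and pinning a
# default machine type per architecture:
#
#   [libvirt]
#   rng_dev_path = /dev/random
#   hw_machine_type = x86_64=pc-i440fx-2.11
#   sysinfo_serial = auto
#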
# # Related options: # # * virt.use_cow_images # * images_volume_group # * [workarounds]/ensure_libvirt_rbd_instance_dir_cleanup # (string value) # Possible values: # raw - # flat - # qcow2 - # lvm - # rbd - # ploop - # default - #images_type = default # # LVM Volume Group that is used for VM images, when you specify images_type=lvm # # Related options: # # * images_type # (string value) #images_volume_group = # # Create sparse logical volumes (with virtualsize) if this flag is set to True. # (boolean value) #sparse_logical_volumes = false # The RADOS pool in which rbd volumes are stored (string value) #images_rbd_pool = rbd # Path to the ceph configuration file to use (string value) #images_rbd_ceph_conf = # # Discard option for nova managed disks. # # Requires: # # * Libvirt >= 1.0.6 # * Qemu >= 1.5 (raw format) # * Qemu >= 1.6 (qcow2 format) # (string value) # Possible values: # ignore - # unmap - #hw_disk_discard = # DEPRECATED: Allows image information files to be stored in non-standard # locations (string value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: Image info files are no longer used by the image cache #image_info_filename_pattern = $instances_path/$image_cache_subdirectory_name/%(image)s.info # Unused resized base images younger than this will not be removed (integer # value) #remove_unused_resized_minimum_age_seconds = 3600 # DEPRECATED: Write a checksum for files in _base to disk (boolean value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: The image cache no longer periodically calculates checksums of stored # images. Data integrity can be checked at the block or filesystem level. #checksum_base_images = false # DEPRECATED: How frequently to checksum base images (integer value) # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. # Reason: The image cache no longer periodically calculates checksums of stored # images. Data integrity can be checked at the block or filesystem level. #checksum_interval_seconds = 3600 # # Method used to wipe ephemeral disks when they are deleted. Only takes effect # if LVM is set as backing storage. # # Possible values: # # * none - do not wipe deleted volumes # * zero - overwrite volumes with zeroes # * shred - overwrite volume repeatedly # # Related options: # # * images_type - must be set to ``lvm`` # * volume_clear_size # (string value) # Possible values: # none - # zero - # shred - #volume_clear = zero # # Size of area in MiB, counting from the beginning of the allocated volume, # that will be cleared using method set in ``volume_clear`` option. # # Possible values: # # * 0 - clear whole volume # * >0 - clear specified amount of MiB # # Related options: # # * images_type - must be set to ``lvm`` # * volume_clear - must be set and the value must be different than ``none`` # for this option to have any impact # (integer value) # Minimum value: 0 #volume_clear_size = 0 # # Enable snapshot compression for ``qcow2`` images. # # Note: you can set ``snapshot_image_format`` to ``qcow2`` to force all # snapshots to be in ``qcow2`` format, independently from their original image # type. 
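#
# Example (illustrative; the pool name and config path are assumptions): backing
# ephemeral disks with Ceph RBD, the usual pairing when Glance and Cinder also
# live on Ceph:
#
#   [libvirt]
#   images_type = rbd
#   images_rbd_pool = vms
#   images_rbd_ceph_conf = /etc/ceph/ceph.conf
#   hw_disk_discard = unmap
#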
# # Related options: # # * snapshot_image_format # (boolean value) #snapshot_compression = false # Use virtio for bridge interfaces with KVM/QEMU (boolean value) #use_virtio_for_bridges = true # # Use multipath connection of the iSCSI or FC volume # # Volumes can be connected in the LibVirt as multipath devices. This will # provide high availability and fault tolerance. # (boolean value) # Deprecated group/name - [libvirt]/iscsi_use_multipath #volume_use_multipath = false # # Number of times to scan given storage protocol to find volume. # (integer value) # Deprecated group/name - [libvirt]/num_iscsi_scan_tries #num_volume_scan_tries = 5 # # Number of times to rediscover AoE target to find volume. # # Nova provides support for block storage attaching to hosts via AOE (ATA over # Ethernet). This option allows the user to specify the maximum number of retry # attempts that can be made to discover the AoE device. # (integer value) #num_aoe_discover_tries = 3 # # The iSCSI transport iface to use to connect to target in case offload support # is desired. # # Default format is of the form . where # is one of (be2iscsi, bnx2i, cxgb3i, cxgb4i, qla4xxx, ocs) and # is the MAC address of the interface and can be generated via the # iscsiadm -m iface command. Do not confuse the iscsi_iface parameter to be # provided here with the actual transport name. # (string value) # Deprecated group/name - [libvirt]/iscsi_transport #iscsi_iface = # # Number of times to scan iSER target to find volume. # # iSER is a server network protocol that extends iSCSI protocol to use Remote # Direct Memory Access (RDMA). This option allows the user to specify the # maximum # number of scan attempts that can be made to find iSER volume. # (integer value) #num_iser_scan_tries = 5 # # Use multipath connection of the iSER volume. # # iSER volumes can be connected as multipath devices. This will provide high # availability and fault tolerance. # (boolean value) #iser_use_multipath = false # # The RADOS client name for accessing rbd(RADOS Block Devices) volumes. # # Libvirt will refer to this user when connecting and authenticating with # the Ceph RBD server. # (string value) #rbd_user = # # The libvirt UUID of the secret for the rbd_user volumes. # (string value) #rbd_secret_uuid = # # Directory where the NFS volume is mounted on the compute node. # The default is 'mnt' directory of the location where nova's Python module # is installed. # # NFS provides shared storage for the OpenStack Block Storage service. # # Possible values: # # * A string representing absolute path of mount point. # (string value) #nfs_mount_point_base = $state_path/mnt # # Mount options passed to the NFS client. See section of the nfs man page # for details. # # Mount options controls the way the filesystem is mounted and how the # NFS client behaves when accessing files on this mount point. # # Possible values: # # * Any string representing mount options separated by commas. # * Example string: vers=3,lookupcache=pos # (string value) #nfs_mount_options = # # Directory where the Quobyte volume is mounted on the compute node. # # Nova supports Quobyte volume driver that enables storing Block Storage # service volumes on a Quobyte storage back end. This Option specifies the # path of the directory where Quobyte volume is mounted. # # Possible values: # # * A string representing absolute path of mount point. # (string value) #quobyte_mount_point_base = $state_path/mnt # Path to a Quobyte Client configuration file. 
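#
# Example (illustrative; the client name and secret UUID are placeholders):
# credentials libvirt would use when attaching Ceph RBD volumes, plus NFS mount
# options in the format shown above:
#
#   [libvirt]
#   rbd_user = cinder
#   rbd_secret_uuid = 457eb676-33da-42ec-9a8c-9293d545c337
#   nfs_mount_options = vers=3,lookupcache=pos
#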
(string value) #quobyte_client_cfg = # # Directory where the SMBFS shares are mounted on the compute node. # (string value) #smbfs_mount_point_base = $state_path/mnt # # Mount options passed to the SMBFS client. # # Provide SMBFS options as a single string containing all parameters. # See mount.cifs man page for details. Note that the libvirt-qemu ``uid`` # and ``gid`` must be specified. # (string value) #smbfs_mount_options = # # libvirt's transport method for remote file operations. # # Because libvirt cannot use RPC to copy files over network to/from other # compute nodes, other method must be used for: # # * creating directory on remote host # * creating file on remote host # * removing file from remote host # * copying file to remote host # (string value) # Possible values: # ssh - # rsync - #remote_filesystem_transport = ssh # # Directory where the Virtuozzo Storage clusters are mounted on the compute # node. # # This option defines non-standard mountpoint for Vzstorage cluster. # # Related options: # # * vzstorage_mount_* group of parameters # (string value) #vzstorage_mount_point_base = $state_path/mnt # # Mount owner user name. # # This option defines the owner user of Vzstorage cluster mountpoint. # # Related options: # # * vzstorage_mount_* group of parameters # (string value) #vzstorage_mount_user = stack # # Mount owner group name. # # This option defines the owner group of Vzstorage cluster mountpoint. # # Related options: # # * vzstorage_mount_* group of parameters # (string value) #vzstorage_mount_group = qemu # # Mount access mode. # # This option defines the access bits of Vzstorage cluster mountpoint, # in the format similar to one of chmod(1) utility, like this: 0770. # It consists of one to four digits ranging from 0 to 7, with missing # lead digits assumed to be 0's. # # Related options: # # * vzstorage_mount_* group of parameters # (string value) #vzstorage_mount_perms = 0770 # # Path to vzstorage client log. # # This option defines the log of cluster operations, # it should include "%(cluster_name)s" template to separate # logs from multiple shares. # # Related options: # # * vzstorage_mount_opts may include more detailed logging options. # (string value) #vzstorage_log_path = /var/log/vstorage/%(cluster_name)s/nova.log.gz # # Path to the SSD cache file. # # You can attach an SSD drive to a client and configure the drive to store # a local cache of frequently accessed data. By having a local cache on a # client's SSD drive, you can increase the overall cluster performance by # up to 10 and more times. # WARNING! There is a lot of SSD models which are not server grade and # may loose arbitrary set of data changes on power loss. # Such SSDs should not be used in Vstorage and are dangerous as may lead # to data corruptions and inconsistencies. Please consult with the manual # on which SSD models are known to be safe or verify it using # vstorage-hwflush-check(1) utility. # # This option defines the path which should include "%(cluster_name)s" # template to separate caches from multiple shares. # # Related options: # # * vzstorage_mount_opts may include more detailed cache options. # (string value) #vzstorage_cache_path = # # Extra mount options for pstorage-mount # # For full description of them, see # https://static.openvz.org/vz-man/man1/pstorage-mount.1.gz.html # Format is a python string representation of arguments list, like: # "['-v', '-R', '500']" # Shouldn't include -c, -l, -C, -u, -g and -m as those have # explicit vzstorage_* options. 
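#
# Example (illustrative; the uid/gid values are assumptions and must match the
# local libvirt-qemu user and group): mounting SMBFS-backed volumes and using
# rsync instead of ssh for remote file operations:
#
#   [libvirt]
#   smbfs_mount_point_base = /var/lib/nova/mnt
#   smbfs_mount_options = uid=libvirt-qemu,gid=kvm
#   remote_filesystem_transport = rsync
#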
# # Related options: # # * All other vzstorage_* options # (list value) #vzstorage_mount_opts = [matchmaker_redis] # # From oslo.messaging # # DEPRECATED: Host to locate redis. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #host = 127.0.0.1 # DEPRECATED: Use this port to connect to redis host. (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #port = 6379 # DEPRECATED: Password for Redis server (optional). (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #password = # DEPRECATED: List of Redis Sentinel hosts (fault tolerance mode), e.g., # [host:port, host1:port ... ] (list value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #sentinel_hosts = # Redis replica set name. (string value) #sentinel_group_name = oslo-messaging-zeromq # Time in ms to wait between connection attempts. (integer value) #wait_timeout = 2000 # Time in ms to wait before the transaction is killed. (integer value) #check_timeout = 20000 # Timeout in ms on blocking socket operations. (integer value) #socket_timeout = 10000 [metrics] # # Configuration options for metrics # # Options under this group allow to adjust how values assigned to metrics are # calculated. # # From nova.conf # # # When using metrics to weight the suitability of a host, you can use this # option # to change how the calculated weight influences the weight assigned to a host # as # follows: # # * >1.0: increases the effect of the metric on overall weight # * 1.0: no change to the calculated weight # * >0.0,<1.0: reduces the effect of the metric on overall weight # * 0.0: the metric value is ignored, and the value of the # 'weight_of_unavailable' option is returned instead # * >-1.0,<0.0: the effect is reduced and reversed # * -1.0: the effect is reversed # * <-1.0: the effect is increased proportionally and reversed # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * An integer or float value, where the value corresponds to the multipler # ratio for this weigher. # # Related options: # # * weight_of_unavailable # (floating point value) #weight_multiplier = 1.0 # # This setting specifies the metrics to be weighed and the relative ratios for # each metric. This should be a single string value, consisting of a series of # one or more 'name=ratio' pairs, separated by commas, where 'name' is the name # of the metric to be weighed, and 'ratio' is the relative weight for that # metric. # # Note that if the ratio is set to 0, the metric value is ignored, and instead # the weight will be set to the value of the 'weight_of_unavailable' option. # # As an example, let's consider the case where this option is set to: # # ``name1=1.0, name2=-1.3`` # # The final weight will be: # # ``(name1.value * 1.0) + (name2.value * -1.3)`` # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. 
# # Possible values: # # * A list of zero or more key/value pairs separated by commas, where the key is # a string representing the name of a metric and the value is a numeric weight # for that metric. If any value is set to 0, the value is ignored and the # weight will be set to the value of the 'weight_of_unavailable' option. # # Related options: # # * weight_of_unavailable # (list value) #weight_setting = # # This setting determines how any unavailable metrics are treated. If this # option # is set to True, any hosts for which a metric is unavailable will raise an # exception, so it is recommended to also use the MetricFilter to filter out # those hosts before weighing. # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * True or False, where False ensures any metric being unavailable for a host # will set the host weight to 'weight_of_unavailable'. # # Related options: # # * weight_of_unavailable # (boolean value) #required = true # # When any of the following conditions are met, this value will be used in place # of any actual metric value: # # * One of the metrics named in 'weight_setting' is not available for a host, # and the value of 'required' is False # * The ratio specified for a metric in 'weight_setting' is 0 # * The 'weight_multiplier' option is set to 0 # # This option is only used by the FilterScheduler and its subclasses; if you use # a different scheduler, this option has no effect. # # Possible values: # # * An integer or float value, where the value corresponds to the multipler # ratio for this weigher. # # Related options: # # * weight_setting # * required # * weight_multiplier # (floating point value) #weight_of_unavailable = -10000.0 [mks] # # Nova compute node uses WebMKS, a desktop sharing protocol to provide # instance console access to VM's created by VMware hypervisors. # # Related options: # Following options must be set to provide console access. # * mksproxy_base_url # * enabled # # From nova.conf # # # Location of MKS web console proxy # # The URL in the response points to a WebMKS proxy which # starts proxying between client and corresponding vCenter # server where instance runs. In order to use the web based # console access, WebMKS proxy should be installed and configured # # Possible values: # # * Must be a valid URL of the form:``http://host:port/`` or # ``https://host:port/`` # (uri value) #mksproxy_base_url = http://127.0.0.1:6090/ # # Enables graphical console access for virtual machines. # (boolean value) #enabled = false [neutron] # # Configuration options for neutron (network connectivity as a service). # # From nova.conf # # DEPRECATED: # This option specifies the URL for connecting to Neutron. # # Possible values: # # * Any valid URL that points to the Neutron API service is appropriate here. # This typically matches the URL returned for the 'network' service type # from the Keystone service catalog. # (uri value) # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. # Reason: Endpoint lookup uses the service catalog via common keystoneauth1 # Adapter configuration options. In the current release, "url" will override # this behavior, but will be ignored and/or removed in a future release. To # achieve the same result, use the endpoint_override option instead. #url = http://127.0.0.1:9696 # # Default name for the Open vSwitch integration bridge. 
# # Specifies the name of an integration bridge interface used by OpenvSwitch. # This option is only used if Neutron does not specify the OVS bridge name in # port binding responses. # (string value) #ovs_bridge = br-int # # Default name for the floating IP pool. # # Specifies the name of floating IP pool used for allocating floating IPs. This # option is only used if Neutron does not specify the floating IP pool name in # port binding reponses. # (string value) #default_floating_pool = nova # # Integer value representing the number of seconds to wait before querying # Neutron for extensions. After this number of seconds the next time Nova # needs to create a resource in Neutron it will requery Neutron for the # extensions that it has loaded. Setting value to 0 will refresh the # extensions with no wait. # (integer value) # Minimum value: 0 #extension_sync_interval = 600 # # When set to True, this option indicates that Neutron will be used to proxy # metadata requests and resolve instance ids. Otherwise, the instance ID must be # passed to the metadata request in the 'X-Instance-ID' header. # # Related options: # # * metadata_proxy_shared_secret # (boolean value) #service_metadata_proxy = false # # This option holds the shared secret string used to validate proxy requests to # Neutron metadata requests. In order to be used, the # 'X-Metadata-Provider-Signature' header must be supplied in the request. # # Related options: # # * service_metadata_proxy # (string value) #metadata_proxy_shared_secret = # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. (boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [neutron]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string # value) #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. # (string value) #default_domain_name = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [neutron]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # Tenant ID (string value) #tenant_id = # Tenant Name (string value) #tenant_name = # The default service_type for endpoint URL discovery. (string value) #service_type = network # The default service_name for endpoint URL discovery. 
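#
# Example (illustrative; the URL, names and password are placeholders): a
# typical Keystone credential block for the [neutron] section using the
# generic options listed above:
#
#   [neutron]
#   auth_type = password
#   auth_url = http://controller:5000/v3
#   project_name = service
#   project_domain_name = Default
#   username = neutron
#   user_domain_name = Default
#   password = NEUTRON_PASS
#   region_name = RegionOne
#   valid_interfaces = internal
#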
(string value) #service_name = # List of interfaces, in order of preference, for endpoint URL. (list value) #valid_interfaces = internal,public # The default region_name for endpoint URL discovery. (string value) #region_name = # Always use this endpoint URL for requests for this client. NOTE: The # unversioned endpoint should be specified here; to request a particular API # version, use the `version`, `min-version`, and/or `max-version` options. # (string value) #endpoint_override = [notifications] # # Most of the actions in Nova which manipulate the system state generate # notifications which are posted to the messaging component (e.g. RabbitMQ) and # can be consumed by any service outside the OpenStack. More technical details # at https://docs.openstack.org/nova/latest/reference/notifications.html # # From nova.conf # # # If set, send compute.instance.update notifications on # instance state changes. # # Please refer to # https://docs.openstack.org/nova/latest/reference/notifications.html for # additional information on notifications. # # Possible values: # # * None - no notifications # * "vm_state" - notifications are sent with VM state transition information in # the ``old_state`` and ``state`` fields. The ``old_task_state`` and # ``new_task_state`` fields will be set to the current task_state of the # instance. # * "vm_and_task_state" - notifications are sent with VM and task state # transition information. # (string value) # Possible values: # - # vm_state - # vm_and_task_state - #notify_on_state_change = # Default notification level for outgoing notifications. (string value) # Possible values: # DEBUG - # INFO - # WARN - # ERROR - # CRITICAL - # Deprecated group/name - [DEFAULT]/default_notification_level #default_level = INFO # DEPRECATED: # Default publisher_id for outgoing notifications. If you consider routing # notifications using different publisher, change this value accordingly. # # Possible values: # # * Defaults to the current hostname of this host, but it can be any valid # oslo.messaging publisher_id # # Related options: # # * host - Hostname, FQDN or IP address of this host. # (string value) # This option is deprecated for removal since 17.0.0. # Its value may be silently ignored in the future. # Reason: # This option is only used when ``monkey_patch=True`` and # ``monkey_patch_modules`` is configured to specify the legacy notify_decorator. # Since the monkey_patch and monkey_patch_modules options are deprecated, this # option is also deprecated. #default_publisher_id = $host # # Specifies which notification format shall be used by nova. # # The default value is fine for most deployments and rarely needs to be changed. # This value can be set to 'versioned' once the infrastructure moves closer to # consuming the newer format of notifications. After this occurs, this option # will be removed. # # Note that notifications can be completely disabled by setting ``driver=noop`` # in the ``[oslo_messaging_notifications]`` group. # # Possible values: # * unversioned: Only the legacy unversioned notifications are emitted. # * versioned: Only the new versioned notifications are emitted. # * both: Both the legacy unversioned and the new versioned notifications are # emitted. (Default) # # The list of versioned notifications is visible in # https://docs.openstack.org/nova/latest/reference/notifications.html # (string value) # Possible values: # unversioned - # versioned - # both - #notification_format = both # # Specifies the topics for the versioned notifications issued by nova. 
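#
# Example (illustrative): emitting only the newer versioned notifications and
# including VM and task state transitions, which is what telemetry-style
# consumers usually want:
#
#   [notifications]
#   notify_on_state_change = vm_and_task_state
#   notification_format = versioned
#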
# # The default value is fine for most deployments and rarely needs to be changed. # However, if you have a third-party service that consumes versioned # notifications, it might be worth getting a topic for that service. # Nova will send a message containing a versioned notification payload to each # topic queue in this list. # # The list of versioned notifications is visible in # https://docs.openstack.org/nova/latest/reference/notifications.html # (list value) #versioned_notifications_topics = versioned_notifications # # If enabled, include block device information in the versioned notification # payload. Sending block device information is disabled by default as providing # that information can incur some overhead on the system since the information # may need to be loaded from the database. # (boolean value) #bdms_in_notifications = false [osapi_v21] # # From nova.conf # # DEPRECATED: # This option is a string representing a regular expression (regex) that matches # the project_id as contained in URLs. If not set, it will match normal UUIDs # created by keystone. # # Possible values: # # * A string representing any legal regular expression # (string value) # This option is deprecated for removal since 13.0.0. # Its value may be silently ignored in the future. # Reason: # Recent versions of nova constrain project IDs to hexadecimal characters and # dashes. If your installation uses IDs outside of this range, you should use # this option to provide your own regex and give you time to migrate offending # projects to valid IDs before the next release. #project_id_regex = [oslo_concurrency] #pavlos lock_path = /var/lib/nova/tmp # # From oslo.concurrency # # Enables or disables inter-process locks. (boolean value) #disable_process_locking = false # Directory to use for lock files. For security, the specified directory should # only be writable by the user running the processes that need locking. Defaults # to environment variable OSLO_LOCK_PATH. If OSLO_LOCK_PATH is not set in the # environment, use the Python tempfile.gettempdir function to find a suitable # location. If external locks are used, a lock path must be set. (string value) #lock_path = /tmp [oslo_messaging_amqp] # # From oslo.messaging # # Name for the AMQP container. must be globally unique. Defaults to a generated # UUID (string value) #container_name = # Timeout for inactive connections (in seconds) (integer value) #idle_timeout = 0 # Debug: dump AMQP frames to stdout (boolean value) #trace = false # Attempt to connect via SSL. If no other ssl-related parameters are given, it # will use the system's CA-bundle to verify the server's certificate. (boolean # value) #ssl = false # CA certificate PEM file used to verify the server's certificate (string value) #ssl_ca_file = # Self-identifying certificate PEM file for client authentication (string value) #ssl_cert_file = # Private key PEM file used to sign ssl_cert_file certificate (optional) (string # value) #ssl_key_file = # Password for decrypting ssl_key_file (if encrypted) (string value) #ssl_key_password = # By default SSL checks that the name in the server's certificate matches the # hostname in the transport_url. In some configurations it may be preferable to # use the virtual hostname instead, for example if the server uses the Server # Name Indication TLS extension (rfc6066) to provide a certificate per virtual # host. Set ssl_verify_vhost to True if the server's SSL certificate uses the # virtual host name instead of the DNS name. 
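#
# Example (illustrative; the path matches the commented-out lock_path value
# earlier in this file): external locks need a directory writable by the nova
# services:
#
#   [oslo_concurrency]
#   lock_path = /var/lib/nova/tmp
#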
(boolean value) #ssl_verify_vhost = false # DEPRECATED: Accept clients using either SSL or plain TCP (boolean value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Not applicable - not a SSL server #allow_insecure_clients = false # Space separated list of acceptable SASL mechanisms (string value) #sasl_mechanisms = # Path to directory that contains the SASL configuration (string value) #sasl_config_dir = # Name of configuration file (without .conf suffix) (string value) #sasl_config_name = # SASL realm to use if no realm present in username (string value) #sasl_default_realm = # DEPRECATED: User name for message broker authentication (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Should use configuration option transport_url to provide the username. #username = # DEPRECATED: Password for message broker authentication (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Should use configuration option transport_url to provide the password. #password = # Seconds to pause before attempting to re-connect. (integer value) # Minimum value: 1 #connection_retry_interval = 1 # Increase the connection_retry_interval by this many seconds after each # unsuccessful failover attempt. (integer value) # Minimum value: 0 #connection_retry_backoff = 2 # Maximum limit for connection_retry_interval + connection_retry_backoff # (integer value) # Minimum value: 1 #connection_retry_interval_max = 30 # Time to pause between re-connecting an AMQP 1.0 link that failed due to a # recoverable error. (integer value) # Minimum value: 1 #link_retry_delay = 10 # The maximum number of attempts to re-send a reply message which failed due to # a recoverable error. (integer value) # Minimum value: -1 #default_reply_retry = 0 # The deadline for an rpc reply message delivery. (integer value) # Minimum value: 5 #default_reply_timeout = 30 # The deadline for an rpc cast or call message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_send_timeout = 30 # The deadline for a sent notification message delivery. Only used when caller # does not provide a timeout expiry. (integer value) # Minimum value: 5 #default_notify_timeout = 30 # The duration to schedule a purge of idle sender links. Detach link after # expiry. (integer value) # Minimum value: 1 #default_sender_link_timeout = 600 # Indicates the addressing mode used by the driver. # Permitted values: # 'legacy' - use legacy non-routable addressing # 'routable' - use routable addresses # 'dynamic' - use legacy addresses if the message bus does not support routing # otherwise use routable addressing (string value) #addressing_mode = dynamic # Enable virtual host support for those message buses that do not natively # support virtual hosting (such as qpidd). When set to true the virtual host # name will be added to all message bus addresses, effectively creating a # private 'subnet' per virtual host. Set to False if the message bus supports # virtual hosting using the 'hostname' field in the AMQP 1.0 Open performative # as the name of the virtual host. 
(boolean value) #pseudo_vhost = true # address prefix used when sending to a specific server (string value) #server_request_prefix = exclusive # address prefix used when broadcasting to all servers (string value) #broadcast_prefix = broadcast # address prefix when sending to any server in group (string value) #group_request_prefix = unicast # Address prefix for all generated RPC addresses (string value) #rpc_address_prefix = openstack.org/om/rpc # Address prefix for all generated Notification addresses (string value) #notify_address_prefix = openstack.org/om/notify # Appended to the address prefix when sending a fanout message. Used by the # message bus to identify fanout messages. (string value) #multicast_address = multicast # Appended to the address prefix when sending to a particular RPC/Notification # server. Used by the message bus to identify messages sent to a single # destination. (string value) #unicast_address = unicast # Appended to the address prefix when sending to a group of consumers. Used by # the message bus to identify messages that should be delivered in a round-robin # fashion across consumers. (string value) #anycast_address = anycast # Exchange name used in notification addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_notification_exchange if set # else control_exchange if set # else 'notify' (string value) #default_notification_exchange = # Exchange name used in RPC addresses. # Exchange name resolution precedence: # Target.exchange if set # else default_rpc_exchange if set # else control_exchange if set # else 'rpc' (string value) #default_rpc_exchange = # Window size for incoming RPC Reply messages. (integer value) # Minimum value: 1 #reply_link_credit = 200 # Window size for incoming RPC Request messages (integer value) # Minimum value: 1 #rpc_server_credit = 100 # Window size for incoming Notification messages (integer value) # Minimum value: 1 #notify_server_credit = 100 # Send messages of this type pre-settled. # Pre-settled messages will not receive acknowledgement # from the peer. Note well: pre-settled messages may be # silently discarded if the delivery fails. # Permitted values: # 'rpc-call' - send RPC Calls pre-settled # 'rpc-reply'- send RPC Replies pre-settled # 'rpc-cast' - Send RPC Casts pre-settled # 'notify' - Send Notifications pre-settled # (multi valued) #pre_settled = rpc-cast #pre_settled = rpc-reply [oslo_messaging_kafka] # # From oslo.messaging # # DEPRECATED: Default Kafka broker Host (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #kafka_default_host = localhost # DEPRECATED: Default Kafka broker Port (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #kafka_default_port = 9092 # Max fetch bytes of Kafka consumer (integer value) #kafka_max_fetch_bytes = 1048576 # Default timeout(s) for Kafka consumers (floating point value) #kafka_consumer_timeout = 1.0 # DEPRECATED: Pool Size for Kafka Consumers (integer value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #pool_size = 10 # DEPRECATED: The pool size limit for connections expiration policy (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. 
# Reason: Driver no longer uses connection pool. #conn_pool_min_size = 2 # DEPRECATED: The time-to-live in sec of idle connections in the pool (integer # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Driver no longer uses connection pool. #conn_pool_ttl = 1200 # Group id for Kafka consumer. Consumers in one group will coordinate message # consumption (string value) #consumer_group = oslo_messaging_consumer # Upper bound on the delay for KafkaProducer batching in seconds (floating point # value) #producer_batch_timeout = 0.0 # Size of batch for the producer async send (integer value) #producer_batch_size = 16384 [oslo_messaging_notifications] # # From oslo.messaging # # The Drivers(s) to handle sending notifications. Possible values are messaging, # messagingv2, routing, log, test, noop (multi valued) # Deprecated group/name - [DEFAULT]/notification_driver #driver = # A URL representing the messaging driver to use for notifications. If not set, # we fall back to the same configuration used for RPC. (string value) # Deprecated group/name - [DEFAULT]/notification_transport_url #transport_url = # AMQP topic used for OpenStack notifications. (list value) # Deprecated group/name - [rpc_notifier2]/topics # Deprecated group/name - [DEFAULT]/notification_topics #topics = notifications # The maximum number of attempts to re-send a notification message which failed # to be delivered due to a recoverable error. 0 - No retry, -1 - indefinite # (integer value) #retry = -1 [oslo_messaging_rabbit] # # From oslo.messaging # # Use durable queues in AMQP. (boolean value) # Deprecated group/name - [DEFAULT]/amqp_durable_queues # Deprecated group/name - [DEFAULT]/rabbit_durable_queues #amqp_durable_queues = false # Auto-delete queues in AMQP. (boolean value) #amqp_auto_delete = false # Enable SSL (boolean value) #ssl = # SSL version to use (valid only if SSL enabled). Valid values are TLSv1 and # SSLv23. SSLv2, SSLv3, TLSv1_1, and TLSv1_2 may be available on some # distributions. (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_version #ssl_version = # SSL key file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_keyfile #ssl_key_file = # SSL cert file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_certfile #ssl_cert_file = # SSL certification authority file (valid only if SSL enabled). (string value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_ssl_ca_certs #ssl_ca_file = # How long to wait before reconnecting in response to an AMQP consumer cancel # notification. (floating point value) #kombu_reconnect_delay = 1.0 # EXPERIMENTAL: Possible values are: gzip, bz2. If not set compression will not # be used. This option may not be available in future versions. (string value) #kombu_compression = # How long to wait a missing client before abandoning to send it its replies. # This value should not be longer than rpc_response_timeout. (integer value) # Deprecated group/name - [oslo_messaging_rabbit]/kombu_reconnect_timeout #kombu_missing_consumer_retry_timeout = 60 # Determines how the next RabbitMQ node is chosen in case the one we are # currently connected to becomes unavailable. Takes effect only if more than one # RabbitMQ node is provided in config. 
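#
# For the failover strategy above to matter, more than one RabbitMQ node has
# to be listed. With current oslo.messaging that is done in
# [DEFAULT]/transport_url rather than in the deprecated rabbit_hosts option;
# a sketch with placeholder credentials and host names:
#
#   transport_url = rabbit://openstack:RABBIT_PASS@controller1:5672,openstack:RABBIT_PASS@controller2:5672/
#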
(string value) # Possible values: # round-robin - # shuffle - #kombu_failover_strategy = round-robin # DEPRECATED: The RabbitMQ broker address where a single node is used. (string # value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_host = localhost # DEPRECATED: The RabbitMQ broker port where a single node is used. (port value) # Minimum value: 0 # Maximum value: 65535 # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_port = 5672 # DEPRECATED: RabbitMQ HA cluster host:port pairs. (list value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_hosts = $rabbit_host:$rabbit_port # DEPRECATED: The RabbitMQ userid. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_userid = guest # DEPRECATED: The RabbitMQ password. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_password = guest # The RabbitMQ login method. (string value) # Possible values: # PLAIN - # AMQPLAIN - # RABBIT-CR-DEMO - #rabbit_login_method = AMQPLAIN # DEPRECATED: The RabbitMQ virtual host. (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. # Reason: Replaced by [DEFAULT]/transport_url #rabbit_virtual_host = / # How frequently to retry connecting with RabbitMQ. (integer value) #rabbit_retry_interval = 1 # How long to backoff for between retries when connecting to RabbitMQ. (integer # value) #rabbit_retry_backoff = 2 # Maximum interval of RabbitMQ connection retries. Default is 30 seconds. # (integer value) #rabbit_interval_max = 30 # DEPRECATED: Maximum number of RabbitMQ connection retries. Default is 0 # (infinite retry count). (integer value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #rabbit_max_retries = 0 # Try to use HA queues in RabbitMQ (x-ha-policy: all). If you change this # option, you must wipe the RabbitMQ database. In RabbitMQ 3.0, queue mirroring # is no longer controlled by the x-ha-policy argument when declaring a queue. If # you just want to make sure that all queues (except those with auto-generated # names) are mirrored across all nodes, run: "rabbitmqctl set_policy HA # '^(?!amq\.).*' '{"ha-mode": "all"}' " (boolean value) #rabbit_ha_queues = false # Positive integer representing duration in seconds for queue TTL (x-expires). # Queues which are unused for the duration of the TTL are automatically deleted. # The parameter affects only reply and fanout queues. (integer value) # Minimum value: 1 #rabbit_transient_queues_ttl = 1800 # Specifies the number of messages to prefetch. Setting to zero allows unlimited # messages. (integer value) #rabbit_qos_prefetch_count = 0 # Number of seconds after which the Rabbit broker is considered down if # heartbeat's keep-alive fails (0 disable the heartbeat). EXPERIMENTAL (integer # value) #heartbeat_timeout_threshold = 60 # How often times during the heartbeat_timeout_threshold we check the heartbeat. 
# (integer value) #heartbeat_rate = 2 # Deprecated, use rpc_backend=kombu+memory or rpc_backend=fake (boolean value) #fake_rabbit = false # Maximum number of channels to allow (integer value) #channel_max = # The maximum byte size for an AMQP frame (integer value) #frame_max = # How often to send heartbeats for consumer's connections (integer value) #heartbeat_interval = 3 # Arguments passed to ssl.wrap_socket (dict value) #ssl_options = # Set socket timeout in seconds for connection's socket (floating point value) #socket_timeout = 0.25 # Set TCP_USER_TIMEOUT in seconds for connection's socket (floating point value) #tcp_user_timeout = 0.25 # Set delay for reconnection to some host which has connection error (floating # point value) #host_connection_reconnect_delay = 0.25 # Connection factory implementation (string value) # Possible values: # new - # single - # read_write - #connection_factory = single # Maximum number of connections to keep queued. (integer value) #pool_max_size = 30 # Maximum number of connections to create above `pool_max_size`. (integer value) #pool_max_overflow = 0 # Default number of seconds to wait for a connections to available (integer # value) #pool_timeout = 30 # Lifetime of a connection (since creation) in seconds or None for no recycling. # Expired connections are closed on acquire. (integer value) #pool_recycle = 600 # Threshold at which inactive (since release) connections are considered stale # in seconds or None for no staleness. Stale connections are closed on acquire. # (integer value) #pool_stale = 60 # Default serialization mechanism for serializing/deserializing # outgoing/incoming messages (string value) # Possible values: # json - # msgpack - #default_serializer_type = json # Persist notification messages. (boolean value) #notification_persistence = false # Exchange name for sending notifications (string value) #default_notification_exchange = ${control_exchange}_notification # Max number of not acknowledged message which RabbitMQ can send to notification # listener. (integer value) #notification_listener_prefetch_count = 100 # Reconnecting retry count in case of connectivity problem during sending # notification, -1 means infinite retry. (integer value) #default_notification_retry_attempts = -1 # Reconnecting retry delay in case of connectivity problem during sending # notification message (floating point value) #notification_retry_delay = 0.25 # Time to live for rpc queues without consumers in seconds. (integer value) #rpc_queue_expiration = 60 # Exchange name for sending RPC messages (string value) #default_rpc_exchange = ${control_exchange}_rpc # Exchange name for receiving RPC replies (string value) #rpc_reply_exchange = ${control_exchange}_rpc_reply # Max number of not acknowledged message which RabbitMQ can send to rpc # listener. (integer value) #rpc_listener_prefetch_count = 100 # Max number of not acknowledged message which RabbitMQ can send to rpc reply # listener. (integer value) #rpc_reply_listener_prefetch_count = 100 # Reconnecting retry count in case of connectivity problem during sending reply. # -1 means infinite retry during rpc_timeout (integer value) #rpc_reply_retry_attempts = -1 # Reconnecting retry delay in case of connectivity problem during sending reply. # (floating point value) #rpc_reply_retry_delay = 0.25 # Reconnecting retry count in case of connectivity problem during sending RPC # message, -1 means infinite retry. 
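#
# Worked example for the two heartbeat options above: with the defaults
# heartbeat_timeout_threshold = 60 and heartbeat_rate = 2, the keep-alive is
# checked roughly every 60 / 2 = 30 seconds, and the broker is considered
# down after about 60 seconds without a successful heartbeat.
#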
If actual retry attempts in not 0 the rpc # request could be processed more than one time (integer value) #default_rpc_retry_attempts = -1 # Reconnecting retry delay in case of connectivity problem during sending RPC # message (floating point value) #rpc_retry_delay = 0.25 [oslo_messaging_zmq] # # From oslo.messaging # # ZeroMQ bind address. Should be a wildcard (*), an ethernet interface, or IP. # The "host" option should point or resolve to this address. (string value) #rpc_zmq_bind_address = * # MatchMaker driver. (string value) # Possible values: # redis - # sentinel - # dummy - #rpc_zmq_matchmaker = redis # Number of ZeroMQ contexts, defaults to 1. (integer value) #rpc_zmq_contexts = 1 # Maximum number of ingress messages to locally buffer per topic. Default is # unlimited. (integer value) #rpc_zmq_topic_backlog = # Directory for holding IPC sockets. (string value) #rpc_zmq_ipc_dir = /var/run/openstack # Name of this node. Must be a valid hostname, FQDN, or IP address. Must match # "host" option, if running Nova. (string value) #rpc_zmq_host = localhost # Number of seconds to wait before all pending messages will be sent after # closing a socket. The default value of -1 specifies an infinite linger period. # The value of 0 specifies no linger period. Pending messages shall be discarded # immediately when the socket is closed. Positive values specify an upper bound # for the linger period. (integer value) # Deprecated group/name - [DEFAULT]/rpc_cast_timeout #zmq_linger = -1 # The default number of seconds that poll should wait. Poll raises timeout # exception when timeout expired. (integer value) #rpc_poll_timeout = 1 # Expiration timeout in seconds of a name service record about existing target ( # < 0 means no timeout). (integer value) #zmq_target_expire = 300 # Update period in seconds of a name service record about existing target. # (integer value) #zmq_target_update = 180 # Use PUB/SUB pattern for fanout methods. PUB/SUB always uses proxy. (boolean # value) #use_pub_sub = false # Use ROUTER remote proxy. (boolean value) #use_router_proxy = false # This option makes direct connections dynamic or static. It makes sense only # with use_router_proxy=False which means to use direct connections for direct # message types (ignored otherwise). (boolean value) #use_dynamic_connections = false # How many additional connections to a host will be made for failover reasons. # This option is actual only in dynamic connections mode. (integer value) #zmq_failover_connections = 2 # Minimal port number for random ports range. (port value) # Minimum value: 0 # Maximum value: 65535 #rpc_zmq_min_port = 49153 # Maximal port number for random ports range. (integer value) # Minimum value: 1 # Maximum value: 65536 #rpc_zmq_max_port = 65536 # Number of retries to find free port number before fail with ZMQBindError. # (integer value) #rpc_zmq_bind_port_retries = 100 # Default serialization mechanism for serializing/deserializing # outgoing/incoming messages (string value) # Possible values: # json - # msgpack - #rpc_zmq_serialization = json # This option configures round-robin mode in zmq socket. True means not keeping # a queue when server side disconnects. False means to keep queue and messages # even if server is disconnected, when the server appears we send all # accumulated messages to it. (boolean value) #zmq_immediate = true # Enable/disable TCP keepalive (KA) mechanism. 
The default value of -1 (or any # other negative value) means to skip any overrides and leave it to OS default; # 0 and 1 (or any other positive value) mean to disable and enable the option # respectively. (integer value) #zmq_tcp_keepalive = -1 # The duration between two keepalive transmissions in idle condition. The unit # is platform dependent, for example, seconds in Linux, milliseconds in Windows # etc. The default value of -1 (or any other negative value and 0) means to skip # any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_idle = -1 # The number of retransmissions to be carried out before declaring that remote # end is not available. The default value of -1 (or any other negative value and # 0) means to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_cnt = -1 # The duration between two successive keepalive retransmissions, if # acknowledgement to the previous keepalive transmission is not received. The # unit is platform dependent, for example, seconds in Linux, milliseconds in # Windows etc. The default value of -1 (or any other negative value and 0) means # to skip any overrides and leave it to OS default. (integer value) #zmq_tcp_keepalive_intvl = -1 # Maximum number of (green) threads to work concurrently. (integer value) #rpc_thread_pool_size = 100 # Expiration timeout in seconds of a sent/received message after which it is not # tracked anymore by a client/server. (integer value) #rpc_message_ttl = 300 # Wait for message acknowledgements from receivers. This mechanism works only # via proxy without PUB/SUB. (boolean value) #rpc_use_acks = false # Number of seconds to wait for an ack from a cast/call. After each retry # attempt this timeout is multiplied by some specified multiplier. (integer # value) #rpc_ack_timeout_base = 15 # Number to multiply base ack timeout by after each retry attempt. (integer # value) #rpc_ack_timeout_multiplier = 2 # Default number of message sending attempts in case of any problems occurred: # positive value N means at most N retries, 0 means no retries, None or -1 (or # any other negative values) mean to retry forever. This option is used only if # acknowledgments are enabled. (integer value) #rpc_retry_attempts = 3 # List of publisher hosts SubConsumer can subscribe on. This option has higher # priority then the default publishers list taken from the matchmaker. (list # value) #subscribe_on = [oslo_middleware] # # From oslo.middleware # # The maximum body size for each request, in bytes. (integer value) # Deprecated group/name - [DEFAULT]/osapi_max_request_body_size # Deprecated group/name - [DEFAULT]/max_request_body_size #max_request_body_size = 114688 # DEPRECATED: The HTTP Header that will be used to determine what the original # request protocol scheme was, even if it was hidden by a SSL termination proxy. # (string value) # This option is deprecated for removal. # Its value may be silently ignored in the future. #secure_proxy_ssl_header = X-Forwarded-Proto # Whether the application is behind a proxy or not. This determines if the # middleware should parse the headers or not. (boolean value) #enable_proxy_headers_parsing = false [oslo_policy] # # From oslo.policy # # This option controls whether or not to enforce scope when evaluating policies. # If ``True``, the scope of the token used in the request is compared to the # ``scope_types`` of the policy being enforced. If the scopes do not match, an # ``InvalidScope`` exception will be raised. 
If ``False``, a message will be # logged informing operators that policies are being invoked with mismatching # scope. (boolean value) #enforce_scope = false # The file that defines policies. (string value) #policy_file = policy.json # Default rule. Enforced when a requested rule is not found. (string value) #policy_default_rule = default # Directories where policy configuration files are stored. They can be relative # to any directory in the search path defined by the config_dir option, or # absolute paths. The file defined by policy_file must exist for these # directories to be searched. Missing or empty directories are ignored. (multi # valued) #policy_dirs = policy.d # Content Type to send and receive data for REST based policy check (string # value) # Possible values: # application/x-www-form-urlencoded - # application/json - #remote_content_type = application/x-www-form-urlencoded # server identity verification for REST based policy check (boolean value) #remote_ssl_verify_server_crt = false # Absolute path to ca cert file for REST based policy check (string value) #remote_ssl_ca_crt_file = # Absolute path to client cert for REST based policy check (string value) #remote_ssl_client_crt_file = # Absolute path client key file REST based policy check (string value) #remote_ssl_client_key_file = [pci] # # From nova.conf # # # An alias for a PCI passthrough device requirement. # # This allows users to specify the alias in the extra specs for a flavor, # without # needing to repeat all the PCI property requirements. # # Possible Values: # # * A list of JSON values which describe the aliases. For example:: # # alias = { # "name": "QuickAssist", # "product_id": "0443", # "vendor_id": "8086", # "device_type": "type-PCI", # "numa_policy": "required" # } # # This defines an alias for the Intel QuickAssist card. (multi valued). Valid # key values are : # # ``name`` # Name of the PCI alias. # # ``product_id`` # Product ID of the device in hexadecimal. # # ``vendor_id`` # Vendor ID of the device in hexadecimal. # # ``device_type`` # Type of PCI device. Valid values are: ``type-PCI``, ``type-PF`` and # ``type-VF``. # # ``numa_policy`` # Required NUMA affinity of device. Valid values are: ``legacy``, # ``preferred`` and ``required``. # (multi valued) # Deprecated group/name - [DEFAULT]/pci_alias #alias = # # White list of PCI devices available to VMs. # # Possible values: # # * A JSON dictionary which describe a whitelisted PCI device. It should take # the following format: # # ["vendor_id": "",] ["product_id": "",] # ["address": "[[[[]:]]:][][.[]]" | # "devname": "",] # {"": "",} # # Where '[' indicates zero or one occurrences, '{' indicates zero or multiple # occurrences, and '|' mutually exclusive options. Note that any missing # fields are automatically wildcarded. # # Valid key values are : # # * "vendor_id": Vendor ID of the device in hexadecimal. # * "product_id": Product ID of the device in hexadecimal. # * "address": PCI address of the device. # * "devname": Device name of the device (for e.g. interface name). Not all # PCI devices have a name. # * "": Additional and used for matching PCI devices. # Supported : "physical_network". # # The address key supports traditional glob style and regular expression # syntax. 
# Valid examples are:
#
#    passthrough_whitelist = {"devname":"eth0",
#                             "physical_network":"physnet"}
#    passthrough_whitelist = {"address":"*:0a:00.*"}
#    passthrough_whitelist = {"address":":0a:00.",
#                             "physical_network":"physnet1"}
#    passthrough_whitelist = {"vendor_id":"1137",
#                             "product_id":"0071"}
#    passthrough_whitelist = {"vendor_id":"1137",
#                             "product_id":"0071",
#                             "address": "0000:0a:00.1",
#                             "physical_network":"physnet1"}
#    passthrough_whitelist = {"address":{"domain": ".*",
#                                        "bus": "02", "slot": "01",
#                                        "function": "[2-7]"},
#                             "physical_network":"physnet1"}
#    passthrough_whitelist = {"address":{"domain": ".*",
#                                        "bus": "02", "slot": "0[1-2]",
#                                        "function": ".*"},
#                             "physical_network":"physnet1"}
#
# The following are invalid, as they specify mutually exclusive options:
#
#    passthrough_whitelist = {"devname":"eth0",
#                             "physical_network":"physnet",
#                             "address":"*:0a:00.*"}
#
# * A JSON list of JSON dictionaries corresponding to the above format. For
#   example:
#
#    passthrough_whitelist = [{"product_id":"0001", "vendor_id":"8086"},
#                             {"product_id":"0002", "vendor_id":"8086"}]
# (multi valued)
# Deprecated group/name - [DEFAULT]/pci_passthrough_whitelist
#passthrough_whitelist =


[placement]

# pavlos --start
#os_region_name = openstack <-- i removed this one
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller/identity/v3
#auth_url = http://controller:5000/v3
#auth_url = http://192.168.40.184/identity
username = placement
password = linux
# pavlos --end

#
# From nova.conf
#

# DEPRECATED:
# Region name of this node. This is used when picking the URL in the service
# catalog.
#
# Possible values:
#
# * Any string representing region name
# (string value)
# This option is deprecated for removal since 17.0.0.
# Its value may be silently ignored in the future.
# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
# Adapter configuration options. Use the region_name option instead.
#os_region_name =

# DEPRECATED:
# Endpoint interface for this node. This is used when picking the URL in the
# service catalog.
# (string value)
# This option is deprecated for removal since 17.0.0.
# Its value may be silently ignored in the future.
# Reason: Endpoint lookup uses the service catalog via common keystoneauth1
# Adapter configuration options. Use the valid_interfaces option instead.
#os_interface =

#
# If True, when limiting allocation candidate results, the results will be
# a random sampling of the full result set. If False, allocation candidates
# are returned in a deterministic but undefined order. That is, all things
# being equal, two requests for allocation candidates will return the same
# results in the same order; but no guarantees are made as to how that order
# is determined.
# (boolean value)
#randomize_allocation_candidates = false

# PEM encoded Certificate Authority to use when verifying HTTPs connections.
# (string value)
#cafile =

# PEM encoded client certificate cert file (string value)
#certfile =

# PEM encoded client certificate key file (string value)
#keyfile =

# Verify HTTPS connections.
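#
# A note on the two auth_url forms kept above in [placement]: they are just
# the two common layouts of the same Keystone v3 endpoint
# (http://controller:5000/v3 for a package-based install,
# http://controller/identity/v3 for a devstack-style proxy). Whichever one is
# active should match the endpoints actually registered; a quick, read-only
# check (assuming admin credentials are sourced) is:
#
#   openstack endpoint list --service placement
#   openstack endpoint list --service identity
#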
(boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [placement]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string # value) #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. # (string value) #default_domain_name = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [placement]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # Tenant ID (string value) #tenant_id = # Tenant Name (string value) #tenant_name = # The default service_type for endpoint URL discovery. (string value) #service_type = placement # The default service_name for endpoint URL discovery. (string value) #service_name = # List of interfaces, in order of preference, for endpoint URL. (list value) # Deprecated group/name - [placement]/os_interface #valid_interfaces = internal,public # The default region_name for endpoint URL discovery. (string value) # Deprecated group/name - [placement]/os_region_name #region_name = # Always use this endpoint URL for requests for this client. NOTE: The # unversioned endpoint should be specified here; to request a particular API # version, use the `version`, `min-version`, and/or `max-version` options. # (string value) #endpoint_override = [quota] # # Quota options allow to manage quotas in openstack deployment. # # From nova.conf # # # The number of instances allowed per project. # # Possible Values # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_instances #instances = 10 # # The number of instance cores or vCPUs allowed per project. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_cores #cores = 20 # # The number of megabytes of instance RAM allowed per project. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_ram #ram = 51200 # DEPRECATED: # The number of floating IPs allowed per project. # # Floating IPs are not allocated to instances by default. Users need to select # them from the pool configured by the OpenStack administrator to attach to # their # instances. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. 
# (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_floating_ips # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #floating_ips = 10 # DEPRECATED: # The number of fixed IPs allowed per project. # # Unlike floating IPs, fixed IPs are allocated dynamically by the network # component when instances boot up. This quota value should be at least the # number of instances allowed # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_fixed_ips # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #fixed_ips = -1 # # The number of metadata items allowed per instance. # # Users can associate metadata with an instance during instance creation. This # metadata takes the form of key-value pairs. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_metadata_items #metadata_items = 128 # # The number of injected files allowed. # # File injection allows users to customize the personality of an instance by # injecting data into it upon boot. Only text file injection is permitted: # binary # or ZIP files are not accepted. During file injection, any existing files that # match specified files are renamed to include ``.bak`` extension appended with # a # timestamp. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_injected_files #injected_files = 5 # # The number of bytes allowed per injected file. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_injected_file_content_bytes #injected_file_content_bytes = 10240 # # The maximum allowed injected file path length. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_injected_file_path_length #injected_file_path_length = 255 # DEPRECATED: # The number of security groups per project. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_security_groups # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #security_groups = 10 # DEPRECATED: # The number of security rules per security group. # # The associated rules in each security group control the traffic to instances # in # the group. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_security_group_rules # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # nova-network is deprecated, as are any related configuration options. #security_group_rules = 20 # # The maximum number of key pairs allowed per user. 
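#
# The values in this [quota] section are only the per-project defaults; an
# individual project is normally adjusted through the API rather than by
# editing nova.conf, e.g. (hypothetical project name and limits):
#
#   openstack quota set --instances 20 --cores 40 myproject
#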
# # Users can create at least one key pair for each project and use the key pair # for multiple instances that belong to that project. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_key_pairs #key_pairs = 100 # # The maxiumum number of server groups per project. # # Server groups are used to control the affinity and anti-affinity scheduling # policy for a group of servers or instances. Reducing the quota will not affect # any existing group, but new servers will not be allowed into groups that have # become over quota. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_server_groups #server_groups = 10 # # The maximum number of servers per server group. # # Possible values: # # * A positive integer or 0. # * -1 to disable the quota. # (integer value) # Minimum value: -1 # Deprecated group/name - [DEFAULT]/quota_server_group_members #server_group_members = 10 # # The number of seconds until a reservation expires. # # This quota represents the time period for invalidating quota reservations. # (integer value) #reservation_expire = 86400 # # The count of reservations until usage is refreshed. # # This defaults to 0 (off) to avoid additional load but it is useful to turn on # to help keep quota usage up-to-date and reduce the impact of out of sync usage # issues. # (integer value) # Minimum value: 0 #until_refresh = 0 # # The number of seconds between subsequent usage refreshes. # # This defaults to 0 (off) to avoid additional load but it is useful to turn on # to help keep quota usage up-to-date and reduce the impact of out of sync usage # issues. Note that quotas are not updated on a periodic task, they will update # on a new reservation if max_age has passed since the last reservation. # (integer value) # Minimum value: 0 #max_age = 0 # DEPRECATED: # The quota enforcer driver. # # Provides abstraction for quota checks. Users can configure a specific # driver to use for quota checks. # # Possible values: # # * nova.quota.DbQuotaDriver (default) or any string representing fully # qualified class name. # (string value) # Deprecated group/name - [DEFAULT]/quota_driver # This option is deprecated for removal since 14.0.0. # Its value may be silently ignored in the future. #driver = nova.quota.DbQuotaDriver # # Recheck quota after resource creation to prevent allowing quota to be # exceeded. # # This defaults to True (recheck quota after resource creation) but can be set # to # False to avoid additional load if allowing quota to be exceeded because of # racing requests is considered acceptable. For example, when set to False, if a # user makes highly parallel REST API requests to create servers, it will be # possible for them to create more servers than their allowed quota during the # race. If their quota is 10 servers, they might be able to create 50 during the # burst. After the burst, they will not be able to create any more servers but # they will be able to keep their 50 servers until they delete them. # # The initial quota check is done before resources are created, so if multiple # parallel requests arrive at the same time, all could pass the quota check and # create resources, potentially exceeding quota. 
When recheck_quota is True, # quota will be checked a second time after resources have been created and if # the resource is over quota, it will be deleted and OverQuota will be raised, # usually resulting in a 403 response to the REST API user. This makes it # impossible for a user to exceed their quota with the caveat that it will, # however, be possible for a REST API user to be rejected with a 403 response in # the event of a collision close to reaching their quota limit, even if the user # has enough quota available when they made the request. # (boolean value) #recheck_quota = true [rdp] # # Options under this group enable and configure Remote Desktop Protocol ( # RDP) related features. # # This group is only relevant to Hyper-V users. # # From nova.conf # # # Enable Remote Desktop Protocol (RDP) related features. # # Hyper-V, unlike the majority of the hypervisors employed on Nova compute # nodes, uses RDP instead of VNC and SPICE as a desktop sharing protocol to # provide instance console access. This option enables RDP for graphical # console access for virtual machines created by Hyper-V. # # **Note:** RDP should only be enabled on compute nodes that support the Hyper-V # virtualization platform. # # Related options: # # * ``compute_driver``: Must be hyperv. # # (boolean value) #enabled = false # # The URL an end user would use to connect to the RDP HTML5 console proxy. # The console proxy service is called with this token-embedded URL and # establishes the connection to the proper instance. # # An RDP HTML5 console proxy service will need to be configured to listen on the # address configured here. Typically the console proxy service would be run on a # controller node. The localhost address used as default would only work in a # single node environment i.e. devstack. # # An RDP HTML5 proxy allows a user to access via the web the text or graphical # console of any Windows server or workstation using RDP. RDP HTML5 console # proxy services include FreeRDP, wsgate. # See https://github.com/FreeRDP/FreeRDP-WebConnect # # Possible values: # # * ://:/ # # The scheme must be identical to the scheme configured for the RDP HTML5 # console proxy service. It is ``http`` or ``https``. # # The IP address must be identical to the address on which the RDP HTML5 # console proxy service is listening. # # The port must be identical to the port on which the RDP HTML5 console proxy # service is listening. # # Related options: # # * ``rdp.enabled``: Must be set to ``True`` for ``html5_proxy_base_url`` to be # effective. # (uri value) #html5_proxy_base_url = http://127.0.0.1:6083/ [remote_debug] # # From nova.conf # # # Debug host (IP or name) to connect to. This command line parameter is used # when # you want to connect to a nova service via a debugger running on a different # host. # # Note that using the remote debug option changes how Nova uses the eventlet # library to support async IO. This could result in failures that do not occur # under normal operation. Use at your own risk. # # Possible Values: # # * IP address of a remote host as a command line parameter # to a nova service. For Example: # # /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf # --remote_debug-host # (unknown value) #host = # # Debug port to connect to. This command line parameter allows you to specify # the port you want to use to connect to a nova service via a debugger running # on different host. # # Note that using the remote debug option changes how Nova uses the eventlet # library to support async IO. 
This could result in failures that do not occur # under normal operation. Use at your own risk. # # Possible Values: # # * Port number you want to use as a command line parameter # to a nova service. For Example: # # /usr/local/bin/nova-compute --config-file /etc/nova/nova.conf # --remote_debug-host # --remote_debug-port it's listening on>. # (port value) # Minimum value: 0 # Maximum value: 65535 #port = [scheduler] # # From nova.conf # # # The scheduler host manager to use. # # The host manager manages the in-memory picture of the hosts that the scheduler # uses. The options values are chosen from the entry points under the namespace # 'nova.scheduler.host_manager' in 'setup.cfg'. # # NOTE: The "ironic_host_manager" option is deprecated as of the 17.0.0 Queens # release. # (string value) # Possible values: # host_manager - # ironic_host_manager - # Deprecated group/name - [DEFAULT]/scheduler_host_manager #host_manager = host_manager # # The class of the driver used by the scheduler. This should be chosen from one # of the entrypoints under the namespace 'nova.scheduler.driver' of file # 'setup.cfg'. If nothing is specified in this option, the 'filter_scheduler' is # used. # # Other options are: # # * 'caching_scheduler' which aggressively caches the system state for better # individual scheduler performance at the risk of more retries when running # multiple schedulers. [DEPRECATED] # * 'chance_scheduler' which simply picks a host at random. [DEPRECATED] # * 'fake_scheduler' which is used for testing. # # Possible values: # # * Any of the drivers included in Nova: # ** filter_scheduler # ** caching_scheduler # ** chance_scheduler # ** fake_scheduler # * You may also set this to the entry point name of a custom scheduler driver, # but you will be responsible for creating and maintaining it in your # setup.cfg # file. # (string value) # Deprecated group/name - [DEFAULT]/scheduler_driver #driver = filter_scheduler # # Periodic task interval. # # This value controls how often (in seconds) to run periodic tasks in the # scheduler. The specific tasks that are run for each period are determined by # the particular scheduler being used. # # If this is larger than the nova-service 'service_down_time' setting, Nova may # report the scheduler service as down. This is because the scheduler driver is # responsible for sending a heartbeat and it will only do that as often as this # option allows. As each scheduler can work a little differently than the # others, # be sure to test this with your selected scheduler. # # Possible values: # # * An integer, where the integer corresponds to periodic task interval in # seconds. 0 uses the default interval (60 seconds). A negative value disables # periodic tasks. # # Related options: # # * ``nova-service service_down_time`` # (integer value) # Deprecated group/name - [DEFAULT]/scheduler_driver_task_period #periodic_task_interval = 60 # # This is the maximum number of attempts that will be made for a given instance # build/move operation. It limits the number of alternate hosts returned by the # scheduler. When that list of hosts is exhausted, a MaxRetriesExceeded # exception is raised and the instance is set to an error state. # # Possible values: # # * A positive integer, where the integer corresponds to the max number of # attempts that can be made when building or moving an instance. # (integer value) # Minimum value: 1 # Deprecated group/name - [DEFAULT]/scheduler_max_attempts #max_attempts = 3 # # Periodic task interval. 
# # This value controls how often (in seconds) the scheduler should attempt # to discover new hosts that have been added to cells. If negative (the # default), no automatic discovery will occur. # # Deployments where compute nodes come and go frequently may want this # enabled, where others may prefer to manually discover hosts when one # is added to avoid any overhead from constantly checking. If enabled, # every time this runs, we will select any unmapped hosts out of each # cell database on every run. # (integer value) # Minimum value: -1 #discover_hosts_in_cells_interval = -1 # # This setting determines the maximum limit on results received from the # placement service during a scheduling operation. It effectively limits # the number of hosts that may be considered for scheduling requests that # match a large number of candidates. # # A value of 1 (the minimum) will effectively defer scheduling to the placement # service strictly on "will it fit" grounds. A higher value will put an upper # cap on the number of results the scheduler will consider during the filtering # and weighing process. Large deployments may need to set this lower than the # total number of hosts available to limit memory consumption, network traffic, # etc. of the scheduler. # # This option is only used by the FilterScheduler; if you use a different # scheduler, this option has no effect. # (integer value) # Minimum value: 1 #max_placement_results = 1000 [serial_console] # # The serial console feature allows you to connect to a guest in case a # graphical console like VNC, RDP or SPICE is not available. This is only # currently supported for the libvirt, Ironic and hyper-v drivers. # # From nova.conf # # # Enable the serial console feature. # # In order to use this feature, the service ``nova-serialproxy`` needs to run. # This service is typically executed on the controller node. # (boolean value) #enabled = false # # A range of TCP ports a guest can use for its backend. # # Each instance which gets created will use one port out of this range. If the # range is not big enough to provide another port for an new instance, this # instance won't get launched. # # Possible values: # # * Each string which passes the regex ``\d+:\d+`` For example ``10000:20000``. # Be sure that the first port number is lower than the second port number # and that both are in range from 0 to 65535. # (string value) #port_range = 10000:20000 # # The URL an end user would use to connect to the ``nova-serialproxy`` service. # # The ``nova-serialproxy`` service is called with this token enriched URL # and establishes the connection to the proper instance. # # Related options: # # * The IP address must be identical to the address to which the # ``nova-serialproxy`` service is listening (see option ``serialproxy_host`` # in this section). # * The port must be the same as in the option ``serialproxy_port`` of this # section. # * If you choose to use a secured websocket connection, then start this option # with ``wss://`` instead of the unsecured ``ws://``. The options ``cert`` # and ``key`` in the ``[DEFAULT]`` section have to be set for that. # (uri value) #base_url = ws://127.0.0.1:6083/ # # The IP address to which proxy clients (like ``nova-serialproxy``) should # connect to get the serial console of an instance. # # This is typically the IP address of the host of a ``nova-compute`` service. # (string value) #proxyclient_address = 127.0.0.1 # # The IP address which is used by the ``nova-serialproxy`` service to listen # for incoming requests. 
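#
# If discover_hosts_in_cells_interval is left at -1 (the default), newly
# added compute nodes are not picked up automatically and have to be mapped
# by hand, e.g.:
#
#   nova-manage cell_v2 discover_hosts --verbose
#
# Setting the interval to a positive value (say 300) makes this happen
# periodically instead.
#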
# # The ``nova-serialproxy`` service listens on this IP address for incoming # connection requests to instances which expose serial console. # # Related options: # # * Ensure that this is the same IP address which is defined in the option # ``base_url`` of this section or use ``0.0.0.0`` to listen on all addresses. # (string value) #serialproxy_host = 0.0.0.0 # # The port number which is used by the ``nova-serialproxy`` service to listen # for incoming requests. # # The ``nova-serialproxy`` service listens on this port number for incoming # connection requests to instances which expose serial console. # # Related options: # # * Ensure that this is the same port number which is defined in the option # ``base_url`` of this section. # (port value) # Minimum value: 0 # Maximum value: 65535 #serialproxy_port = 6083 [service_user] # # Configuration options for service to service authentication using a service # token. These options allow sending a service token along with the user's token # when contacting external REST APIs. # # From nova.conf # # # When True, if sending a user token to a REST API, also send a service token. # # Nova often reuses the user token provided to the nova-api to talk to other # REST # APIs, such as Cinder, Glance and Neutron. It is possible that while the user # token was valid when the request was made to Nova, the token may expire before # it reaches the other service. To avoid any failures, and to make it clear it # is # Nova calling the service on the user's behalf, we include a service token # along # with the user token. Should the user's token have expired, a valid service # token ensures the REST API request will still be accepted by the keystone # middleware. # (boolean value) #send_service_user_token = false # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. (boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [service_user]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string # value) #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. 
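#
# A minimal sketch of what this [service_user] section looks like when the
# service token is enabled (all values below are placeholders, not taken
# from the posted file):
#
#   send_service_user_token = true
#   auth_type = password
#   auth_url = http://controller:5000/v3
#   project_domain_name = Default
#   user_domain_name = Default
#   project_name = service
#   username = nova
#   password = NOVA_PASS
#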
# (string value) #default_domain_name = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [service_user]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # Tenant ID (string value) #tenant_id = # Tenant Name (string value) #tenant_name = [spice] # # SPICE console feature allows you to connect to a guest virtual machine. # SPICE is a replacement for fairly limited VNC protocol. # # Following requirements must be met in order to use SPICE: # # * Virtualization driver must be libvirt # * spice.enabled set to True # * vnc.enabled set to False # * update html5proxy_base_url # * update server_proxyclient_address # # From nova.conf # # # Enable SPICE related features. # # Related options: # # * VNC must be explicitly disabled to get access to the SPICE console. Set the # enabled option to False in the [vnc] section to disable the VNC console. # (boolean value) #enabled = false # # Enable the SPICE guest agent support on the instances. # # The Spice agent works with the Spice protocol to offer a better guest console # experience. However, the Spice console can still be used without the Spice # Agent. With the Spice agent installed the following features are enabled: # # * Copy & Paste of text and images between the guest and client machine # * Automatic adjustment of resolution when the client screen changes - e.g. # if you make the Spice console full screen the guest resolution will adjust # to # match it rather than letterboxing. # * Better mouse integration - The mouse can be captured and released without # needing to click inside the console or press keys to release it. The # performance of mouse movement is also improved. # (boolean value) #agent_enabled = true # # Location of the SPICE HTML5 console proxy. # # End user would use this URL to connect to the `nova-spicehtml5proxy`` # service. This service will forward request to the console of an instance. # # In order to use SPICE console, the service ``nova-spicehtml5proxy`` should be # running. This service is typically launched on the controller node. # # Possible values: # # * Must be a valid URL of the form: ``http://host:port/spice_auto.html`` # where host is the node running ``nova-spicehtml5proxy`` and the port is # typically 6082. Consider not using default value as it is not well defined # for any real deployment. # # Related options: # # * This option depends on ``html5proxy_host`` and ``html5proxy_port`` options. # The access URL returned by the compute node must have the host # and port where the ``nova-spicehtml5proxy`` service is listening. # (uri value) #html5proxy_base_url = http://127.0.0.1:6082/spice_auto.html # # The address where the SPICE server running on the instances should listen. # # Typically, the ``nova-spicehtml5proxy`` proxy client runs on the controller # node and connects over the private network to this address on the compute # node(s). # # Possible values: # # * IP address to listen on. # (string value) #server_listen = 127.0.0.1 # # The address used by ``nova-spicehtml5proxy`` client to connect to instance # console. # # Typically, the ``nova-spicehtml5proxy`` proxy client runs on the # controller node and connects over the private network to this address on the # compute node(s). # # Possible values: # # * Any valid IP address on the compute node. # # Related options: # # * This option depends on the ``server_listen`` option. 
# The proxy client must be able to access the address specified in # ``server_listen`` using the value of this option. # (string value) #server_proxyclient_address = 127.0.0.1 # # A keyboard layout which is supported by the underlying hypervisor on this # node. # # Possible values: # * This is usually an 'IETF language tag' (default is 'en-us'). If you # use QEMU as hypervisor, you should find the list of supported keyboard # layouts at /usr/share/qemu/keymaps. # (string value) #keymap = en-us # # IP address or a hostname on which the ``nova-spicehtml5proxy`` service # listens for incoming requests. # # Related options: # # * This option depends on the ``html5proxy_base_url`` option. # The ``nova-spicehtml5proxy`` service must be listening on a host that is # accessible from the HTML5 client. # (unknown value) #html5proxy_host = 0.0.0.0 # # Port on which the ``nova-spicehtml5proxy`` service listens for incoming # requests. # # Related options: # # * This option depends on the ``html5proxy_base_url`` option. # The ``nova-spicehtml5proxy`` service must be listening on a port that is # accessible from the HTML5 client. # (port value) # Minimum value: 0 # Maximum value: 65535 #html5proxy_port = 6082 [upgrade_levels] # # upgrade_levels options are used to set version cap for RPC # messages sent between different nova services. # # By default all services send messages using the latest version # they know about. # # The compute upgrade level is an important part of rolling upgrades # where old and new nova-compute services run side by side. # # The other options can largely be ignored, and are only kept to # help with a possible future backport issue. # # From nova.conf # # # Compute RPC API version cap. # # By default, we always send messages using the most recent version # the client knows about. # # Where you have old and new compute services running, you should set # this to the lowest deployed version. This is to guarantee that all # services never send messages that one of the compute nodes can't # understand. Note that we only support upgrading from release N to # release N+1. # # Set this option to "auto" if you want to let the compute RPC module # automatically determine what version to use based on the service # versions in the deployment. # # Possible values: # # * By default send the latest version the client knows about # * 'auto': Automatically determines what version to use based on # the service versions in the deployment. # * A string representing a version number in the format 'N.N'; # for example, possible values might be '1.12' or '2.0'. # * An OpenStack release name, in lower case, such as 'mitaka' or # 'liberty'. 
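#
# For example (release name is illustrative), while nova-compute services on
# the old release are still running during a rolling upgrade one would pin
#
#   compute = pike
#
# or set compute = auto to let Nova derive the cap from the deployed service
# versions, and then remove the cap once every node runs the new release.
#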
# (string value) #compute = # Cells RPC API version cap (string value) #cells = # Intercell RPC API version cap (string value) #intercell = # Cert RPC API version cap (string value) #cert = # Scheduler RPC API version cap (string value) #scheduler = # Conductor RPC API version cap (string value) #conductor = # Console RPC API version cap (string value) #console = # Consoleauth RPC API version cap (string value) #consoleauth = # Network RPC API version cap (string value) #network = # Base API RPC API version cap (string value) #baseapi = [vault] # # From nova.conf # # root token for vault (string value) #root_token_id = # Use this endpoint to connect to Vault, for example: "http://127.0.0.1:8200" # (string value) #vault_url = http://127.0.0.1:8200 # Absolute path to ca cert file (string value) #ssl_ca_crt_file = # SSL Enabled/Disabled (boolean value) #use_ssl = false [vendordata_dynamic_auth] # # Options within this group control the authentication of the vendordata # subsystem of the metadata API server (and config drive) with external systems. # # From nova.conf # # PEM encoded Certificate Authority to use when verifying HTTPs connections. # (string value) #cafile = # PEM encoded client certificate cert file (string value) #certfile = # PEM encoded client certificate key file (string value) #keyfile = # Verify HTTPS connections. (boolean value) #insecure = false # Timeout value for http requests (integer value) #timeout = # Authentication type to load (string value) # Deprecated group/name - [vendordata_dynamic_auth]/auth_plugin #auth_type = # Config Section from which to load plugin specific options (string value) #auth_section = # Authentication URL (string value) #auth_url = # Scope for system operations (string value) #system_scope = # Domain ID to scope to (string value) #domain_id = # Domain name to scope to (string value) #domain_name = # Project ID to scope to (string value) #project_id = # Project name to scope to (string value) #project_name = # Domain ID containing project (string value) #project_domain_id = # Domain name containing project (string value) #project_domain_name = # Trust ID (string value) #trust_id = # Optional domain ID to use with v3 and v2 parameters. It will be used for both # the user and project domain in v3 and ignored in v2 authentication. (string # value) #default_domain_id = # Optional domain name to use with v3 API and v2 parameters. It will be used for # both the user and project domain in v3 and ignored in v2 authentication. # (string value) #default_domain_name = # User ID (string value) #user_id = # Username (string value) # Deprecated group/name - [vendordata_dynamic_auth]/user_name #username = # User's domain id (string value) #user_domain_id = # User's domain name (string value) #user_domain_name = # User's password (string value) #password = # Tenant ID (string value) #tenant_id = # Tenant Name (string value) #tenant_name = [vmware] # # Related options: # Following options must be set in order to launch VMware-based # virtual machines. # # * compute_driver: Must use vmwareapi.VMwareVCDriver. # * vmware.host_username # * vmware.host_password # * vmware.cluster_name # # From nova.conf # # # This option specifies the physical ethernet adapter name for VLAN # networking. # # Set the vlan_interface configuration option to match the ESX host # interface that handles VLAN-tagged VM traffic. 
# # Possible values: # # * Any valid string representing VLAN interface name # (string value) #vlan_interface = vmnic0 # # This option should be configured only when using the NSX-MH Neutron # plugin. This is the name of the integration bridge on the ESXi server # or host. This should not be set for any other Neutron plugin. Hence # the default value is not set. # # Possible values: # # * Any valid string representing the name of the integration bridge # (string value) #integration_bridge = # # Set this value if affected by an increased network latency causing # repeated characters when typing in a remote console. # (integer value) # Minimum value: 0 #console_delay_seconds = # # Identifies the remote system where the serial port traffic will # be sent. # # This option adds a virtual serial port which sends console output to # a configurable service URI. At the service URI address there will be # virtual serial port concentrator that will collect console logs. # If this is not set, no serial ports will be added to the created VMs. # # Possible values: # # * Any valid URI # (string value) #serial_port_service_uri = # # Identifies a proxy service that provides network access to the # serial_port_service_uri. # # Possible values: # # * Any valid URI (The scheme is 'telnet' or 'telnets'.) # # Related options: # This option is ignored if serial_port_service_uri is not specified. # * serial_port_service_uri # (uri value) #serial_port_proxy_uri = # # Specifies the directory where the Virtual Serial Port Concentrator is # storing console log files. It should match the 'serial_log_dir' config # value of VSPC. # (string value) #serial_log_dir = /opt/vmware/vspc # # Hostname or IP address for connection to VMware vCenter host. (unknown value) #host_ip = # Port for connection to VMware vCenter host. (port value) # Minimum value: 0 # Maximum value: 65535 #host_port = 443 # Username for connection to VMware vCenter host. (string value) #host_username = # Password for connection to VMware vCenter host. (string value) #host_password = # # Specifies the CA bundle file to be used in verifying the vCenter # server certificate. # (string value) #ca_file = # # If true, the vCenter server certificate is not verified. If false, # then the default CA truststore is used for verification. # # Related options: # * ca_file: This option is ignored if "ca_file" is set. # (boolean value) #insecure = false # Name of a VMware Cluster ComputeResource. (string value) #cluster_name = # # Regular expression pattern to match the name of datastore. # # The datastore_regex setting specifies the datastores to use with # Compute. For example, datastore_regex="nas.*" selects all the data # stores that have a name starting with "nas". # # NOTE: If no regex is given, it just picks the datastore with the # most freespace. # # Possible values: # # * Any matching regular expression to a datastore must be given # (string value) #datastore_regex = # # Time interval in seconds to poll remote tasks invoked on # VMware VC server. # (floating point value) #task_poll_interval = 0.5 # # Number of times VMware vCenter server API must be retried on connection # failures, e.g. socket error, etc. # (integer value) # Minimum value: 0 #api_retry_count = 10 # # This option specifies VNC starting port. # # Every VM created by ESX host has an option of enabling VNC client # for remote connection. Above option 'vnc_port' helps you to set # default starting port for the VNC client. 
# # Possible values: # # * Any valid port number within 5900 -(5900 + vnc_port_total) # # Related options: # Below options should be set to enable VNC client. # * vnc.enabled = True # * vnc_port_total # (port value) # Minimum value: 0 # Maximum value: 65535 #vnc_port = 5900 # # Total number of VNC ports. # (integer value) # Minimum value: 0 #vnc_port_total = 10000 # # This option enables/disables the use of linked clone. # # The ESX hypervisor requires a copy of the VMDK file in order to boot # up a virtual machine. The compute driver must download the VMDK via # HTTP from the OpenStack Image service to a datastore that is visible # to the hypervisor and cache it. Subsequent virtual machines that need # the VMDK use the cached version and don't have to copy the file again # from the OpenStack Image service. # # If set to false, even with a cached VMDK, there is still a copy # operation from the cache location to the hypervisor file directory # in the shared datastore. If set to true, the above copy operation # is avoided as it creates copy of the virtual machine that shares # virtual disks with its parent VM. # (boolean value) #use_linked_clone = true # # This option sets the http connection pool size # # The connection pool size is the maximum number of connections from nova to # vSphere. It should only be increased if there are warnings indicating that # the connection pool is full, otherwise, the default should suffice. # (integer value) # Minimum value: 10 #connection_pool_size = 10 # # This option enables or disables storage policy based placement # of instances. # # Related options: # # * pbm_default_policy # (boolean value) #pbm_enabled = false # # This option specifies the PBM service WSDL file location URL. # # Setting this will disable storage policy based placement # of instances. # # Possible values: # # * Any valid file path # e.g file:///opt/SDK/spbm/wsdl/pbmService.wsdl # (string value) #pbm_wsdl_location = # # This option specifies the default policy to be used. # # If pbm_enabled is set and there is no defined storage policy for the # specific request, then this policy will be used. # # Possible values: # # * Any valid storage policy such as VSAN default storage policy # # Related options: # # * pbm_enabled # (string value) #pbm_default_policy = # # This option specifies the limit on the maximum number of objects to # return in a single result. # # A positive value will cause the operation to suspend the retrieval # when the count of objects reaches the specified limit. The server may # still limit the count to something less than the configured value. # Any remaining objects may be retrieved with additional requests. # (integer value) # Minimum value: 0 #maximum_objects = 100 # # This option adds a prefix to the folder where cached images are stored # # This is not the full path - just a folder prefix. This should only be # used when a datastore cache is shared between compute nodes. # # Note: This should only be used when the compute nodes are running on same # host or they have a shared file system. # # Possible values: # # * Any string representing the cache prefix to the folder # (string value) #cache_prefix = [vnc] # # Virtual Network Computer (VNC) can be used to provide remote desktop # console access to instances for tenants and/or administrators. #pavlos --start enabled = true server_listen = 0.0.0.0 server_proxyclient_address = $my_ip novncproxy_base_url = http://controller:6080/vnc_auto.html #pavlos --end # # From nova.conf # # # Enable VNC related features. 
# # Guests will get created with graphical devices to support this. Clients # (for example Horizon) can then establish a VNC connection to the guest. # (boolean value) # Deprecated group/name - [DEFAULT]/vnc_enabled #enabled = true # # Keymap for VNC. # # The keyboard mapping (keymap) determines which keyboard layout a VNC # session should use by default. # # Possible values: # # * A keyboard layout which is supported by the underlying hypervisor on # this node. This is usually an 'IETF language tag' (for example # 'en-us'). If you use QEMU as hypervisor, you should find the list # of supported keyboard layouts at ``/usr/share/qemu/keymaps``. # (string value) # Deprecated group/name - [DEFAULT]/vnc_keymap #keymap = en-us # # The IP address or hostname on which an instance should listen to for # incoming VNC connection requests on this node. # (unknown value) # Deprecated group/name - [DEFAULT]/vncserver_listen # Deprecated group/name - [vnc]/vncserver_listen #server_listen = 127.0.0.1 # # Private, internal IP address or hostname of VNC console proxy. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. # # This option sets the private address to which proxy clients, such as # ``nova-xvpvncproxy``, should connect to. # (unknown value) # Deprecated group/name - [DEFAULT]/vncserver_proxyclient_address # Deprecated group/name - [vnc]/vncserver_proxyclient_address #server_proxyclient_address = 127.0.0.1 # # Public address of noVNC VNC console proxy. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. noVNC provides # VNC support through a websocket-based client. # # This option sets the public base URL to which client systems will # connect. noVNC clients can use this address to connect to the noVNC # instance and, by extension, the VNC sessions. # # Related options: # # * novncproxy_host # * novncproxy_port # (uri value) #novncproxy_base_url = http://127.0.0.1:6080/vnc_auto.html # # IP address or hostname that the XVP VNC console proxy should bind to. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. Xen provides # the Xenserver VNC Proxy, or XVP, as an alternative to the # websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, # XVP clients are Java-based. # # This option sets the private address to which the XVP VNC console proxy # service should bind to. # # Related options: # # * xvpvncproxy_port # * xvpvncproxy_base_url # (unknown value) #xvpvncproxy_host = 0.0.0.0 # # Port that the XVP VNC console proxy should bind to. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. Xen provides # the Xenserver VNC Proxy, or XVP, as an alternative to the # websocket-based noVNC proxy used by Libvirt. In contrast to noVNC, # XVP clients are Java-based. # # This option sets the private port to which the XVP VNC console proxy # service should bind to. # # Related options: # # * xvpvncproxy_host # * xvpvncproxy_base_url # (port value) # Minimum value: 0 # Maximum value: 65535 #xvpvncproxy_port = 6081 # # Public URL address of XVP VNC console proxy. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. Xen provides # the Xenserver VNC Proxy, or XVP, as an alternative to the # websocket-based noVNC proxy used by Libvirt. 
In contrast to noVNC, # XVP clients are Java-based. # # This option sets the public base URL to which client systems will # connect. XVP clients can use this address to connect to the XVP # instance and, by extension, the VNC sessions. # # Related options: # # * xvpvncproxy_host # * xvpvncproxy_port # (uri value) #xvpvncproxy_base_url = http://127.0.0.1:6081/console # # IP address that the noVNC console proxy should bind to. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. noVNC provides # VNC support through a websocket-based client. # # This option sets the private address to which the noVNC console proxy # service should bind to. # # Related options: # # * novncproxy_port # * novncproxy_base_url # (string value) #novncproxy_host = 0.0.0.0 # # Port that the noVNC console proxy should bind to. # # The VNC proxy is an OpenStack component that enables compute service # users to access their instances through VNC clients. noVNC provides # VNC support through a websocket-based client. # # This option sets the private port to which the noVNC console proxy # service should bind to. # # Related options: # # * novncproxy_host # * novncproxy_base_url # (port value) # Minimum value: 0 # Maximum value: 65535 #novncproxy_port = 6080 # # The authentication schemes to use with the compute node. # # Control what RFB authentication schemes are permitted for connections between # the proxy and the compute host. If multiple schemes are enabled, the first # matching scheme will be used, thus the strongest schemes should be listed # first. # # Possible values: # # * ``none``: allow connection without authentication # * ``vencrypt``: use VeNCrypt authentication scheme # # Related options: # # * ``[vnc]vencrypt_client_key``, ``[vnc]vencrypt_client_cert``: must also be # set # (list value) #auth_schemes = none # The path to the client certificate PEM file (for x509) # # The fully qualified path to a PEM file containing the private key which the # VNC # proxy server presents to the compute node during VNC authentication. # # Related options: # # * ``vnc.auth_schemes``: must include ``vencrypt`` # * ``vnc.vencrypt_client_cert``: must also be set # (string value) #vencrypt_client_key = # The path to the client key file (for x509) # # The fully qualified path to a PEM file containing the x509 certificate which # the VNC proxy server presents to the compute node during VNC authentication. # # Realted options: # # * ``vnc.auth_schemes``: must include ``vencrypt`` # * ``vnc.vencrypt_client_key``: must also be set # (string value) #vencrypt_client_cert = # The path to the CA certificate PEM file # # The fully qualified path to a PEM file containing one or more x509 # certificates # for the certificate authorities used by the compute node VNC server. # # Related options: # # * ``vnc.auth_schemes``: must include ``vencrypt`` # (string value) #vencrypt_ca_certs = [workarounds] # # A collection of workarounds used to mitigate bugs or issues found in system # tools (e.g. Libvirt or QEMU) or Nova itself under certain conditions. These # should only be enabled in exceptional circumstances. All options are linked # against bug IDs, where more information on the issue can be found. # # From nova.conf # # # Use sudo instead of rootwrap. # # Allow fallback to sudo for performance reasons. 
# # For more information, refer to the bug report: # # https://bugs.launchpad.net/nova/+bug/1415106 # # Possible values: # # * True: Use sudo instead of rootwrap # * False: Use rootwrap as usual # # Interdependencies to other options: # # * Any options that affect 'rootwrap' will be ignored. # (boolean value) #disable_rootwrap = false # # Disable live snapshots when using the libvirt driver. # # Live snapshots allow the snapshot of the disk to happen without an # interruption to the guest, using coordination with a guest agent to # quiesce the filesystem. # # When using libvirt 1.2.2 live snapshots fail intermittently under load # (likely related to concurrent libvirt/qemu operations). This config # option provides a mechanism to disable live snapshot, in favor of cold # snapshot, while this is resolved. Cold snapshot causes an instance # outage while the guest is going through the snapshotting process. # # For more information, refer to the bug report: # # https://bugs.launchpad.net/nova/+bug/1334398 # # Possible values: # # * True: Live snapshot is disabled when using libvirt # * False: Live snapshots are always used when snapshotting (as long as # there is a new enough libvirt and the backend storage supports it) # (boolean value) #disable_libvirt_livesnapshot = false # # Enable handling of events emitted from compute drivers. # # Many compute drivers emit lifecycle events, which are events that occur when, # for example, an instance is starting or stopping. If the instance is going # through task state changes due to an API operation, like resize, the events # are ignored. # # This is an advanced feature which allows the hypervisor to signal to the # compute service that an unexpected state change has occurred in an instance # and that the instance can be shutdown automatically. Unfortunately, this can # race in some conditions, for example in reboot operations or when the compute # service or when host is rebooted (planned or due to an outage). If such races # are common, then it is advisable to disable this feature. # # Care should be taken when this feature is disabled and # 'sync_power_state_interval' is set to a negative value. In this case, any # instances that get out of sync between the hypervisor and the Nova database # will have to be synchronized manually. # # For more information, refer to the bug report: # # https://bugs.launchpad.net/bugs/1444630 # # Interdependencies to other options: # # * If ``sync_power_state_interval`` is negative and this feature is disabled, # then instances that get out of sync between the hypervisor and the Nova # database will have to be synchronized manually. # (boolean value) #handle_virt_lifecycle_events = true # # Disable the server group policy check upcall in compute. # # In order to detect races with server group affinity policy, the compute # service attempts to validate that the policy was not violated by the # scheduler. It does this by making an upcall to the API database to list # the instances in the server group for one that it is booting, which violates # our api/cell isolation goals. Eventually this will be solved by proper # affinity # guarantees in the scheduler and placement service, but until then, this late # check is needed to ensure proper affinity policy. # # Operators that desire api/cell isolation over this check should # enable this flag, which will avoid making that upcall from compute. 
# # Related options: # # * [filter_scheduler]/track_instance_changes also relies on upcalls from the # compute service to the scheduler service. # (boolean value) #disable_group_policy_check_upcall = false # # Ensure the instance directory is removed during clean up when using rbd. # # When enabled this workaround will ensure that the instance directory is always # removed during cleanup on hosts using ``[libvirt]/images_type=rbd``. This # avoids the following bugs with evacuation and revert resize clean up that lead # to the instance directory remaining on the host: # # https://bugs.launchpad.net/nova/+bug/1414895 # # https://bugs.launchpad.net/nova/+bug/1761062 # # Both of these bugs can then result in ``DestinationDiskExists`` errors being # raised if the instances ever attempt to return to the host. # # .. warning:: Operators will need to ensure that the instance directory itself, # specified by ``[DEFAULT]/instances_path``, is not shared between computes # before enabling this workaround otherwise the console.log, kernels, ramdisks # and any additional files being used by the running instance will be lost. # # Related options: # # * ``compute_driver`` (libvirt) # * ``[libvirt]/images_type`` (rbd) # * ``instances_path`` # (boolean value) #ensure_libvirt_rbd_instance_dir_cleanup = false # # Enable live migration of instances with NUMA topologies. # # Live migration of instances with NUMA topologies is disabled by default # when using the libvirt driver. This includes live migration of instances with # CPU pinning or hugepages. CPU pinning and huge page information for such # instances is not currently re-calculated, as noted in bug #1289064. This # means that if instances were already present on the destination host, the # migrated instance could be placed on the same dedicated cores as these # instances or use hugepages allocated for another instance. Alternately, if the # host platforms were not homogeneous, the instance could be assigned to # non-existent cores or be inadvertently split across host NUMA nodes. # # Despite these known issues, there may be cases where live migration is # necessary. By enabling this option, operators that are aware of the issues and # are willing to manually work around them can enable live migration support for # these instances. # # Related options: # # * ``compute_driver``: Only the libvirt driver is affected. # (boolean value) #enable_numa_live_migration = false [wsgi] # # Options under this group are used to configure WSGI (Web Server Gateway # Interface). WSGI is used to serve API requests. # # From nova.conf # # # This option represents a file name for the paste.deploy config for nova-api. # # Possible values: # # * A string representing file name for the paste.deploy config. # (string value) #api_paste_config = api-paste.ini # DEPRECATED: # It represents a python format string that is used as the template to generate # log lines. The following values can be formatted into it: client_ip, # date_time, request_line, status_code, body_length, wall_seconds. # # This option is used for building custom request loglines when running # nova-api under eventlet. If used under uwsgi or apache, this option # has no effect. # # Possible values: # # * '%(client_ip)s "%(request_line)s" status: %(status_code)s' # 'len: %(body_length)s time: %(wall_seconds).7f' (default) # * Any formatted string formed by specific values. # (string value) # This option is deprecated for removal since 16.0.0. # Its value may be silently ignored in the future. 
# Reason: # This option only works when running nova-api under eventlet, and # encodes very eventlet specific pieces of information. Starting in Pike # the preferred model for running nova-api is under uwsgi or apache # mod_wsgi. #wsgi_log_format = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f # # This option specifies the HTTP header used to determine the protocol scheme # for the original request, even if it was removed by a SSL terminating proxy. # # Possible values: # # * None (default) - the request scheme is not influenced by any HTTP headers # * Valid HTTP header, like HTTP_X_FORWARDED_PROTO # # WARNING: Do not set this unless you know what you are doing. # # Make sure ALL of the following are true before setting this (assuming the # values from the example above): # * Your API is behind a proxy. # * Your proxy strips the X-Forwarded-Proto header from all incoming requests. # In other words, if end users include that header in their requests, the # proxy # will discard it. # * Your proxy sets the X-Forwarded-Proto header and sends it to API, but only # for requests that originally come in via HTTPS. # # If any of those are not true, you should keep this setting set to None. # # (string value) #secure_proxy_ssl_header = # # This option allows setting path to the CA certificate file that should be used # to verify connecting clients. # # Possible values: # # * String representing path to the CA certificate file. # # Related options: # # * enabled_ssl_apis # (string value) #ssl_ca_file = # # This option allows setting path to the SSL certificate of API server. # # Possible values: # # * String representing path to the SSL certificate. # # Related options: # # * enabled_ssl_apis # (string value) #ssl_cert_file = # # This option specifies the path to the file where SSL private key of API # server is stored when SSL is in effect. # # Possible values: # # * String representing path to the SSL private key. # # Related options: # # * enabled_ssl_apis # (string value) #ssl_key_file = # # This option sets the value of TCP_KEEPIDLE in seconds for each server socket. # It specifies the duration of time to keep connection active. TCP generates a # KEEPALIVE transmission for an application that requests to keep connection # active. Not supported on OS X. # # Related options: # # * keep_alive # (integer value) # Minimum value: 0 #tcp_keepidle = 600 # # This option specifies the size of the pool of greenthreads used by wsgi. # It is possible to limit the number of concurrent connections using this # option. # (integer value) # Minimum value: 0 # Deprecated group/name - [DEFAULT]/wsgi_default_pool_size #default_pool_size = 1000 # # This option specifies the maximum line size of message headers to be accepted. # max_header_line may need to be increased when using large tokens (typically # those generated by the Keystone v3 API with big service catalogs). # # Since TCP is a stream based protocol, in order to reuse a connection, the HTTP # has to have a way to indicate the end of the previous response and beginning # of the next. Hence, in a keep_alive case, all messages must have a # self-defined message length. # (integer value) # Minimum value: 0 #max_header_line = 16384 # # This option allows using the same TCP connection to send and receive multiple # HTTP requests/responses, as opposed to opening a new one for every single # request/response pair. HTTP keep-alive indicates HTTP connection reuse. 
# # Possible values: # # * True : reuse HTTP connection. # * False : closes the client socket connection explicitly. # # Related options: # # * tcp_keepidle # (boolean value) # Deprecated group/name - [DEFAULT]/wsgi_keep_alive #keep_alive = true # # This option specifies the timeout for client connections' socket operations. # If an incoming connection is idle for this number of seconds it will be # closed. It indicates timeout on individual read/writes on the socket # connection. To wait forever set to 0. # (integer value) # Minimum value: 0 #client_socket_timeout = 900 [xenserver] # # XenServer options are used when the compute_driver is set to use # XenServer (compute_driver=xenapi.XenAPIDriver). # # Must specify connection_url, connection_password and ovs_integration_bridge to # use compute_driver=xenapi.XenAPIDriver. # # From nova.conf # # # Number of seconds to wait for agent's reply to a request. # # Nova configures/performs certain administrative actions on a server with the # help of an agent that's installed on the server. The communication between # Nova and the agent is achieved via sharing messages, called records, over # xenstore, a shared storage across all the domains on a Xenserver host. # Operations performed by the agent on behalf of nova are: 'version',' # key_init', # 'password','resetnetwork','inject_file', and 'agentupdate'. # # To perform one of the above operations, the xapi 'agent' plugin writes the # command and its associated parameters to a certain location known to the # domain # and awaits response. On being notified of the message, the agent performs # appropriate actions on the server and writes the result back to xenstore. This # result is then read by the xapi 'agent' plugin to determine the # success/failure # of the operation. # # This config option determines how long the xapi 'agent' plugin shall wait to # read the response off of xenstore for a given request/command. If the agent on # the instance fails to write the result in this time period, the operation is # considered to have timed out. # # Related options: # # * ``agent_version_timeout`` # * ``agent_resetnetwork_timeout`` # # (integer value) # Minimum value: 0 #agent_timeout = 30 # # Number of seconds to wait for agent't reply to version request. # # This indicates the amount of time xapi 'agent' plugin waits for the agent to # respond to the 'version' request specifically. The generic timeout for agent # communication ``agent_timeout`` is ignored in this case. # # During the build process the 'version' request is used to determine if the # agent is available/operational to perform other requests such as # 'resetnetwork', 'password', 'key_init' and 'inject_file'. If the 'version' # call # fails, the other configuration is skipped. So, this configuration option can # also be interpreted as time in which agent is expected to be fully # operational. # (integer value) # Minimum value: 0 #agent_version_timeout = 300 # # Number of seconds to wait for agent's reply to resetnetwork # request. # # This indicates the amount of time xapi 'agent' plugin waits for the agent to # respond to the 'resetnetwork' request specifically. The generic timeout for # agent communication ``agent_timeout`` is ignored in this case. # (integer value) # Minimum value: 0 #agent_resetnetwork_timeout = 60 # # Path to locate guest agent on the server. # # Specifies the path in which the XenAPI guest agent should be located. If the # agent is present, network configuration is not injected into the image. 
# # Related options: # # For this option to have an effect: # * ``flat_injected`` should be set to ``True`` # * ``compute_driver`` should be set to ``xenapi.XenAPIDriver`` # # (string value) #agent_path = usr/sbin/xe-update-networking # # Disables the use of XenAPI agent. # # This configuration option suggests whether the use of agent should be enabled # or not regardless of what image properties are present. Image properties have # an effect only when this is set to ``True``. Read description of config option # ``use_agent_default`` for more information. # # Related options: # # * ``use_agent_default`` # # (boolean value) #disable_agent = false # # Whether or not to use the agent by default when its usage is enabled but not # indicated by the image. # # The use of XenAPI agent can be disabled altogether using the configuration # option ``disable_agent``. However, if it is not disabled, the use of an agent # can still be controlled by the image in use through one of its properties, # ``xenapi_use_agent``. If this property is either not present or specified # incorrectly on the image, the use of agent is determined by this configuration # option. # # Note that if this configuration is set to ``True`` when the agent is not # present, the boot times will increase significantly. # # Related options: # # * ``disable_agent`` # # (boolean value) #use_agent_default = false # Timeout in seconds for XenAPI login. (integer value) # Minimum value: 0 #login_timeout = 10 # # Maximum number of concurrent XenAPI connections. # # In nova, multiple XenAPI requests can happen at a time. # Configuring this option will parallelize access to the XenAPI # session, which allows you to make concurrent XenAPI connections. # (integer value) # Minimum value: 1 #connection_concurrent = 5 # # Cache glance images locally. # # The value for this option must be chosen from the choices listed # here. Configuring a value other than these will default to 'all'. # # Note: There is nothing that deletes these images. # # Possible values: # # * `all`: will cache all images. # * `some`: will only cache images that have the # image_property `cache_in_nova=True`. # * `none`: turns off caching entirely. # (string value) # Possible values: # all - # some - # none - #cache_images = all # # Compression level for images. # # By setting this option we can configure the gzip compression level. # This option sets GZIP environment variable before spawning tar -cz # to force the compression level. It defaults to none, which means the # GZIP environment variable is not set and the default (usually -6) # is used. # # Possible values: # # * Range is 1-9, e.g., 9 for gzip -9, 9 being most # compressed but most CPU intensive on dom0. # * Any values out of this range will default to None. # (integer value) # Minimum value: 1 # Maximum value: 9 #image_compression_level = # Default OS type used when uploading an image to glance (string value) #default_os_type = linux # Time in secs to wait for a block device to be created (integer value) # Minimum value: 1 #block_device_creation_timeout = 10 # # Maximum size in bytes of kernel or ramdisk images. # # Specifying the maximum size of kernel or ramdisk will avoid copying # large files to dom0 and fill up /boot/guest. # (integer value) #max_kernel_ramdisk_size = 16777216 # # Filter for finding the SR to be used to install guest instances on. # # Possible values: # # * To use the Local Storage in default XenServer/XCP installations # set this flag to other-config:i18n-key=local-storage. 
# * To select an SR with a different matching criteria, you could # set it to other-config:my_favorite_sr=true. # * To fall back on the Default SR, as displayed by XenCenter, # set this flag to: default-sr:true. # (string value) #sr_matching_filter = default-sr:true # # Whether to use sparse_copy for copying data on a resize down. # (False will use standard dd). This speeds up resizes down # considerably since large runs of zeros won't have to be rsynced. # (boolean value) #sparse_copy = true # # Maximum number of retries to unplug VBD. # If set to 0, should try once, no retries. # (integer value) # Minimum value: 0 #num_vbd_unplug_retries = 10 # # Name of network to use for booting iPXE ISOs. # # An iPXE ISO is a specially crafted ISO which supports iPXE booting. # This feature gives a means to roll your own image. # # By default this option is not set. Enable this option to # boot an iPXE ISO. # # Related Options: # # * `ipxe_boot_menu_url` # * `ipxe_mkisofs_cmd` # (string value) #ipxe_network_name = # # URL to the iPXE boot menu. # # An iPXE ISO is a specially crafted ISO which supports iPXE booting. # This feature gives a means to roll your own image. # # By default this option is not set. Enable this option to # boot an iPXE ISO. # # Related Options: # # * `ipxe_network_name` # * `ipxe_mkisofs_cmd` # (string value) #ipxe_boot_menu_url = # # Name and optionally path of the tool used for ISO image creation. # # An iPXE ISO is a specially crafted ISO which supports iPXE booting. # This feature gives a means to roll your own image. # # Note: By default `mkisofs` is not present in the Dom0, so the # package can either be manually added to Dom0 or include the # `mkisofs` binary in the image itself. # # Related Options: # # * `ipxe_network_name` # * `ipxe_boot_menu_url` # (string value) #ipxe_mkisofs_cmd = mkisofs # # URL for connection to XenServer/Xen Cloud Platform. A special value # of unix://local can be used to connect to the local unix socket. # # Possible values: # # * Any string that represents a URL. The connection_url is # generally the management network IP address of the XenServer. # * This option must be set if you chose the XenServer driver. # (string value) #connection_url = # Username for connection to XenServer/Xen Cloud Platform (string value) #connection_username = root # Password for connection to XenServer/Xen Cloud Platform (string value) #connection_password = # # The interval used for polling of coalescing vhds. # # This is the interval after which the task of coalesce VHD is # performed, until it reaches the max attempts that is set by # vhd_coalesce_max_attempts. # # Related options: # # * `vhd_coalesce_max_attempts` # (floating point value) # Minimum value: 0 #vhd_coalesce_poll_interval = 5.0 # # Ensure compute service is running on host XenAPI connects to. # This option must be set to false if the 'independent_compute' # option is set to true. # # Possible values: # # * Setting this option to true will make sure that compute service # is running on the same host that is specified by connection_url. # * Setting this option to false, doesn't perform the check. # # Related options: # # * `independent_compute` # (boolean value) #check_host = true # # Max number of times to poll for VHD to coalesce. # # This option determines the maximum number of attempts that can be # made for coalescing the VHD before giving up. 
# # Related opitons: # # * `vhd_coalesce_poll_interval` # (integer value) # Minimum value: 0 #vhd_coalesce_max_attempts = 20 # Base path to the storage repository on the XenServer host. (string value) #sr_base_path = /var/run/sr-mount # # The iSCSI Target Host. # # This option represents the hostname or ip of the iSCSI Target. # If the target host is not present in the connection information from # the volume provider then the value from this option is taken. # # Possible values: # # * Any string that represents hostname/ip of Target. # (unknown value) #target_host = # # The iSCSI Target Port. # # This option represents the port of the iSCSI Target. If the # target port is not present in the connection information from the # volume provider then the value from this option is taken. # (port value) # Minimum value: 0 # Maximum value: 65535 #target_port = 3260 # # Used to prevent attempts to attach VBDs locally, so Nova can # be run in a VM on a different host. # # Related options: # # * ``CONF.flat_injected`` (Must be False) # * ``CONF.xenserver.check_host`` (Must be False) # * ``CONF.default_ephemeral_format`` (Must be unset or 'ext3') # * Joining host aggregates (will error if attempted) # * Swap disks for Windows VMs (will error if attempted) # * Nova-based auto_configure_disk (will error if attempted) # (boolean value) #independent_compute = false # # Wait time for instances to go to running state. # # Provide an integer value representing time in seconds to set the # wait time for an instance to go to running state. # # When a request to create an instance is received by nova-api and # communicated to nova-compute, the creation of the instance occurs # through interaction with Xen via XenAPI in the compute node. Once # the node on which the instance(s) are to be launched is decided by # nova-schedule and the launch is triggered, a certain amount of wait # time is involved until the instance(s) can become available and # 'running'. This wait time is defined by running_timeout. If the # instances do not go to running state within this specified wait # time, the launch expires and the instance(s) are set to 'error' # state. # (integer value) # Minimum value: 0 #running_timeout = 60 # DEPRECATED: # The XenAPI VIF driver using XenServer Network APIs. # # Provide a string value representing the VIF XenAPI vif driver to use for # plugging virtual network interfaces. # # Xen configuration uses bridging within the backend domain to allow # all VMs to appear on the network as individual hosts. Bridge # interfaces are used to create a XenServer VLAN network in which # the VIFs for the VM instances are plugged. If no VIF bridge driver # is plugged, the bridge is not made available. This configuration # option takes in a value for the VIF driver. # # Possible values: # # * nova.virt.xenapi.vif.XenAPIOpenVswitchDriver (default) # * nova.virt.xenapi.vif.XenAPIBridgeDriver (deprecated) # # Related options: # # * ``vlan_interface`` # * ``ovs_integration_bridge`` # (string value) # This option is deprecated for removal since 15.0.0. # Its value may be silently ignored in the future. # Reason: # There are only two in-tree vif drivers for XenServer. XenAPIBridgeDriver is # for # nova-network which is deprecated and XenAPIOpenVswitchDriver is for Neutron # which is the default configuration for Nova since the 15.0.0 Ocata release. In # the future the "use_neutron" configuration option will be used to determine # which vif driver to use. 
#vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver # # Dom0 plugin driver used to handle image uploads. # # Provide a string value representing a plugin driver required to # handle the image uploading to GlanceStore. # # Images, and snapshots from XenServer need to be uploaded to the data # store for use. image_upload_handler takes in a value for the Dom0 # plugin driver. This driver is then called to uplaod images to the # GlanceStore. # (string value) #image_upload_handler = nova.virt.xenapi.image.glance.GlanceStore # # Number of seconds to wait for SR to settle if the VDI # does not exist when first introduced. # # Some SRs, particularly iSCSI connections are slow to see the VDIs # right after they got introduced. Setting this option to a # time interval will make the SR to wait for that time period # before raising VDI not found exception. # (integer value) # Minimum value: 0 #introduce_vdi_retry_wait = 20 # # The name of the integration Bridge that is used with xenapi # when connecting with Open vSwitch. # # Note: The value of this config option is dependent on the # environment, therefore this configuration value must be set # accordingly if you are using XenAPI. # # Possible values: # # * Any string that represents a bridge name. # (string value) #ovs_integration_bridge = # # When adding new host to a pool, this will append a --force flag to the # command, forcing hosts to join a pool, even if they have different CPUs. # # Since XenServer version 5.6 it is possible to create a pool of hosts that have # different CPU capabilities. To accommodate CPU differences, XenServer limited # features it uses to determine CPU compatibility to only the ones that are # exposed by CPU and support for CPU masking was added. # Despite this effort to level differences between CPUs, it is still possible # that adding new host will fail, thus option to force join was introduced. # (boolean value) #use_join_force = true # # Publicly visible name for this console host. # # Possible values: # # * Current hostname (default) or any string representing hostname. # (string value) #console_public_hostname = [xvp] # # Configuration options for XVP. # # xvp (Xen VNC Proxy) is a proxy server providing password-protected VNC-based # access to the consoles of virtual machines hosted on Citrix XenServer. # # From nova.conf # # XVP conf template (string value) #console_xvp_conf_template = $pybasedir/nova/console/xvp.conf.template # Generated XVP conf file (string value) #console_xvp_conf = /etc/xvp.conf # XVP master process pid file (string value) #console_xvp_pid = /var/run/xvp.pid # XVP log file (string value) #console_xvp_log = /var/log/xvp.log # Port for XVP to multiplex VNC connections on (port value) # Minimum value: 0 # Maximum value: 65535 #console_xvp_multiplex_port = 5900 From radoslaw.piliszek at gmail.com Wed Nov 11 17:37:15 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 11 Nov 2020 18:37:15 +0100 Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service In-Reply-To: References: Message-ID: Thank you for the heads up. I write to confirm that Kolla has, for some time, not provided database credentials to nova-compute. 
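For anyone who wants to double-check their own deployment, a quick and
admittedly crude way to see whether a compute node's config still carries
DB credentials is something along these lines (the path is the default
packaged one; adjust it for containerized setups):

    grep -E '^(connection|slave_connection)' /etc/nova/nova.conf

On a compute node this should print nothing.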
-yoctozepto

On Wed, Nov 11, 2020 at 5:36 PM Balázs Gibizer wrote:
>
> Dear packagers and deployment engine developers,
>
> Since Icehouse nova-compute service does not need any database
> configuration as it uses the message bus to access data in the database
> via the conductor service. Also, the nova configuration guide states
> that the nova-compute service should not have the
> [api_database]connection config set. Having any DB credentials
> configured for the nova-compute is a security risk as well since that
> service runs close to the hypervisor. Since Rocky[1] nova-compute
> service fails if you configure API DB credentials and set upgrade_level
> config to 'auto'.
>
> Now we are proposing a patch[2] that makes nova-compute fail at startup
> if the [database]connection or the [api_database]connection is
> configured. We know that this breaks at least the rpm packaging, debian
> packaging, and puppet-nova. The problem there is that in an all-in-on
> deployment scenario the nova.conf file generated by these tools is
> shared between all the nova services and therefore nova-compute sees DB
> credentials. As a counter-example, devstack generates a separate
> nova-cpu.conf and passes that to the nova-compute service even in an
> all-in-on setup.
>
> The nova team would like to merge [2] during Wallaby but we are OK to
> delay the patch until Wallaby Milestone 2 so that the packagers and
> deployment tools can catch up. Please let us know if you are impacted
> and provide a way to track when you are ready with the modification
> that allows [2] to be merged.
>
> There was a long discussion on #openstack-nova today[3] around this
> topic. So you can find more detailed reasoning there[3].
>
> Cheers,
> gibi
>
> [1]
> https://github.com/openstack/nova/blob/dc93e3b510f53d5b2198c8edd22528f0c899617e/nova/compute/rpcapi.py#L441-L457
> [2] https://review.opendev.org/#/c/762176
> [3]
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T10:51:23
> --
> http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T14:40:51
>
>

From smooney at redhat.com Wed Nov 11 17:47:49 2020
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 11 Nov 2020 17:47:49 +0000
Subject: Question about the instance snapshot
In-Reply-To: <20201111154117.Horde.rS6VTlLeua6yUDh3vuci3HY@webmail.nde.ag>
References: <20201111154117.Horde.rS6VTlLeua6yUDh3vuci3HY@webmail.nde.ag>
Message-ID: <338088f61d60846718c3df3f117dd9adfdb28d32.camel@redhat.com>

On Wed, 2020-11-11 at 15:41 +0000, Eugen Block wrote:
> Hi,
>
> taking an instance snapshot only applies to the root disk, other
> attached disks are not included.

Yes, this is by design. Snapshots have only ever applied to the root disk;
volume snapshots can be done via the cinder API in a much more efficient
way.

> I don't have an explanation but I
> think it would be quite difficult to merge all contents from different
> disks into one image.

Well, not just difficult: it would be very inefficient to use one image,
and unexpected to have an image per volume. It is simply not in the scope
of the nova snapshot API to do anything other than the root disk. Volume
snapshots really should just be done on the backing store without
transferring any data to glance, using glance only to store the metadata.
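To illustrate the cinder-side approach, a per-volume snapshot through the
cinder API would look roughly like this (the volume and snapshot names are
made up, and --force is needed while the volume is attached to a running
instance):

    openstack volume snapshot create --force --volume my-data-volume my-data-volume-snap

The snapshot can later be turned into a fresh volume with
"openstack volume create --snapshot my-data-volume-snap new-volume" if
needed, all without pushing any data through glance.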
If we were to have qemu snapshot the volume, it would have to potentially
copy a lot of data or invoke the existing volume snapshot APIs, and we have
said in the past that nova should not proxy calls to other services that
can be orchestrated externally.

> If this is about backup and not creating new
> base images and your instances are volume-based you could create
> cinder snapshots of those volumes, for example by creating
> consistency-groups to have all volumes in a consistent state.
>
> If this is more about creating new base images rather than backups and
> you have an instance with its filesystem distributed across multiple
> volumes it would probably be better to either move everything to a
> single volume (easy when you're using LVM) or resize that instance
> with a larger disk size. There are several ways, it depends on your
> actual requirement.
>
> Regards,
> Eugen
>
>
>
> Zitat von Henry lol :
>
> > Hello, everyone
> >
> > I'm wondering whether the snapshot from the instance saves all disks
> > attached to the instance or only the main(=first) disk, because I can't
> > find any clear description for it.

Yes, it is only for the root disk.

> >
> > If the latter is true, should I detach all disks except for the main from
> > the instance before taking a snapshot, and why doesn't it support all
> > attached disks?
> >
> > Thanks
> > Sincerely,
> >
> >

From zigo at debian.org Wed Nov 11 17:52:34 2020
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 11 Nov 2020 18:52:34 +0100
Subject: [all] Fixtures & monkey patching issue with Python 3.9
Message-ID: <674bfdfb-495a-19e0-a306-b96aece85b14@debian.org>

Hi,

Please see these:
https://bugs.debian.org/973239
https://github.com/testing-cabal/fixtures/issues/44

Is this affecting OpenStack? Could I just ignore these tests? Or is there
an easy solution?

Cheers,

Thomas Goirand (zigo)

From allison at openstack.org Wed Nov 11 20:06:52 2020
From: allison at openstack.org (Allison Price)
Date: Wed, 11 Nov 2020 14:06:52 -0600
Subject: Victoria Release Community Meeting
In-Reply-To: <20201111144618.ndckzezqgbb2xble@yuggoth.org>
References: <1605050603.25113268@apps.rackspace.com>
 <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org>
 <10079933.t8I2mb3pXR@p1>
 <20201111144618.ndckzezqgbb2xble@yuggoth.org>
Message-ID:

We can definitely consider using Jitsi for the next OIF community meeting.
One significant feature that Zoom has is our ability to record the
community meeting so that the rest of the community can watch on their own
time, accommodating for different time zones. For those who do not wish to
participate via Zoom tomorrow, the recording will be posted on YouTube and
all of the PTLs share in their presentations how to connect with them via
email or IRC.

> On Nov 11, 2020, at 8:46 AM, Jeremy Stanley wrote:
>
> On 2020-11-11 13:24:25 +0100 (+0100), Slawek Kaplonski wrote:
> [...]
>> Actually foundation has Jitsii server also - see
>> https://meetpad.opendev.org/ In Neutron team we were using it
>> almost without problems during last PTG.
>
> It's not really "the foundation" though, it's run by the OpenDev
> Collaboratory sysadmins and community.
>
>> But problem which we had with it was that it didn't work for
>> people from China AFAIK so we had to switch to Zoom finally.
> [...]
>
> This is unsubstantiated.
We suspect people worldwide experience > problems with getting WebRTC traffic through corporate firewalls, > but on the whole people who live in mainland China have been > conditioned to assume anything which doesn't work is being blocked > by the government. We're working with some folks in China to attempt > to prove or disprove it, but coordinating between timezones has > slowed progress on that. We hope to have a better understanding of > the local access problems for China, if any, soon. > -- > Jeremy Stanley -------------- next part -------------- An HTML attachment was scrubbed... URL: From zigo at debian.org Wed Nov 11 20:08:47 2020 From: zigo at debian.org (Thomas Goirand) Date: Wed, 11 Nov 2020 21:08:47 +0100 Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service In-Reply-To: References: Message-ID: On 11/11/20 5:35 PM, Balázs Gibizer wrote: > Dear packagers and deployment engine developers, > > Since Icehouse nova-compute service does not need any database > configuration as it uses the message bus to access data in the database > via the conductor service. Also, the nova configuration guide states > that the nova-compute service should not have the > [api_database]connection config set. Having any DB credentials > configured for the nova-compute is a security risk as well since that > service runs close to the hypervisor. Since Rocky[1] nova-compute > service fails if you configure API DB credentials and set upgrade_level > config to 'auto'. > > Now we are proposing a patch[2] that makes nova-compute fail at startup > if the [database]connection or the [api_database]connection is > configured. We know that this breaks at least the rpm packaging, debian > packaging, and puppet-nova. The problem there is that in an all-in-on > deployment scenario the nova.conf file generated by these tools is > shared between all the nova services and therefore nova-compute sees DB > credentials. As a counter-example, devstack generates a separate > nova-cpu.conf and passes that to the nova-compute service even in an > all-in-on setup. > > The nova team would like to merge [2] during Wallaby but we are OK to > delay the patch until Wallaby Milestone 2 so that the packagers and > deployment tools can catch up. Please let us know if you are impacted > and provide a way to track when you are ready with the modification that > allows [2] to be merged. > > There was a long discussion on #openstack-nova today[3] around this > topic. So you can find more detailed reasoning there[3]. > > Cheers, > gibi IMO, that's ok if, and only if, we all agree on how to implement it. Best would be if we (all downstream distro + config management) agree on how to implement this. How about, we all implement a /etc/nova/nova-db.conf, and get all services that need db access to use it (ie: starting them with --config-file=/etc/nova/nova-db.conf)? If I understand well, these services would need access to db: - conductor - scheduler - novncproxy - serialproxy - spicehtml5proxy - api - api-metadata Is this list correct? Or is there some services that also don't need it? 
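For illustration, a minimal sketch of what I have in mind (the credentials
and host below are of course just placeholders):

    # /etc/nova/nova-db.conf
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

and then the services that need DB access would be started with something
like:

    nova-conductor --config-file /etc/nova/nova.conf --config-file /etc/nova/nova-db.conf

while nova-compute would only ever get /etc/nova/nova.conf.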
Cheers,

Thomas Goirand (zigo)

From zigo at debian.org Wed Nov 11 20:58:12 2020
From: zigo at debian.org (Thomas Goirand)
Date: Wed, 11 Nov 2020 21:58:12 +0100
Subject: Victoria Release Community Meeting
In-Reply-To:
References: <1605050603.25113268@apps.rackspace.com>
 <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org>
 <10079933.t8I2mb3pXR@p1>
 <20201111144618.ndckzezqgbb2xble@yuggoth.org>
Message-ID:

On 11/11/20 9:06 PM, Allison Price wrote:
> We can definitely consider using Jitsi for the next OIF community
> meeting. One significant feature that Zoom has is our ability to record
> the community meeting so that the rest of the community can watch on
> their own time, accommodating for different time zones. For those who do
> not wish to participate via Zoom tomorrow, the recording will be posted
> on YouTube and all of the PTLs share in their presentation how to
> connect with them via email or IRC.

This can be done with Jitsi too (but not Jitsi alone). Again, if you
don't know how, I can get you in touch with the Debconf video team.

Cheers,

Thomas Goirand (zigo)

From smooney at redhat.com Wed Nov 11 21:06:43 2020
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 11 Nov 2020 21:06:43 +0000
Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service
In-Reply-To:
References:
Message-ID:

On Wed, 2020-11-11 at 21:08 +0100, Thomas Goirand wrote:
> On 11/11/20 5:35 PM, Balázs Gibizer wrote:
> > Dear packagers and deployment engine developers,
> >
> > Since Icehouse nova-compute service does not need any database
> > configuration as it uses the message bus to access data in the database
> > via the conductor service. Also, the nova configuration guide states
> > that the nova-compute service should not have the
> > [api_database]connection config set. Having any DB credentials
> > configured for the nova-compute is a security risk as well since that
> > service runs close to the hypervisor. Since Rocky[1] nova-compute
> > service fails if you configure API DB credentials and set upgrade_level
> > config to 'auto'.
> >
> > Now we are proposing a patch[2] that makes nova-compute fail at startup
> > if the [database]connection or the [api_database]connection is
> > configured. We know that this breaks at least the rpm packaging, debian
> > packaging, and puppet-nova. The problem there is that in an all-in-on
> > deployment scenario the nova.conf file generated by these tools is
> > shared between all the nova services and therefore nova-compute sees DB
> > credentials. As a counter-example, devstack generates a separate
> > nova-cpu.conf and passes that to the nova-compute service even in an
> > all-in-on setup.
> >
> > The nova team would like to merge [2] during Wallaby but we are OK to
> > delay the patch until Wallaby Milestone 2 so that the packagers and
> > deployment tools can catch up. Please let us know if you are impacted
> > and provide a way to track when you are ready with the modification that
> > allows [2] to be merged.
> >
> > There was a long discussion on #openstack-nova today[3] around this
> > topic. So you can find more detailed reasoning there[3].
> >
> > Cheers,
> > gibi
>
> IMO, that's ok if, and only if, we all agree on how to implement it.

Well, not to be too blunt, but I'm not sure we can reasonably choose not to
implement this. It has been raised in the past by customers that have done
a security audit, as a hardening issue, that DB credentials which are
unused are present on the compute nodes.
All agreeing to do it the same way would be a good thing, but at the end of
the day we should do this this cycle. We also should come up with some
upstream recommendation in the install docs on the best practice for this.
Hopefully we can get agreement, but if a specific config management tool
chooses to go a different way I don't think they should be forced to
change.

For example, kolla already does the right thing and has for several
releases. I don't think they should need to change their approach. They
have a single nova.conf per container, but a different one for each
container:
https://github.com/openstack/kolla-ansible/blob/master/ansible/roles/nova-cell/templates/nova.conf.j2#L136-L151
The nova.conf for the compute container does not contain the DB info,
which is the correct approach. openstack-ansible also does the same thing:
https://github.com/openstack/openstack-ansible-os_nova/blob/master/templates/nova.conf.j2#L183-L199
ooo will do the same thing once https://review.opendev.org/#/c/718552/ is
done, more or less.

So from a config management point of view the pattern is to generate a
config file with only the info that is needed. From a packager point of
view the only way to mimic that is to use multiple files with discrete
chunks, or to not install any config by default and require operators to
copy the samples. The install guides don't explicitly call this out:
https://docs.openstack.org/nova/latest/install/controller-install-ubuntu.html#install-and-configure-components
https://docs.openstack.org/nova/latest/install/compute-install-ubuntu.html#install-and-configure-components
but the compute node one does not tell you to add the DB config either. I
do think we should update the compute node install docs to use a different
file instead of nova.conf, however.

> Best would be if we (all downstream distro + config management) agree on
> how to implement this.
>
> How about, we all implement a /etc/nova/nova-db.conf, and get all
> services that need db access to use it (ie: starting them with
> --config-file=/etc/nova/nova-db.conf)?

We might want two DB files, really: nova-cell-db and nova-api-db. A rough
sketch of that split is below.

>
> If I understand well, these services would need access to db:
> - conductor

The super conductor needs the cell DB and the API DB. From talking to Dan,
the instance affinity/anti-affinity late check would require the cell
conductor to unfortunately need API DB access too.

> - scheduler

The scheduler needs DB access, although I'm not sure if it needs both.

> - novncproxy
> - serialproxy
> - spicehtml5proxy

I'm not sure if these 3 do. They might for some kind of token storage,
although I did not think they really should need DB access, except maybe
to get the instance host. It would be nice if these could be restricted to
just use the API DB, but they might need both. There is a proposal to
support passwords with consoles: https://review.opendev.org/#/c/759828/
I have not checked, but I assume that will require them to have DB access
if they do not already. Although I think the password enforcement might be
via the hypervisor, e.g. libvirt, so they may not need DB access for that.
We should determine that.

> - api
> - api-metadata

The API and metadata API both need DB access, yes. I think we need both
the API DB and the cell DB access in both cases.

>
> Is this list correct? Or is there some services that also don't need it?

The only other nova service is the compute agent and it does not need DB
access.
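To make that concrete, a rough sketch of the two-file split mentioned
above (file names and credentials are purely illustrative):

    # /etc/nova/nova-api-db.conf -- api, metadata api, super conductor
    [api_database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api

    # /etc/nova/nova-cell-db.conf -- cell conductors, scheduler, etc.
    [database]
    connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova

Each service would then be started with the base nova.conf plus only the
DB file(s) it actually needs, and nova-compute with neither.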
im not sure if there is utility in > > Cheers, > > Thomas Goirand (zigo) > From allison at openstack.org Wed Nov 11 21:53:09 2020 From: allison at openstack.org (Allison Price) Date: Wed, 11 Nov 2020 15:53:09 -0600 Subject: Victoria Release Community Meeting In-Reply-To: References: Message-ID: <7A81F469-DE61-43FD-A0D2-A9779F9F023E@openstack.org> Let me chat with the OpenDev folks about their Jitsi instance to see what we can do for the next community meeting. > On Nov 11, 2020, at 2:58 PM, Thomas Goirand wrote: > > On 11/11/20 9:06 PM, Allison Price wrote: >> We can definitely consider using Jitsi for the next OIF community >> meeting. One significant feature that Zoom has is our ability to record >> the community meeting so that the rest of the community can watch on >> their own time, accommodating for different time zones. For those who do >> not wish to participate via Zoom tomorrow, the recording will be posted >> on YouTube and all of the PTLs share in their presentation how to >> connect with them via email or IRC. > > This can be done with Jitsi too (but not Jitsi alone). Again, if you > don't know how, I can get you in touch with the Debconf video team. > > Cheers, > > Thomas Goirand (zigo) > From fungi at yuggoth.org Wed Nov 11 21:53:57 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 11 Nov 2020 21:53:57 +0000 Subject: Victoria Release Community Meeting In-Reply-To: References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> <10079933.t8I2mb3pXR@p1> <20201111144618.ndckzezqgbb2xble@yuggoth.org> Message-ID: <20201111215357.q4s5y6klgup4d6zh@yuggoth.org> On 2020-11-11 21:58:12 +0100 (+0100), Thomas Goirand wrote: > On 11/11/20 9:06 PM, Allison Price wrote: > > We can definitely consider using Jitsi for the next OIF community > > meeting. One significant feature that Zoom has is our ability to record > > the community meeting so that the rest of the community can watch on > > their own time, accommodating for different time zones. For those who do > > not wish to participate via Zoom tomorrow, the recording will be posted > > on YouTube and all of the PTLs share in their presentation how to > > connect with them via email or IRC. > > This can be done with Jitsi too (but not Jitsi alone). Again, if you > don't know how, I can get you in touch with the Debconf video team. Well, client-side recording is always an option. Server-side recording could be added, the OpenDev sysadmins have discussed it, the challenges are mostly logistical (preallocating all the recording slot processes, deciding where to store and serve recordings, et cetera). The other significant feature we'd need to set up for uses like the OIF community meetings would be dial-in. We pay for a POTS trunk from a SIP broker with an assigned number in the USA for our Asterisk-based pbx.openstack.org service and have talked about reassigning that to meetpad.opendev.org but would need folks with available time to work on doing that. Adding brokers with numbers in other parts of the World may also be an option as long as we can get the okay from the folks paying the bill. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Nov 11 22:38:24 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 11 Nov 2020 16:38:24 -0600 Subject: [all] Fixtures & monkey patching issue with Python 3.9 In-Reply-To: <674bfdfb-495a-19e0-a306-b96aece85b14@debian.org> References: <674bfdfb-495a-19e0-a306-b96aece85b14@debian.org> Message-ID: <80a3d4dc-7c58-20bc-c6aa-1b711f610c8e@gmx.com> > Is this affecting OpenStack? Looks like maybe no? https://zuul.opendev.org/t/openstack/builds?job_name=openstack-tox-py39 There are a few py39 failures reported, but of the couple I took a quick look at (nova and ironic) it didn't look like the same signature as the issues referenced. Sean From yumeng_bao at yahoo.com Thu Nov 12 02:59:10 2020 From: yumeng_bao at yahoo.com (yumeng bao) Date: Thu, 12 Nov 2020 10:59:10 +0800 Subject: [cyborg] Propose Wenping Song for Cyborg core References: <4DFC0051-B2DF-40A8-AC1B-5E523681811C.ref@yahoo.com> Message-ID: <4DFC0051-B2DF-40A8-AC1B-5E523681811C@yahoo.com> Hi team, I would like to propose adding Wenping Song to the cyborg-core groups. Wenping did some great work on arq microversion and FPGA new driver support in the Victoria release, and has provided some helpful reviews. Cores - please reply +1/-1 before the end of Wednesday 18th November. Regards, Yumeng From zhangbailin at inspur.com Thu Nov 12 03:17:53 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Thu, 12 Nov 2020 03:17:53 +0000 Subject: =?utf-8?B?562U5aSNOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1bY3lib3JnXSBQ?= =?utf-8?Q?ropose_Wenping_Song_for_Cyborg_core?= In-Reply-To: <4DFC0051-B2DF-40A8-AC1B-5E523681811C@yahoo.com> References: <9a15c7540b4ca66d397ecaf2ce5f4f26@sslemail.net> <4DFC0051-B2DF-40A8-AC1B-5E523681811C@yahoo.com> Message-ID: <1e0fe839da29414d89189f731b6e0e65@inspur.com> +1, a hard working boy, and he will do better and better. brinzhang Inspur Electronic Information Industry Co.,Ltd. -----邮件原件----- 发件人: yumeng bao [mailto:yumeng_bao at yahoo.com] 发送时间: 2020年11月12日 10:59 收件人: openstack maillist 主题: [lists.openstack.org代发][cyborg] Propose Wenping Song for Cyborg core Hi team, I would like to propose adding Wenping Song to the cyborg-core groups. Wenping did some great work on arq microversion and FPGA new driver support in the Victoria release, and has provided some helpful reviews. Cores - please reply +1/-1 before the end of Wednesday 18th November. Regards, Yumeng From zhipengh512 at gmail.com Thu Nov 12 03:31:24 2020 From: zhipengh512 at gmail.com (Zhipeng Huang) Date: Thu, 12 Nov 2020 11:31:24 +0800 Subject: =?UTF-8?Q?Re=3A_=5Blists=2Eopenstack=2Eorg=E4=BB=A3=E5=8F=91=5D=5Bcyborg=5D_Propose_Wenp?= =?UTF-8?Q?ing_Song_for_Cyborg_core?= In-Reply-To: <1e0fe839da29414d89189f731b6e0e65@inspur.com> References: <9a15c7540b4ca66d397ecaf2ce5f4f26@sslemail.net> <4DFC0051-B2DF-40A8-AC1B-5E523681811C@yahoo.com> <1e0fe839da29414d89189f731b6e0e65@inspur.com> Message-ID: +1, Wenping has been working hard during recent releases :) On Thu, Nov 12, 2020 at 11:22 AM Brin Zhang(张百林) wrote: > +1, a hard working boy, and he will do better and better. > > brinzhang > Inspur Electronic Information Industry Co.,Ltd. 
> > > -----邮件原件----- > 发件人: yumeng bao [mailto:yumeng_bao at yahoo.com] > 发送时间: 2020年11月12日 10:59 > 收件人: openstack maillist > 主题: [lists.openstack.org代发][cyborg] Propose Wenping Song for Cyborg core > > Hi team, > > I would like to propose adding Wenping Song to > the cyborg-core groups. Wenping did some great work on arq microversion and > FPGA new driver support in the Victoria release, and has provided some > helpful reviews. > > Cores - please reply +1/-1 before the end of Wednesday 18th November. > > > Regards, > Yumeng > > -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From xin-ran.wang at intel.com Thu Nov 12 05:21:13 2020 From: xin-ran.wang at intel.com (Wang, Xin-ran) Date: Thu, 12 Nov 2020 05:21:13 +0000 Subject: =?utf-8?B?UkU6IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVtjeWJvcmddIFByb3Bv?= =?utf-8?Q?se_Wenping_Song_for_Cyborg_core?= In-Reply-To: References: <9a15c7540b4ca66d397ecaf2ce5f4f26@sslemail.net> <4DFC0051-B2DF-40A8-AC1B-5E523681811C@yahoo.com> <1e0fe839da29414d89189f731b6e0e65@inspur.com> Message-ID: +1, Wenping has been working hard, , thanks for wenping’s contribution. From: Zhipeng Huang Sent: Thursday, November 12, 2020 11:31 AM To: openstack-discuss at lists.openstack.org Cc: Brin Zhang(张百林) ; yumeng_bao at yahoo.com Subject: Re: [lists.openstack.org代发][cyborg] Propose Wenping Song for Cyborg core +1, Wenping has been working hard during recent releases :) On Thu, Nov 12, 2020 at 11:22 AM Brin Zhang(张百林) > wrote: +1, a hard working boy, and he will do better and better. brinzhang Inspur Electronic Information Industry Co.,Ltd. -----邮件原件----- 发件人: yumeng bao [mailto:yumeng_bao at yahoo.com] 发送时间: 2020年11月12日 10:59 收件人: openstack maillist > 主题: [lists.openstack.org代发][cyborg] Propose Wenping Song for Cyborg core Hi team, I would like to propose adding Wenping Song> to the cyborg-core groups. Wenping did some great work on arq microversion and FPGA new driver support in the Victoria release, and has provided some helpful reviews. Cores - please reply +1/-1 before the end of Wednesday 18th November. Regards, Yumeng -- Zhipeng (Howard) Huang Principle Engineer OpenStack, Kubernetes, CNCF, LF Edge, ONNX, Kubeflow, OpenSDS, Open Service Broker API, OCP, Hyperledger, ETSI, SNIA, DMTF, W3C -------------- next part -------------- An HTML attachment was scrubbed... URL: From themasch at gmx.net Thu Nov 12 05:51:02 2020 From: themasch at gmx.net (MaSch) Date: Thu, 12 Nov 2020 06:51:02 +0100 Subject: multiple nfs shares with cinder-backup In-Reply-To: References: Message-ID: <182661d1-e0d3-f48e-a874-5c520b76a2dc@gmx.net> Hy! Is it possible to set a default value for the container? So that the users don't have to specify them when they create a backup? Lets say, the default value directs to share a, for manually generated Backups, and all automated created Backups have the --container param set for share b. regards, MaSch On 2020-11-10 1:55 p.m., Radosław Piliszek wrote: > On Wed, Nov 4, 2020 at 11:45 AM MaSch wrote: >> Hello all! >> >> I'm currently using openstack queens with cinder 12.0.10. >> I would like to a backend I'm using a NFS-share. >> >> Now i would like to spit my backups up to two nfs-shares. >> I have seen that the cinder.volume.driver can handle multiple nfs shares. >> But it seems that cinder.backup.driver can not. 
>> >> Is there a way to use two nfs shares for backups? >> Or is it maybe possible with a later release of Cinder? >> >> regards, >> MaSch >> >> > You can use directories where you mount different NFS shares. > Cinder Backup allows you to specify the directory to use for backup so > this way you can segregate the backups. > Just specify --container in your commands. > > -yoctozepto From marios at redhat.com Thu Nov 12 07:44:19 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 12 Nov 2020 09:44:19 +0200 Subject: [tripleo] stein branch transition to extended maintenance Message-ID: Hello TripleOs The stein branch is about to transition to extended maintenance [1]. This will happen after the proposal at [2] is approved (by us) and merged. The latest release of stein for each of the repos in [2] will be tagged as extended maintenance. We have the option of first making another (final) release, if there is something newer in stein that we need to be included. We will continue to allow for bug fixes to be committed to stein however there will be no more releases made after this point. So, does anyone need a final release made for the stein branch for one of the repos in [3]. If no one speaks up then I will +1 the patch at [2] tomorrow thanks, marios [1] https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases [2] https://review.opendev.org/#/c/762411/ [3] https://releases.openstack.org/teams/tripleo.html#stein -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Nov 12 09:02:48 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 12 Nov 2020 10:02:48 +0100 Subject: multiple nfs shares with cinder-backup In-Reply-To: <182661d1-e0d3-f48e-a874-5c520b76a2dc@gmx.net> References: <182661d1-e0d3-f48e-a874-5c520b76a2dc@gmx.net> Message-ID: <20201112090248.e6a5e54jd67tyfqq@localhost> On 12/11, MaSch wrote: > Hy! > > Is it possible to set a default value for the container? > So that the users don't have to specify them when they create a backup? > Lets say, the default value directs to share a, for manually generated > Backups, > and all automated created Backups have the --container param set for > share b. > > regards, > MaSch Hi, The "backup_container" configuration option allows you to set the default container. Cheers, Gorka. > > On 2020-11-10 1:55 p.m., Radosław Piliszek wrote: > > On Wed, Nov 4, 2020 at 11:45 AM MaSch wrote: > > > Hello all! > > > > > > I'm currently using openstack queens with cinder 12.0.10. > > > I would like to a backend I'm using a NFS-share. > > > > > > Now i would like to spit my backups up to two nfs-shares. > > > I have seen that the cinder.volume.driver can handle multiple nfs shares. > > > But it seems that cinder.backup.driver can not. > > > > > > Is there a way to use two nfs shares for backups? > > > Or is it maybe possible with a later release of Cinder? > > > > > > regards, > > > MaSch > > > > > > > > You can use directories where you mount different NFS shares. > > Cinder Backup allows you to specify the directory to use for backup so > > this way you can segregate the backups. > > Just specify --container in your commands. 
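As an illustrative sketch of what is suggested above (the share and container names are made up, and the exact option names should be checked against the deployed release):

    # cinder.conf on the backup host -- default container used when none is given
    [DEFAULT]
    backup_container = share_a_backups

    # Manually created backups then land in the default container, while an
    # automated job can direct its backups elsewhere per request:
    #   cinder backup-create --container share_b_backups <volume-id>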
> > > > -yoctozepto > From eblock at nde.ag Thu Nov 12 09:16:19 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 12 Nov 2020 09:16:19 +0000 Subject: Cannot deploly an instance at a specific host --(no such host found) [nova] In-Reply-To: References: <20201111132022.Horde.q9K8xVfq2klzzypYBltanTa@webmail.nde.ag> <20201111140328.Horde.e4UaBmIP6Q_GOnn4VutjAea@webmail.nde.ag> Message-ID: <20201112091619.Horde.16cjrNDJAI_5LCsfMauhMaK@webmail.nde.ag> Did you change the transport_url only in the nova.conf of the new compute node? Or did you also change the rabbitmq configuration? The transport_url should match the actual rabbitmq config, of course, and it should be the same on every node. Does that match your cell setup? To compare run: control:~ # nova-manage cell_v2 list_cells --verbose Do you have a vhost configuration? (I'm not familiar with devstack) Here's an example of a "nova" vhost with respective permissions: control:~ # rabbitmqctl list_vhosts Listing vhosts / nova control:~ # rabbitmqctl list_permissions -p nova Listing permissions in vhost "nova" nova .* .* .* If there's a vhost it should be also reflected in the transport_url. Maybe you should share all of the above information here. Zitat von Pavlos Basaras : > Hello, > > maybe there is stg wrong with the installation. > > Let me clarify a few things. > > I have devstack deployed in a VM (pre-installed: keystone, glance, nova, > placement, cinder, neutron, and horizon.) > I can deploy machines successfully at the devstack controller space > --everything seems to work fine. --> is seems to work fine with opensource > mano as well > > I am trying to add another pc as a compute host to be able to deploy vms at > this new compute host following ( > https://docs.openstack.org/nova/queens/install/compute-install-ubuntu.html) > > Also attached the nova.conf file. The only major differences are: > --the transport url for rabbitmq which i made according to the transport > url of the controller, i.e., instead of rabbit://openstack:linux at controller > i have rabbit://stackrabbit:linux at controller > -- i replaced the ports with the service, e.g., instead using the 5000 port > --> "http://controller/identity/v3" instead of "http://controller:5000/v3" > > please excuse (any) newbie questions > > all the best, > Pavlos. > > > On Wed, Nov 11, 2020 at 4:03 PM Eugen Block wrote: > >> There might be some mixup during the setup, I'm not sure how the other >> cell would be created. I'd probably delete the cell with UUID >> 1a0fde85-8906-46fb-b721-01a28c978439 and retry the discover_hosts with >> the right cell UUID: >> >> nova-manage cell_v2 delete_cell --cell_uuid >> 1a0fde85-8906-46fb-b721-01a28c978439 >> nova-manage cell_v2 discover_hosts --cell_uuid >> 1522c22f-64d4-4882-8ae8-ed0f9407407c >> >> Does that work? 
>> >> >> Zitat von Pavlos Basaras : >> >> > Hello, >> > >> > yes i have this discover_hosts_in_cells_interval = 300 >> > >> > >> > interestingly when i issued a combination of map_cell_and_hosts and >> > update_cell >> > >> > the output from: nova-manage cell_v2 list_hosts >> > +-----------+--------------------------------------+-------------+ >> > | Cell Name | Cell UUID | Hostname | >> > +-----------+--------------------------------------+-------------+ >> > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | computenode | >> > | None | 1a0fde85-8906-46fb-b721-01a28c978439 | nrUE | >> > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | >> > +-----------+--------------------------------------+-------------+ >> > >> > when the new compute nodes seem to not have a cell mapped >> > >> > best, >> > P. >> > >> > >> > On Wed, Nov 11, 2020 at 3:26 PM Eugen Block wrote: >> > >> >> Hm, >> >> >> >> indeed rather strange to me. >> >> >> >> Do you see anything in the nova-scheduler.log? If you activated the >> >> >> >> discover_hosts_in_cells_interval = 300 >> >> >> >> it should query for new hosts every 5 minutes. >> >> >> >> >> >> >> >> Zitat von Pavlos Basaras : >> >> >> >> > Hello, >> >> > >> >> > thanks very much for your prompt reply. >> >> > >> >> > >> >> > regarding the first command "nova-manage cell_v2 list_hosts" the >> output >> >> is >> >> > the following (openstack is the host of the controller). I dont see >> any >> >> > other node here, even when i execute the discover_hosts command >> >> > >> >> > +-----------+--------------------------------------+-----------+ >> >> > | Cell Name | Cell UUID | Hostname | >> >> > +-----------+--------------------------------------+-----------+ >> >> > | cell1 | 1522c22f-64d4-4882-8ae8-ed0f9407407c | openstack | >> >> > +-----------+--------------------------------------+-----------+ >> >> > >> >> > >> >> > Also this is the output from my controller when i use the command: >> sudo >> >> -s >> >> > /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova (not >> >> sure >> >> > if this helps) >> >> > Found 2 cell mappings. >> >> > Skipping cell0 since it does not contain hosts. >> >> > Getting computes from cell 'cell1': >> 1522c22f-64d4-4882-8ae8-ed0f9407407c >> >> > Found 0 unmapped computes in cell: >> 1522c22f-64d4-4882-8ae8-ed0f9407407c >> >> > >> >> > any thoughts? >> >> > >> >> > >> >> > best, >> >> > Pavlos. >> >> >> >> >> >> >> >> >> >> >> >> >> >> From noonedeadpunk at ya.ru Thu Nov 12 09:18:04 2020 From: noonedeadpunk at ya.ru (Dmitriy Rabotyagov) Date: Thu, 12 Nov 2020 11:18:04 +0200 Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service In-Reply-To: References: Message-ID: <4211161605171577@mail.yandex.ru> An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Thu Nov 12 09:24:30 2020 From: jahson.babel at cc.in2p3.fr (Babel Jahson) Date: Thu, 12 Nov 2020 10:24:30 +0100 Subject: [Manila] Manila user overwriting existing Ceph users Message-ID: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> Hello everyone, I'm currently testing manila with CephFS and I stumbled upon a behavior where manila is able to overwrite existing Ceph users. In my testing setup glance, nova, cinder and manila share the same Ceph cluster. However they have different users. 
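For context, a hedged sketch of how such per-service cephx users are commonly scoped; the user and pool names below are assumptions, not taken from this deployment:

    # each OpenStack service normally gets a keyring restricted to its own pools
    ceph auth get-or-create client.cinder mon 'profile rbd' osd 'profile rbd pool=volumes, profile rbd pool=images'
    # the caps an existing user currently holds can be inspected at any time with
    ceph auth get client.cinder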
In this situation, when you create a share and allow access via "manila access-allow cephshare1 cephx test", if the user "test" is already used to access some pools on the cluster, let's say cinder-volume or glance-images, it will be overwritten with the permissions for the share, which will break any resources that were using it. I've rechecked the configuration files multiple times to see if I could set some properties to avoid this, but I didn't find any. By quickly looking at the code here: https://opendev.org/openstack/manila/src/branch/master/manila/share/drivers/cephfs/driver.py a check is done, but only for the manila user. I'm on the Rocky version but this part doesn't seem to have changed since. That leads me to some questions: - Does manila have to have its own dedicated Ceph cluster? - Is there any workaround to this? Other than putting some gibberish names for service users? - Is it possible to lock some users in the Ceph cluster to prevent this behavior? If someone has some clues on this, thanks in advance. Jahson.B -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Thu Nov 12 10:25:09 2020 From: mrunge at matthias-runge.de (Matthias Runge) Date: Thu, 12 Nov 2020 11:25:09 +0100 Subject: [telemetry] wallaby cycle planning session Message-ID: Hi there, one of the biggest challenges for the telemetry stack is currently the state of gnocchi, which is... undefined/unfortunate/under-contributed/...? Telemetry started a long time ago as a simple component, ceilometer, which was split into several components: ceilometer, aodh, panko, and gnocchi. Julien wrote a story about this some time ago[1]. There has also been an attempt to fork gnocchi back to OpenStack[2]. To my knowledge, the original contributors are not paid anymore to work on gnocchi, and at the same time, moving on to do something else is totally fine. However, I am not sure if we (in OpenStack Telemetry) should or could maintain a time-series database in addition to the rest of the telemetry stack. I'd like to discuss this during a call. Please select time(s) that suit you best in this poll[3]. If you have questions or hints, don't hesitate to contact me. Thank you, Matthias [1] https://julien.danjou.info/lessons-from-openstack-telemetry-deflation/ [2] https://review.opendev.org/#/c/744592/ [3] https://doodle.com/poll/uqq328x5shr43awy From thierry at openstack.org Thu Nov 12 10:29:11 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Nov 2020 11:29:11 +0100 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> Message-ID: <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> Ghanshyam Mann wrote: > Yes, as part of the retirement process all deliverables under the project needs to be removed > and before removal we do: > 1. Remove all dependencies. > 2. Refactor/remove the gate job dependency also. > 3. Remove the code from the retiring repo.
I think Thomas's point was that some of those retired deliverables are required by non-retired deliverables, like: - python-qinlingclient being required by mistral-extra - python-searchlightclient and python-karborclient being required by openstackclient and python-openstackclient We might need to remove those features/dependencies first, which might take time... -- Thierry Carrez (ttx) From gfidente at redhat.com Thu Nov 12 10:56:44 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 12 Nov 2020 11:56:44 +0100 Subject: [Manila] Manila user overwriting existing Ceph users In-Reply-To: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> References: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> Message-ID: On 11/12/20 10:24 AM, Babel Jahson wrote: > Hello everyone, > > I'm currently testing manila with CephFS and I stumbled upon a behavior > where manila is able to overwrite existing Ceph users. > In my testing setup glance, nova, cinder and manila share the same Ceph > cluster. However they have different users. > In this situation when you create a share and allow acces via "manila > access-allow cephshare1 cephx test" > If the user "test" is already used to access some pools on the cluster, > let's say cinder-volume or glance-images it will be overwritten with the > permissions for the share. > Which will break any resources that was using it. > I've recheck the configuration files multiple times to see if I could > set some properties to avoid this but I didn't find any. > By quickly looking at the code here : > https://opendev.org/openstack/manila/src/branch/master/manila/share/drivers/cephfs/driver.py > A check is done but only for the manila user. I'm on Rocky version but > this part doesn't seems to have changed since. > > That lead me to some questions : > - Does manila must have his own dedicated Ceph cluster ? > - Is there any workaroud to this ? Other than putting some gibberish > names for services users ? > - Is it possible to lock some users in the Ceph cluster to prevent this > behavior ? hi Jahnson, I am adding a few folks who can probably help us better but I also wanted to ask a question to understand better the use case the cephx user which cinder/glance/nova use has specific permissions to operate on their pools and this is configured in their respective config, not something you have access from the actual openstack guests; are you saying that "access-allow" is overwriting the cephx caps which were set for the cephx user which, for example, cinder is configured to use? in that case maybe better would be for the manila workflow to add/remove caps to existing users instead of overwriting the caps? is that be what you expected to happen? -- Giulio Fidente GPG KEY: 08D733BA From jpena at redhat.com Thu Nov 12 11:09:20 2020 From: jpena at redhat.com (Javier Pena) Date: Thu, 12 Nov 2020 06:09:20 -0500 (EST) Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service In-Reply-To: References: Message-ID: <864396842.58157343.1605179360197.JavaMail.zimbra@redhat.com> > On 11/11/20 5:35 PM, Balázs Gibizer wrote: > > Dear packagers and deployment engine developers, > > > > Since Icehouse nova-compute service does not need any database > > configuration as it uses the message bus to access data in the database > > via the conductor service. Also, the nova configuration guide states > > that the nova-compute service should not have the > > [api_database]connection config set. 
Having any DB credentials > > configured for the nova-compute is a security risk as well since that > > service runs close to the hypervisor. Since Rocky[1] nova-compute > > service fails if you configure API DB credentials and set upgrade_level > > config to 'auto'. > > > > Now we are proposing a patch[2] that makes nova-compute fail at startup > > if the [database]connection or the [api_database]connection is > > configured. We know that this breaks at least the rpm packaging, debian > > packaging, and puppet-nova. The problem there is that in an all-in-on > > deployment scenario the nova.conf file generated by these tools is > > shared between all the nova services and therefore nova-compute sees DB > > credentials. As a counter-example, devstack generates a separate > > nova-cpu.conf and passes that to the nova-compute service even in an > > all-in-on setup. > > > > The nova team would like to merge [2] during Wallaby but we are OK to > > delay the patch until Wallaby Milestone 2 so that the packagers and > > deployment tools can catch up. Please let us know if you are impacted > > and provide a way to track when you are ready with the modification that > > allows [2] to be merged. > > > > There was a long discussion on #openstack-nova today[3] around this > > topic. So you can find more detailed reasoning there[3]. > > > > Cheers, > > gibi > > IMO, that's ok if, and only if, we all agree on how to implement it. > Best would be if we (all downstream distro + config management) agree on > how to implement this. > > How about, we all implement a /etc/nova/nova-db.conf, and get all > services that need db access to use it (ie: starting them with > --config-file=/etc/nova/nova-db.conf)? > Hi, This is going to be an issue for those services we run as a WSGI app. Looking at [1], I see the app has a hardcoded list of config files to read (api-paste.ini and nova.conf), so we'd need to modify it at the installer level. Personally, I like the nova-db.conf way, since it looks like it reduces the amount of work required for all-in-one installers to adapt, but that requires some code change. Would the Nova team be happy with adding a nova-db.conf file to that list? Regards, Javier [1] - https://opendev.org/openstack/nova/src/branch/master/nova/api/openstack/wsgi_app.py#L30 > If I understand well, these services would need access to db: > - conductor > - scheduler > - novncproxy > - serialproxy > - spicehtml5proxy > - api > - api-metadata > > Is this list correct? Or is there some services that also don't need it? > > Cheers, > > Thomas Goirand (zigo) > > From ekuvaja at redhat.com Thu Nov 12 12:23:49 2020 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Thu, 12 Nov 2020 12:23:49 +0000 Subject: [swift]: issues with multi region (swift as backend for glance) In-Reply-To: <46812675.2700917.1604706323822@mail.yahoo.com> References: <46812675.2700917.1604706323822.ref@mail.yahoo.com> <46812675.2700917.1604706323822@mail.yahoo.com> Message-ID: On Fri, Nov 6, 2020 at 11:49 PM fsbiz at yahoo.com wrote: > Hi folks, > > We're on queens release. > > We have just setup a 2nd region. So, now we have two regions regionOne > and regionTwo > > We had a storage cluster in regionOne. Everything works good. > We also added another storage cluster in regionTwo and have created swift > endpoints for this. > Using swift API, everything works fine. The container is properly created > in regionOne or regionTwo as > desired. > > > > We are also using swift as the glance backend. 
We are seeing an issue > with this for regionTwo. > When I create an image in regionTwo, it seems like the glance storage > backend is not properly recognizing > the endpoint and wants to store the image in regionOne. > > This looks like a definite bug. > I can work around it by overriding the endpoint using swift_store_endpoint > but if there is a known bug > about this issue I would rather patch it than resort to overriding the URL > endpoint returned from "auth". > > Is this a known bug ? Again, we are on the latest Queens release. > > thanks, > Fred. > > Hi Fred, With these details I'm not exactly sure what the bug would be here. Glance has no concept for availability zones or regions per se. It expects consistent database across the deployment and multi-store not being a thing in Queens yet, there really is no meaningful way to configure multiple instances of the same store type. While there is no enforcement of homogenous configuration across the Glance nodes, the storage side really depends on it for the service to operate correctly. Forcing Glance to use different stores with the same database will likely cause lots of confusion and poor user experience. I'd assume that to achieve what you are looking for you'd either need to upgrade your openstack deployment to recent version to take advantage of the multi-store work or use separate Glance deployments per region. best, Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Thu Nov 12 12:32:39 2020 From: jahson.babel at cc.in2p3.fr (Babel Jahson) Date: Thu, 12 Nov 2020 13:32:39 +0100 Subject: [Manila] Manila user overwriting existing Ceph users In-Reply-To: References: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> Message-ID: Hi Giulio, Thank you for your response. > the cephx user which cinder/glance/nova use has specific permissions to > operate on their pools and this is configured in their respective > config, not something you have access from the actual openstack guests; > are you saying that "access-allow" is overwriting the cephx caps which > were set for the cephx user which, for example, cinder is configured to use? Yes a cinder user can be overwritten in the Ceph config cluster by the command "access-allow" to a share. Basically it goes from something like this : [client.cindertest]     key =     caps mon = "profile rbd"     caps osd = "profile rbd pool=some-pool, profile rbd pool=some-pool To something like that : [client.cindertest]     key =     caps mds = "allow rw path=/volumes/_nogroup/"     caps mon = "allow r"     caps osd = "allow rw pool= namespace=fsvolumens_" Which can be problematic. > in that case maybe better would be for the manila workflow to add/remove > caps to existing users instead of overwriting the caps? is that be what > you expected to happen? Not really, I mean it's a possibility but is it safe to just add those caps to an existing user ? Won't that interfere with something else ? A way to prevent the creation of a user like "cindertest" seems a better solution to me but I maybe wrong. It's behavior manila already has. If a user have been created with manila for a share in a project and you ask for that user in another project in openstack it wouldn't let you used it. Jahson On 12/11/2020 11:56, Giulio Fidente wrote: > On 11/12/20 10:24 AM, Babel Jahson wrote: >> Hello everyone, >> >> I'm currently testing manila with CephFS and I stumbled upon a behavior >> where manila is able to overwrite existing Ceph users. 
>> In my testing setup glance, nova, cinder and manila share the same Ceph >> cluster. However they have different users. >> In this situation when you create a share and allow acces via "manila >> access-allow cephshare1 cephx test" >> If the user "test" is already used to access some pools on the cluster, >> let's say cinder-volume or glance-images it will be overwritten with the >> permissions for the share. >> Which will break any resources that was using it. >> I've recheck the configuration files multiple times to see if I could >> set some properties to avoid this but I didn't find any. >> By quickly looking at the code here : >> https://opendev.org/openstack/manila/src/branch/master/manila/share/drivers/cephfs/driver.py >> A check is done but only for the manila user. I'm on Rocky version but >> this part doesn't seems to have changed since. >> >> That lead me to some questions : >> - Does manila must have his own dedicated Ceph cluster ? >> - Is there any workaroud to this ? Other than putting some gibberish >> names for services users ? >> - Is it possible to lock some users in the Ceph cluster to prevent this >> behavior ? > hi Jahnson, I am adding a few folks who can probably help us better but > I also wanted to ask a question to understand better the use case > > the cephx user which cinder/glance/nova use has specific permissions to > operate on their pools and this is configured in their respective > config, not something you have access from the actual openstack guests; > are you saying that "access-allow" is overwriting the cephx caps which > were set for the cephx user which, for example, cinder is configured to use? > > in that case maybe better would be for the manila workflow to add/remove > caps to existing users instead of overwriting the caps? is that be what > you expected to happen? From amotoki at gmail.com Thu Nov 12 12:42:48 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 12 Nov 2020 21:42:48 +0900 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> Message-ID: On Thu, Nov 12, 2020 at 7:31 PM Thierry Carrez wrote: > > Ghanshyam Mann wrote: > > Yes, as part of the retirement process all deliverables under the project needs to be removed > > and before removal we do: > > 1. Remove all dependencies. > > 2. Refactor/remove the gate job dependency also. > > 3. Remove the code from the retiring repo. > > I think Thomas's point was that some of those retired deliverables are > required by non-retired deliverables, like: > > - python-qinlingclient being required by mistral-extra > > - python-searchlightclient and python-karborclient being required by > openstackclient and python-openstackclient > > We might need to remove those features/dependencies first, which might > take time... Yeah, I think so too. Regarding OSC, python-openstackclient does not depend on python-searchlightclient. "openstackclient" is a wrapper project which allows us to install all OSC plugins along with the main OSC. We can drop retired OSC plugins from the "openstackclient" requirements. 
-- Akihiro Motoki (irc: amotoki) > > -- > Thierry Carrez (ttx) > From tpb at dyncloud.net Thu Nov 12 13:30:39 2020 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 12 Nov 2020 08:30:39 -0500 Subject: [Manila] Manila user overwriting existing Ceph users In-Reply-To: References: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> Message-ID: <20201112133039.ptcjspqgyxkkabn7@barron.net> On 12/11/20 13:32 +0100, Babel Jahson wrote: >Hi Giulio, > >Thank you for your response. > >>the cephx user which cinder/glance/nova use has specific permissions to >>operate on their pools and this is configured in their respective >>config, not something you have access from the actual openstack guests; >>are you saying that "access-allow" is overwriting the cephx caps which >>were set for the cephx user which, for example, cinder is configured to use? >Yes a cinder user can be overwritten in the Ceph config cluster by the >command "access-allow" to a share. >Basically it goes from something like this : >[client.cindertest] >    key = >    caps mon = "profile rbd" >    caps osd = "profile rbd pool=some-pool, profile rbd pool=some-pool > >To something like that : >[client.cindertest] >    key = >    caps mds = "allow rw path=/volumes/_nogroup/" >    caps mon = "allow r" >    caps osd = "allow rw pool= namespace=fsvolumens_" > >Which can be problematic. > >>in that case maybe better would be for the manila workflow to add/remove >>caps to existing users instead of overwriting the caps? is that be what >>you expected to happen? >Not really, I mean it's a possibility but is it safe to just add those >caps to an existing user ? Won't that interfere with something else ? >A way to prevent the creation of a user like "cindertest" seems a >better solution to me but I maybe wrong. >It's behavior manila already has. If a user have been created with >manila for a share in a project and you ask for that user in another >project in openstack it wouldn't let you used it. > >Jahson > >On 12/11/2020 11:56, Giulio Fidente wrote: >>On 11/12/20 10:24 AM, Babel Jahson wrote: >>>Hello everyone, >>> >>>I'm currently testing manila with CephFS and I stumbled upon a behavior >>>where manila is able to overwrite existing Ceph users. >>>In my testing setup glance, nova, cinder and manila share the same Ceph >>>cluster. However they have different users. >>>In this situation when you create a share and allow acces via "manila >>>access-allow cephshare1 cephx test" >>>If the user "test" is already used to access some pools on the cluster, >>>let's say cinder-volume or glance-images it will be overwritten with the >>>permissions for the share. >>>Which will break any resources that was using it. >>>I've recheck the configuration files multiple times to see if I could >>>set some properties to avoid this but I didn't find any. >>>By quickly looking at the code here : >>>https://opendev.org/openstack/manila/src/branch/master/manila/share/drivers/cephfs/driver.py >>>A check is done but only for the manila user. I'm on Rocky version but >>>this part doesn't seems to have changed since. >>> >>>That lead me to some questions : >>>- Does manila must have his own dedicated Ceph cluster ? >>>- Is there any workaroud to this ? Other than putting some gibberish >>>names for services users ? >>>- Is it possible to lock some users in the Ceph cluster to prevent this >>>behavior ? 
>>hi Jahnson, I am adding a few folks who can probably help us better but >>I also wanted to ask a question to understand better the use case >> >>the cephx user which cinder/glance/nova use has specific permissions to >>operate on their pools and this is configured in their respective >>config, not something you have access from the actual openstack guests; >>are you saying that "access-allow" is overwriting the cephx caps which >>were set for the cephx user which, for example, cinder is configured to use? >> >>in that case maybe better would be for the manila workflow to add/remove >>caps to existing users instead of overwriting the caps? is that be what >>you expected to happen? > Babel, Thanks for this report. Would you be so kind as to open a Launchpad bug against Manila for this issue? Manila, Nova, Cinder, etc. all use the same Ceph cluster but use different pools in that cluster. It does seem that we need the CephFS driver (or perhaps something in the Ceph cluster) to mark the service users as such and prevent changes to them by Manila. -- Tom Barron From ekuvaja at redhat.com Thu Nov 12 13:38:11 2020 From: ekuvaja at redhat.com (Erno Kuvaja) Date: Thu, 12 Nov 2020 13:38:11 +0000 Subject: Victoria Release Community Meeting In-Reply-To: <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> Message-ID: On Wed, Nov 11, 2020 at 11:19 AM Thomas Goirand wrote: > On 11/11/20 12:23 AM, helena at openstack.org wrote: > > The meeting will be held via Zoom. > > Could we *PLEASE* stop the Zoom non-sense? > > Zoom is: > - known to have a poor security record > - imposes the install of the desktop non-free app (yes I know, in some > cases, it is supposed to work without it, but so far it didn't work for me) > - controlled by a 3rd party we cannot trust > > It's not as if we had no alternatives. Jitsi works perfectly and was > used successfully for the whole of debconf, with voctomix and stuff, so > viewers can read a normal live video stream... > > If the foundation doesn't know how to do it, I can put people in touch > with the Debian video team. I'm sure they will be helpful. > > Cheers, > > Thomas Goirand (zigo) > > Very much this, please. There are plenty of options out there and yet we _choose_ time after time to use the one that is not open source and has admitted neglecting their security and privacy issues; and effectively forces the usage of their application (I've yet to get into session without it like Thomas pointed out.) I know Zoom managed to put themselves on top of the hype wave when this COVID-19 mess started, but we should be able to do better than jumping on every single hype train out there. - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From jahson.babel at cc.in2p3.fr Thu Nov 12 13:44:41 2020 From: jahson.babel at cc.in2p3.fr (Babel Jahson) Date: Thu, 12 Nov 2020 14:44:41 +0100 Subject: [Manila] Manila user overwriting existing Ceph users In-Reply-To: <20201112133039.ptcjspqgyxkkabn7@barron.net> References: <951ce1de-c915-5faa-b203-cb2b02f21b08@cc.in2p3.fr> <20201112133039.ptcjspqgyxkkabn7@barron.net> Message-ID: <94bfd064-980e-9a6a-09d7-feb56071980d@cc.in2p3.fr> Hi Tom, No problem, I'll create a Launchpadd issue for this. Jahson On 12/11/2020 14:30, Tom Barron wrote: > On 12/11/20 13:32 +0100, Babel Jahson wrote: >> Hi Giulio, >> >> Thank you for your response. 
>> >>> the cephx user which cinder/glance/nova use has specific permissions to >>> operate on their pools and this is configured in their respective >>> config, not something you have access from the actual openstack guests; >>> are you saying that "access-allow" is overwriting the cephx caps which >>> were set for the cephx user which, for example, cinder is configured >>> to use? >> Yes a cinder user can be overwritten in the Ceph config cluster by >> the command "access-allow" to a share. >> Basically it goes from something like this : >> [client.cindertest] >>     key = >>     caps mon = "profile rbd" >>     caps osd = "profile rbd pool=some-pool, profile rbd pool=some-pool >> >> To something like that : >> [client.cindertest] >>     key = >>     caps mds = "allow rw path=/volumes/_nogroup/" >>     caps mon = "allow r" >>     caps osd = "allow rw pool= >> namespace=fsvolumens_" >> >> Which can be problematic. >> >>> in that case maybe better would be for the manila workflow to >>> add/remove >>> caps to existing users instead of overwriting the caps? is that be what >>> you expected to happen? >> Not really, I mean it's a possibility but is it safe to just add >> those caps to an existing user ? Won't that interfere with something >> else ? >> A way to prevent the creation of a user like "cindertest" seems a >> better solution to me but I maybe wrong. >> It's behavior manila already has. If a user have been created with >> manila for a share in a project and you ask for that user in another >> project in openstack it wouldn't let you used it. >> >> Jahson >> >> On 12/11/2020 11:56, Giulio Fidente wrote: >>> On 11/12/20 10:24 AM, Babel Jahson wrote: >>>> Hello everyone, >>>> >>>> I'm currently testing manila with CephFS and I stumbled upon a >>>> behavior >>>> where manila is able to overwrite existing Ceph users. >>>> In my testing setup glance, nova, cinder and manila share the same >>>> Ceph >>>> cluster. However they have different users. >>>> In this situation when you create a share and allow acces via "manila >>>> access-allow cephshare1 cephx test" >>>> If the user "test" is already used to access some pools on the >>>> cluster, >>>> let's say cinder-volume or glance-images it will be overwritten >>>> with the >>>> permissions for the share. >>>> Which will break any resources that was using it. >>>> I've recheck the configuration files multiple times to see if I could >>>> set some properties to avoid this but I didn't find any. >>>> By quickly looking at the code here : >>>> https://opendev.org/openstack/manila/src/branch/master/manila/share/drivers/cephfs/driver.py >>>> >>>> A check is done but only for the manila user. I'm on Rocky version but >>>> this part doesn't seems to have changed since. >>>> >>>> That lead me to some questions : >>>> - Does manila must have his own dedicated Ceph cluster ? >>>> - Is there any workaroud to this ? Other than putting some gibberish >>>> names for services users ? >>>> - Is it possible to lock some users in the Ceph cluster to prevent >>>> this >>>> behavior ? 
>>> hi Jahnson, I am adding a few folks who can probably help us better but >>> I also wanted to ask a question to understand better the use case >>> >>> the cephx user which cinder/glance/nova use has specific permissions to >>> operate on their pools and this is configured in their respective >>> config, not something you have access from the actual openstack guests; >>> are you saying that "access-allow" is overwriting the cephx caps which >>> were set for the cephx user which, for example, cinder is configured >>> to use? >>> >>> in that case maybe better would be for the manila workflow to >>> add/remove >>> caps to existing users instead of overwriting the caps? is that be what >>> you expected to happen? >> > > Babel, > > Thanks for this report.  Would you be so kind as to open a Launchpad > bug against Manila for this issue? > > Manila, Nova, Cinder, etc. all use the same Ceph cluster but use > different pools in that cluster.  It does seem that we need the CephFS > driver (or perhaps something in the Ceph cluster) to mark the service > users as such and prevent changes to them by Manila. > > -- Tom Barron > From laszlo.budai at gmail.com Thu Nov 12 13:57:04 2020 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Thu, 12 Nov 2020 15:57:04 +0200 Subject: Queens steal time [nova] Message-ID: <91f41182-bf21-659e-5204-ccd83f535694@gmail.com> Hello all, we are comparing the behavior of our queens  openstack with kilo. In queens we are observing an increase in the steal time reported in the guest along with the increase of the load averages. All this is happening while the host is not overloaded, and reports 80+ idle time Initially we have suspected that the overcommit might be the reason of the steal, so we have migrated vms, and now there are 42 vCPUs used out of the 48 pCPUs, but in the guest we still observe the steal time. with similar configuration in openstack kilo we see smaller load, and almost no steal time at all. what could be the reason of this steal time when there is no CPU overcommit? Thank you for any ideas. Kind regards, Laszlo -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Nov 12 14:26:24 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Nov 2020 08:26:24 -0600 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> Message-ID: <175bcd9aff4.113c1c86d111248.6965557136678718673@ghanshyammann.com> ---- On Thu, 12 Nov 2020 04:29:11 -0600 Thierry Carrez wrote ---- > Ghanshyam Mann wrote: > > Yes, as part of the retirement process all deliverables under the project needs to be removed > > and before removal we do: > > 1. Remove all dependencies. > > 2. Refactor/remove the gate job dependency also. > > 3. Remove the code from the retiring repo. > > I think Thomas's point was that some of those retired deliverables are > required by non-retired deliverables, like: > > - python-qinlingclient being required by mistral-extra > > - python-searchlightclient and python-karborclient being required by > openstackclient and python-openstackclient > > We might need to remove those features/dependencies first, which might > take time... 
Yes, that is part of the retirement process, before we remove the project code all dependencies will be taken care same way we did in past (congress project etc). Few dependencies like from OSC it is easy but few like integrated features can be complex and will take time. -gmann > > -- > Thierry Carrez (ttx) > > From rosmaita.fossdev at gmail.com Thu Nov 12 14:57:20 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 12 Nov 2020 09:57:20 -0500 Subject: [cinder] python-cinderclient: API v2 support removed in Wallaby Message-ID: <73f3c4b6-4040-d2a0-ff78-228645b586af@gmail.com> As announced previously on this mailing list [0], at yesterday's Cinder meeting the Cinder team discussed the timeline for removing Block Storage API v2 support from the python-cinderclient. The Block Storage API v2, which has been deprecated since Pike, will not be present in the OpenStack Wallaby release. The team decided not to delay removing Block Storage API v2 support from the python-cinderclient. Thus, v2 support will *not* be available in the Wallaby python-cinderclient. Given the long deprecation period, we trust that this will not adversely impact too many users. We point out that microversion 3.0 of the Block Storage API v3 is functionally identical to v2; thus scripts that rely on v2 responses should continue to work with minimal changes. We also point out that v3 has been available since Mitaka and has reached microversion 3.62 with the Victoria release; thus it may be worth re-examining such scripts to see what functionality you are missing out on. This email serves both as an announcement and a final call for comments about this proposal. Please reply to this email thread with any comments before next week's Cinder meeting (Wednesday 18 November 2020). Thank you, brian [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-November/018531.html From ankelezhang at gmail.com Thu Nov 12 07:13:01 2020 From: ankelezhang at gmail.com (Ankele zhang) Date: Thu, 12 Nov 2020 15:13:01 +0800 Subject: Ironic use shellinabox doesn't work Message-ID: Hello~ I have a OpenStack platform( Rocky ) on CentOS7.6, and I installed Ironic components on it. I can manage bare metal node with Ironic and I plan to use shellinabox to manage the bare metal node remotely. I had config conductor ironic.conf: [DEFAULT] ... enabled_console_interfaces = ipmitool-shellinabox,no-console I had config shellinabox: USER=shellinabox GROUP=shellinabox CERTDIR=/var/lib/shellinabox PORT=4200 OPTS="--disable-ssl-menu -s /:LOGIN" My 'openstack baremetal node console show BM01': console_enabled: True and console_info.url: http://192.168.3.84:8023 [image: image.png] Now, I can access https://192.168.3.84:4200 is OK, and I can log in and manage it. But, when I access http://192.168.3.84:8023: [image: image.png] can not type anything into it. Look forward to hearing from you! Ankele -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9088 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 7799 bytes Desc: not available URL: From laurentfdumont at gmail.com Thu Nov 12 15:01:38 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 12 Nov 2020 10:01:38 -0500 Subject: Queens steal time [nova] In-Reply-To: <91f41182-bf21-659e-5204-ccd83f535694@gmail.com> References: <91f41182-bf21-659e-5204-ccd83f535694@gmail.com> Message-ID: Hi, Technically, I think that you always run the chance of "steal" time if you don't pin CPUs. I'm not sure if Openstack is "smart" enough to allocate CPUs that are not mapped to anyone in sequence (and start to overcommit once it's necessary. You might have 42 out of 48 free CPUs but I don't think that means that Openstack will prevent two VM from getting the same CPU scheduled (without CPU pinning). On Thu, Nov 12, 2020 at 9:09 AM Budai Laszlo wrote: > Hello all, > > we are comparing the behavior of our queens openstack with kilo. In > queens we are observing an increase in the steal time reported in the guest > along with the increase of the load averages. All this is happening while > the host is not overloaded, and reports 80+ idle time > > Initially we have suspected that the overcommit might be the reason of the > steal, so we have migrated vms, and now there are 42 vCPUs used out of the > 48 pCPUs, but in the guest we still observe the steal time. > > with similar configuration in openstack kilo we see smaller load, and > almost no steal time at all. > > what could be the reason of this steal time when there is no CPU > overcommit? > > Thank you for any ideas. > > Kind regards, > Laszlo > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Nov 12 15:23:41 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 12 Nov 2020 15:23:41 +0000 Subject: [tc][all][qinling] Retiring the Qinling project In-Reply-To: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> Message-ID: <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> On Tue, 2020-11-10 at 13:16 -0600, Ghanshyam Mann wrote: Hello Everyone, As you know, Qinling is a leaderless project for the Wallaby cycle, which means there is no PTL candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for DPL model is one of the criteria which triggers TC to start checking the health, maintainers of the project for dropping the project from OpenStack Governance[1]. TC discussed the leaderless project in PTG[2] and checked if the project has maintainers and what activities are done in the Victoria development cycle. It seems no functional changes in Qinling repos except few gate fixes or community goal commits[3]. Based on all these checks and no maintainer for Qinling, TC decided to drop this project from OpenStack governance in the Wallaby cycle.  Ref: Mandatory Repository Retirement resolution [4] and the detailed process is in the project guide docs [5]. If your organization product/customer use/rely on this project then this is the right time to step forward to maintain it otherwise from the Wallaby cycle, Qinling will move out of OpenStack governance by keeping their repo under OpenStack namespace with an empty master branch with 'Not Maintained' message in README. If someone from old or new maintainers shows interest to continue its development then it can be re-added to OpenStack governance. 
With that thanks to Qinling contributors and PTLs (especially lxkong ) for maintaining this project. No comments on the actual retirement, but don't forget that someone from the Foundation will need to update the OpenStack Map available at https://www.openstack.org/openstack-map and included on https://www.openstack.org/software/. Stephen [1] https://governance.openstack.org/tc/reference/dropping-projects.html [2] https://etherpad.opendev.org/p/tc-wallaby-ptg [3] https://www.stackalytics.com/?release=victoria&module=qinling-group&metric=commits [4] https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html [5] https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository -gmann From stephenfin at redhat.com Thu Nov 12 15:26:50 2020 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 12 Nov 2020 15:26:50 +0000 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> Message-ID: <34947f0729c38aaa4e1d3b1bc20e9153c6bececb.camel@redhat.com> On Thu, 2020-11-12 at 21:42 +0900, Akihiro Motoki wrote: On Thu, Nov 12, 2020 at 7:31 PM Thierry Carrez wrote: > > Ghanshyam Mann wrote: > > Yes, as part of the retirement process all deliverables under the > > project needs to be removed > > and before removal we do: > > 1. Remove all dependencies. > > 2. Refactor/remove the gate job dependency also. > > 3. Remove the code from the retiring repo. > > I think Thomas's point was that some of those retired deliverables > are > required by non-retired deliverables, like: > > - python-qinlingclient being required by mistral-extra > > - python-searchlightclient and python-karborclient being required by > openstackclient and python-openstackclient > > We might need to remove those features/dependencies first, which > might > take time... Yeah, I think so too. Regarding OSC, python-openstackclient does not depend on python-searchlightclient. "openstackclient" is a wrapper project which allows us to install all OSC plugins along with the main OSC. We can drop retired OSC plugins from the "openstackclient" requirements. We only depend on them for documentation purposes. We'll just remove those docs. Stephen From juliaashleykreger at gmail.com Thu Nov 12 15:38:34 2020 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 12 Nov 2020 07:38:34 -0800 Subject: [tc][all] openstack.org website Was: [qinling] Retiring the Qinling project In-Reply-To: <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> Message-ID: Out of curiosity, and surely someone from the foundation will need to answer this question, but is there any plan to begin to migrate the openstack.org website to more community control or edit community capability since it has already been updated to be much more about the project and not the foundation? For example, for ironicbaremetal.org, we're able to go update content by updating one of the template files the site is built with, in order to fix links, update the latest version, etc. The openstack.org site is far more complex though so maybe it is not really feasible in the short term. Anyway, just a thought. 
Julia On Thu, Nov 12, 2020 at 7:26 AM Stephen Finucane wrote: > [trim] > > No comments on the actual retirement, but don't forget that someone > from the Foundation will need to update the OpenStack Map available at > https://www.openstack.org/openstack-map and included on > https://www.openstack.org/software/. > > Stephen > [trim] From allison at openstack.org Thu Nov 12 15:50:04 2020 From: allison at openstack.org (Allison Price) Date: Thu, 12 Nov 2020 09:50:04 -0600 Subject: Victoria Release Community Meeting In-Reply-To: References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> Message-ID: <74913C4F-FD55-4B6A-A87E-694A3E48B103@openstack.org> Thank you all for the feedback as we continue to evolve the community meetings. As we plan the next community meeting, we will actively investigate using a Jitsi instance and communicate back on the ML about how that progress is going. Today’s meeting will be on Zoom (in about 10 minutes!), but like I mentioned earlier, the recordings will be available via the Project Navigator and YouTube for folks who would not like to login to the platform. I’ll start a new thread when we make progress towards the next community meeting. If there is any other feedback, please let me know. Allison Allison Price Open Infrastructure Foundation allison at openstack.org > On Nov 12, 2020, at 7:38 AM, Erno Kuvaja wrote: > > On Wed, Nov 11, 2020 at 11:19 AM Thomas Goirand > wrote: > On 11/11/20 12:23 AM, helena at openstack.org wrote: > > The meeting will be held via Zoom. > > Could we *PLEASE* stop the Zoom non-sense? > > Zoom is: > - known to have a poor security record > - imposes the install of the desktop non-free app (yes I know, in some > cases, it is supposed to work without it, but so far it didn't work for me) > - controlled by a 3rd party we cannot trust > > It's not as if we had no alternatives. Jitsi works perfectly and was > used successfully for the whole of debconf, with voctomix and stuff, so > viewers can read a normal live video stream... > > If the foundation doesn't know how to do it, I can put people in touch > with the Debian video team. I'm sure they will be helpful. > > Cheers, > > Thomas Goirand (zigo) > > > Very much this, please. > > There are plenty of options out there and yet we _choose_ time after time to use the one that is not open source and has admitted neglecting their security and privacy issues; and effectively forces the usage of their application (I've yet to get into session without it like Thomas pointed out.) > > I know Zoom managed to put themselves on top of the hype wave when this COVID-19 mess started, but we should be able to do better than jumping on every single hype train out there. > > - Erno "jokke" Kuvaja -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Nov 12 16:03:24 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 12 Nov 2020 16:03:24 +0000 Subject: Victoria Release Community Meeting In-Reply-To: References: <1605050603.25113268@apps.rackspace.com> <4c09adb4-0f97-f256-9827-01781643a9c3@debian.org> Message-ID: <20201112160324.xuo7zpumd47q6yx6@yuggoth.org> On 2020-11-12 13:38:11 +0000 (+0000), Erno Kuvaja wrote: [...] > effectively forces the usage of their application (I've yet to get > into session without it like Thomas pointed out.) [...] 
I'm no fan of Zoom either, for many of the aforementioned reasons, but here's how I've managed to get the Web-based client to work and not install their proprietary client/extension: 0. First, having a vanilla browser process seems to help, as I've seen privacy/security-oriented extensions and configuration options block the A/V stream connections. My normal locked-down browser is Firefox, so I have Chromium installed with no extensions and its default configuration which I run exclusively for accessing Zoom and similar videoconferencing sites. 1. Load the Zoom meeting URL and you will be prompted to "Open xdg-open" which you *don't* want (this is what will try to install the binary client). Instead click the "Cancel" button on that pop-up modal. 2. Now click on the "Launch Meeting" button on the page and the same xdg-open modal popup will appear again. "Cancel" it a second time. 3. Next you'll see that the page has suddenly added a small-print line which says "Having issues with Zoom Client? Join from Your Browser" so go ahead and click on the "Join from Your Browser" link. This is the "web client" Zoom would rather you didn't use (since I'm paranoid I assume it's because they prefer to have as much access to my system and my valuable data as possible). 4. Enter your name or nickname on the new page which loads. You'll probably also be presented with a "I'm not a robot" reCaptcha which you'll need to check and solve the subsequently presented puzzle to donate your mechanical turk time to train Google's AI algorithms to be able to recognize crosswalks, bicycles, chimneys, boats, traffic signals, palm trees, tractors, and other VERY IMPORTANT objects. 5. If you're good enough at solving puzzles, now you should be able to click the "Join" button on the page. 6. Enter the meeting passcode if prompted for one (likely buried somewhere in the invite). Simple, right? :/ -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kaifeng.w at gmail.com Thu Nov 12 16:03:53 2020 From: kaifeng.w at gmail.com (Kaifeng Wang) Date: Fri, 13 Nov 2020 00:03:53 +0800 Subject: Ironic use shellinabox doesn't work In-Reply-To: References: Message-ID: Hi, "SOL Session operational" is a BMC prompt, actually you have successfully connected to it, before you can interact with the terminal, the Serial Redirection (or something called that) needs to be enabled in the BIOS, also make sure you have matching baud rate set to the bootloader for the tty. P.S. SOL does not work for Windows systems. On Thu, Nov 12, 2020 at 11:05 PM Ankele zhang wrote: > Hello~ > > I have a OpenStack platform( Rocky ) on CentOS7.6, and I installed > Ironic components on it. I can manage bare metal node with Ironic and I > plan to use shellinabox to manage the bare metal node remotely. > I had config conductor ironic.conf: > [DEFAULT] > ... > enabled_console_interfaces = ipmitool-shellinabox,no-console > I had config shellinabox: > USER=shellinabox > GROUP=shellinabox > CERTDIR=/var/lib/shellinabox > PORT=4200 > OPTS="--disable-ssl-menu -s /:LOGIN" > My 'openstack baremetal node console show BM01': > console_enabled: True and console_info.url: http://192.168.3.84:8023 > [image: image.png] > Now, I can access https://192.168.3.84:4200 is OK, and I can log in > and manage it. > But, when I access http://192.168.3.84:8023: > [image: image.png] > can not type anything into it. > > Look forward to hearing from you! 
> > > Ankele > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 9088 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 7799 bytes Desc: not available URL: From kennelson11 at gmail.com Thu Nov 12 16:04:54 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 12 Nov 2020 08:04:54 -0800 Subject: Victoria Release Community Meeting In-Reply-To: <1605050603.25113268@apps.rackspace.com> References: <1605050603.25113268@apps.rackspace.com> Message-ID: Starting now! See you all there :) -Kendall Nelson (diablo_rojo) On Tue, Nov 10, 2020 at 3:24 PM helena at openstack.org wrote: > Hello, > > > > The community meeting for the Victoria release will be this Thursday, > November 12th at 16:00 UTC. The meeting will be held via Zoom. We will > show pre-recorded videos from our PTLs followed by live Q&A sessions. We > will have updates from Masakari, Telemetry, Neutron, and Cinder. > > > > Zoom Info: > https://zoom.us/j/2146866821?pwd=aDlpOXd5MXB3cExZWHlROEJURzh0QT09 > Meeting ID: 214 686 6821 > Passcode: Victoria > > Find your local number: https://zoom.us/u/8BRrV > > > > *Reminder to PTLs:* > > > > We would love for you to participate and give an update on your project. > > > > I have attached a template for the slides that you may use if you wish. The > video should be around 10 minutes. Please send in your video and slide for > the community meeting ASAP. I have only received content for the listed > above projects. If you are unable to make the meeting at the designated > time we can show your video and forward any questions for you. > > > > Let me know if you have any other questions! > > > > Thank you for your participation, > > Helena > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Nov 12 16:26:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Nov 2020 10:26:58 -0600 Subject: [tc][all][qinling] Retiring the Qinling project In-Reply-To: <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> Message-ID: <175bd48122d.ee0d0ddb120064.3093467998927637704@ghanshyammann.com> ---- On Thu, 12 Nov 2020 09:23:41 -0600 Stephen Finucane wrote ---- > On Tue, 2020-11-10 at 13:16 -0600, Ghanshyam Mann wrote: > Hello Everyone, > > As you know, Qinling is a leaderless project for the Wallaby cycle, > which means there is no PTL > candidate to lead it in the Wallaby cycle. 'No PTL' and no liaisons for > DPL model is one of the criteria > which triggers TC to start checking the health, maintainers of the > project for dropping the project > from OpenStack Governance[1]. > > TC discussed the leaderless project in PTG[2] and checked if the > project has maintainers and what > activities are done in the Victoria development cycle. It seems no > functional changes in Qinling repos > except few gate fixes or community goal commits[3]. > > Based on all these checks and no maintainer for Qinling, TC decided to > drop this project from OpenStack > governance in the Wallaby cycle. Ref: Mandatory Repository Retirement > resolution [4] and the detailed process > is in the project guide docs [5]. 
> > If your organization product/customer use/rely on this project then > this is the right time to step forward to > maintain it otherwise from the Wallaby cycle, Qinling will move out of > OpenStack governance by keeping > their repo under OpenStack namespace with an empty master branch with > 'Not Maintained' message in README. > If someone from old or new maintainers shows interest to continue its > development then it can be re-added > to OpenStack governance. > > With that thanks to Qinling contributors and PTLs (especially lxkong ) > for maintaining this project. > > No comments on the actual retirement, but don't forget that someone > from the Foundation will need to update the OpenStack Map available at > https://www.openstack.org/openstack-map and included on > https://www.openstack.org/software/. Yes, that is part of removing the dependencies/usage of the retiring projects step. -gmann > > Stephen > > [1] > https://governance.openstack.org/tc/reference/dropping-projects.html > [2] https://etherpad.opendev.org/p/tc-wallaby-ptg > [3] > https://www.stackalytics.com/?release=victoria&module=qinling-group&metric=commits > [4] > https://governance.openstack.org/tc/resolutions/20190711-mandatory-repository-retirement.html > [5] > https://docs.openstack.org/project-team-guide/repository.html#retiring-a-repository > > -gmann > > > > > From gmann at ghanshyammann.com Thu Nov 12 16:30:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 12 Nov 2020 10:30:21 -0600 Subject: [tc][all] openstack.org website Was: [qinling] Retiring the Qinling project In-Reply-To: References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> Message-ID: <175bd4b2c0d.126d14512120266.232243623967245442@ghanshyammann.com> ---- On Thu, 12 Nov 2020 09:38:34 -0600 Julia Kreger wrote ---- > Out of curiosity, and surely someone from the foundation will need to > answer this question, but is there any plan to begin to migrate the > openstack.org website to more community control or edit community > capability since it has already been updated to be much more about the > project and not the foundation? openstack-map repo is currently in OSF namespace but anyone can submit the changes and approval on changes is from Foundation. Example: Tricirlce retirement - https://review.opendev.org/#/c/735675/ -gmann > > For example, for ironicbaremetal.org, we're able to go update content > by updating one of the template files the site is built with, in order > to fix links, update the latest version, etc. The openstack.org site > is far more complex though so maybe it is not really feasible in the > short term. > > Anyway, just a thought. > > Julia > > On Thu, Nov 12, 2020 at 7:26 AM Stephen Finucane wrote: > > > [trim] > > > > No comments on the actual retirement, but don't forget that someone > > from the Foundation will need to update the OpenStack Map available at > > https://www.openstack.org/openstack-map and included on > > https://www.openstack.org/software/. > > > > Stephen > > > [trim] > > From owalsh at redhat.com Thu Nov 12 16:41:10 2020 From: owalsh at redhat.com (Oliver Walsh) Date: Thu, 12 Nov 2020 16:41:10 +0000 Subject: [nova][tripleo][rpm-packaging][kolla][puppet][debian][osa] Nova enforces that no DB credentials are allowed for the nova-compute service In-Reply-To: References: Message-ID: IIUC from Sean's reply most of the heavyweight configuration management frameworks already address the security concern. 
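For anyone rolling their own deployment, the end state gibi describes below is essentially a dedicated compute-side config (devstack's nova-cpu.conf) that carries the transport and compute options but no database sections at all. A minimal sketch, with placeholder hostnames and credentials (nothing here is taken from a real deployment):

    # nova-cpu.conf on the compute host -- note: no [database] or [api_database] section
    [DEFAULT]
    transport_url = rabbit://nova:SECRET@controller:5672/

    # nova.conf on the controller keeps the DB credentials
    [database]
    connection = mysql+pymysql://nova:SECRET@controller/nova
    [api_database]
    connection = mysql+pymysql://nova:SECRET@controller/nova_api
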
For tripleo the first issue to address is in puppet-nova where the dbs are currently configured for every nova service. The fix seems trivial - https://review.opendev.org/755689. I also think that should be completely safe to backport this to all stable branches. The tripleo changes in https://review.opendev.org/#/c/718552/ are not strictly necessary to remove the db creds from nova.conf. However I need to go further, also removing the hieradata that contains the db creds since completely removing the db creds from the compute hosts is the ultimate goal here. So when the configuration management frameworks are all good then what are we actually concerned about security-wise? Is it just operators that roll their own deployments? Cheers, Ollie On Wed, 11 Nov 2020 at 16:37, Balázs Gibizer wrote: > Dear packagers and deployment engine developers, > > Since Icehouse nova-compute service does not need any database > configuration as it uses the message bus to access data in the database > via the conductor service. Also, the nova configuration guide states > that the nova-compute service should not have the > [api_database]connection config set. Having any DB credentials > configured for the nova-compute is a security risk as well since that > service runs close to the hypervisor. Since Rocky[1] nova-compute > service fails if you configure API DB credentials and set upgrade_level > config to 'auto'. > > Now we are proposing a patch[2] that makes nova-compute fail at startup > if the [database]connection or the [api_database]connection is > configured. We know that this breaks at least the rpm packaging, debian > packaging, and puppet-nova. The problem there is that in an all-in-on > deployment scenario the nova.conf file generated by these tools is > shared between all the nova services and therefore nova-compute sees DB > credentials. As a counter-example, devstack generates a separate > nova-cpu.conf and passes that to the nova-compute service even in an > all-in-on setup. > > The nova team would like to merge [2] during Wallaby but we are OK to > delay the patch until Wallaby Milestone 2 so that the packagers and > deployment tools can catch up. Please let us know if you are impacted > and provide a way to track when you are ready with the modification > that allows [2] to be merged. > > There was a long discussion on #openstack-nova today[3] around this > topic. So you can find more detailed reasoning there[3]. > > Cheers, > gibi > > [1] > > https://github.com/openstack/nova/blob/dc93e3b510f53d5b2198c8edd22528f0c899617e/nova/compute/rpcapi.py#L441-L457 > [2] https://review.opendev.org/#/c/762176 > [3] > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T10:51:23 > -- > > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-11-11.log.html#t2020-11-11T14:40:51 > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zigo at debian.org Thu Nov 12 16:43:57 2020 From: zigo at debian.org (Thomas Goirand) Date: Thu, 12 Nov 2020 17:43:57 +0100 Subject: [tc][all][searchlight ] Retiring the Searchlight project In-Reply-To: <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> References: <175b3960913.120cf390316504.5945799248975474230@ghanshyammann.com> <297cf22b-2172-cb51-f282-4c0e674a9a07@debian.org> <175b44e80fd.f4c0ee8420790.4317890968302424842@ghanshyammann.com> <7e1e40f2-afeb-ebcb-6faa-b3b7534a8039@openstack.org> Message-ID: <8554915e-0ee1-7d69-1192-1a043b71f742@debian.org> On 11/12/20 11:29 AM, Thierry Carrez wrote: > Ghanshyam Mann wrote: >> Yes, as part of the retirement process all deliverables under the >> project needs to be removed >> and before removal we do: >> 1. Remove all dependencies. >> 2. Refactor/remove the gate job dependency also. >> 3. Remove the code from the retiring repo. > > I think Thomas's point was that some of those retired deliverables are > required by non-retired deliverables, like: > > - python-qinlingclient being required by mistral-extra > > - python-searchlightclient and python-karborclient being required by > openstackclient and python-openstackclient > > We might need to remove those features/dependencies first, which might > take time... Exactly, thanks for correcting my (very) poor wording. Cheers, Thomas Goirand (zigo) From thierry at openstack.org Thu Nov 12 16:48:25 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 12 Nov 2020 17:48:25 +0100 Subject: [tc][all] openstack.org website Was: [qinling] Retiring the Qinling project In-Reply-To: References: <175b3963a49.115662abe16508.6386586116324619581@ghanshyammann.com> <3dd69b5e9a3ac32c4f6f7bae633fa509a6230325.camel@redhat.com> Message-ID: <698503c9-1455-771b-6f9f-bdcd036b368d@openstack.org> Julia Kreger wrote: > Out of curiosity, and surely someone from the foundation will need to > answer this question, but is there any plan to begin to migrate the > openstack.org website to more community control or edit community > capability since it has already been updated to be much more about the > project and not the foundation? > > For example, for ironicbaremetal.org, we're able to go update content > by updating one of the template files the site is built with, in order > to fix links, update the latest version, etc. The openstack.org site > is far more complex though so maybe it is not really feasible in the > short term. That's definitely the general direction we want to take with openstack.org, but it will take a lot of time to separate out the various backends involved. Regarding the map, as Ghanshyam says, the requirements are driven from the osf/openstack-map repository so please submit any change there. I'll take the opportunity to remind people that the YAML data in that repository also directly drives the content for the openstack.org/software pages. So if you want to modify how each project appears there, that's already doable. -- Thierry From laszlo.budai at gmail.com Thu Nov 12 17:35:56 2020 From: laszlo.budai at gmail.com (Budai Laszlo) Date: Thu, 12 Nov 2020 19:35:56 +0200 Subject: Queens steal time [nova] In-Reply-To: References: <91f41182-bf21-659e-5204-ccd83f535694@gmail.com> Message-ID: <7ab6b03f-edf1-5fb2-0887-a2c3cc47f1b6@gmail.com> Hi Laurent, Thank you for your answer. I agree with you that without the pinning the steal time can appear anytime. What is strange to me that in openstack Kilo the steal is significantly smaller even when there is some overcommit. 
So I am wondering where to look for the difference?

Kind regards,
Laszlo

On 11/12/20 5:01 PM, Laurent Dumont wrote: > Hi, > > Technically, I think that you always run the chance of "steal" time if you don't pin CPUs. I'm not sure if Openstack is "smart" enough to allocate CPUs that are not mapped to anyone in sequence (and start to overcommit once it's necessary. You might have 42 out of 48 free CPUs but I don't think that means that Openstack will prevent two VM from getting the same CPU scheduled (without CPU pinning). > > On Thu, Nov 12, 2020 at 9:09 AM Budai Laszlo > wrote: > > Hello all, > > we are comparing the behavior of our queens openstack with kilo. In > queens we are observing an increase in the steal time reported in the guest > along with the increase of the load averages. All this is happening while > the host is not overloaded, and reports 80+ idle time > > Initially we have suspected that the overcommit might be the reason of > the steal, so we have migrated vms, and now there are 42 vCPUs used out of > the 48 pCPUs, but in the guest we still observe the steal time. > > with similar configuration in openstack kilo we see smaller load, and > almost no steal time at all. > > what could be the reason of this steal time when there is no CPU > overcommit? > > Thank you for any ideas. > > Kind regards, > Laszlo >
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From stephenfin at redhat.com  Thu Nov 12 19:03:37 2020
From: stephenfin at redhat.com (Stephen Finucane)
Date: Thu, 12 Nov 2020 19:03:37 +0000
Subject: [nova] How best to provide virtio-based input devices?
Message-ID: 

Design discussion for interested parties.

As discussed during the nova team meeting today, we're looking at providing support for virtio-based input devices in nova, given benefits w.r.t. performance and security (no emulated USB bus!). The current proposal I have for this is to introduce a new image metadata property, 'hw_input_bus', and extend the existing 'hw_pointer_model' image metadata property, which currently only accepts a 'usbtablet' value, to accept 'mouse' or 'tablet' values. This means you'd end up with a matrix something like so:

+------------------+--------------+------------------------+
| hw_pointer_model | hw_input_bus | Result                 |
+------------------+--------------+------------------------+
| -                | -            | PS2-based mouse [*]    |
| usbtablet        | -            | USB-based tablet       |
| usbtablet        | usb          | USB-based tablet       |
| usbtablet        | virtio       | **ERROR**              |
| mouse            | -            | USB-based mouse        |
| mouse            | usb          | USB-based mouse        |
| mouse            | virtio       | virtio-based mouse     |
| tablet           | -            | USB-based tablet       |
| tablet           | usb          | USB-based tablet       |
| tablet           | virtio       | virtio-based tablet    |
+------------------+--------------+------------------------+

[*] libvirt adds these by default on x86 hosts

dansmith noted that the only reason to select the 'mouse' pointer model nowadays is for compatibility, and something like a virtio-based mouse didn't really make sense. That being the case, I agree that this change is likely more complex than it needs to be. We do, however, disagree on the remedy.

dansmith's idea is to drop the 'hw_input_bus' image metadata property entirely and simply add a new 'virtiotablet' value to 'hw_pointer_model' instead. This still leaves the question of what bus we should use for keyboards, and his suggestion there is to extrapolate out and use virtio for keyboards if 'hw_pointer_model=virtiotablet' is specified and presumably USB if 'hw_pointer_model=usbtablet'.
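For concreteness, under that alternative an operator would request virtio input with a single image property, along the lines of the sketch below; none of this has merged, 'virtiotablet' is only a suggested value, and the image name is just a placeholder:

    openstack image set --property hw_pointer_model=virtiotablet my-image

whereas the two-property proposal above would express the same thing as 'hw_pointer_model=tablet' plus 'hw_input_bus=virtio'.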
Needless to say, I don't like this idea and prefer we took another tack and kept 'hw_input_bus' but didn't build on 'hw_pointer_model' and instead "deprecated" it. We can't really ever remove an image metadata property, since that would be a breaking upgrade, which means we'll eventually be left with effectively dead code to maintain forever. However, I don't think that's a big deal. 'hw_pointer_model=usbtablet' is already on a path to obsolescence as the Q35 machine type slowly becomes the default on x86 hosts and the use of non-x86 hosts grows, since neither supports PS2 and both must use a USB-based input device. In addition, I think the extrapolation of 'virtiotablet' to mean also virtio-based keyboard is unclear and leaves a gaping hole w.r.t. requesting USB-based keyboards on non-AArch64 hosts (where it's currently added by default), since we don't currently do this extrapolation and introducing it would be a breaking change on x86 hosts (instances would suddenly switch from PS2-based keyboards to USB-based ones).

We need to decide what approach to go for before I rework this. If anyone has input, particularly operators that think they'd use this feature, I'd love to hear it so that I can t̶e̶l̶l̶ ̶d̶a̶n̶s̶m̶i̶t̶h̶ ̶t̶o̶ ̶s̶h̶o̶v̶e̶ ̶i̶t̶ come to the best possible solution ;-) Feel free to either reply here or on the review [1].

Cheers,
Stephen

[1] https://review.opendev.org/#/c/756552/

From laurentfdumont at gmail.com  Thu Nov 12 19:10:30 2020
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Thu, 12 Nov 2020 14:10:30 -0500
Subject: Queens steal time [nova]
In-Reply-To: <7ab6b03f-edf1-5fb2-0887-a2c3cc47f1b6@gmail.com>
References: <91f41182-bf21-659e-5204-ccd83f535694@gmail.com> <7ab6b03f-edf1-5fb2-0887-a2c3cc47f1b6@gmail.com>
Message-ID: 

Is the vCPU placement the same between the two? (if you do a virsh dumpxml of all the VMs on the compute, you should be able to see which host CPUs were mapped to which VM CPUs). Are the same VMs seeing a different steal % once the compute is upgraded from Kilo to Queens? I guess that overall steal % will also be impacted by how busy and noisy the VMs are. My knowledge is far from exhaustive but I would be surprised at a loss of CPU performance between Kilo and Queens.

On Thu, Nov 12, 2020 at 12:35 PM Budai Laszlo wrote: > Hi Laurent, > > Thank you for your answer. > I agree with you that without the pinning the steal time can appear > anytime. What is strange to me that in openstack Kilo the steal is > significantly smaller even when there is some overcommit. So I am wondering > where to look for the difference? > > Kind regards, > Laszlo > > On 11/12/20 5:01 PM, Laurent Dumont wrote: > > Hi, > > Technically, I think that you always run the chance of "steal" time if you > don't pin CPUs. I'm not sure if Openstack is "smart" enough to allocate > CPUs that are not mapped to anyone in sequence (and start to overcommit > once it's necessary. You might have 42 out of 48 free CPUs but I don't > think that means that Openstack will prevent two VM from getting the same > CPU scheduled (without CPU pinning). > > On Thu, Nov 12, 2020 at 9:09 AM Budai Laszlo > wrote: > >> Hello all, >> >> we are comparing the behavior of our queens openstack with kilo. In >> queens we are observing an increase in the steal time reported in the guest >> along with the increase of the load averages.
All this is happening while >> the host is not overloaded, and reports 80+ idle time >> >> Initially we have suspected that the overcommit might be the reason of >> the steal, so we have migrated vms, and now there are 42 vCPUs used out of >> the 48 pCPUs, but in the gues