From mnaser at vexxhost.com Sat Feb 1 08:25:17 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 1 Feb 2020 09:25:17 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: On Wed, Jan 22, 2020 at 9:10 AM info at dantalion.nl wrote: > > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? Have you managed to hear back on this, Corne? > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > From liang.a.fang at intel.com Sat Feb 1 10:10:39 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Sat, 1 Feb 2020 10:10:39 +0000 Subject: [cinder] last call for ussuri spec comments In-Reply-To: References: <92eaa3f2-3bfe-3c94-292b-3d91c8256753@gmail.com> Message-ID: Hi Sean Thanks for your comment. Currently only rbd and sheepdog are mounted directly by qemu. Others (including nvmeof) are mounted on the host OS first. See: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L169 rbd is popular today. It's a pity that rbd would not be supported by volume local cache. The advantage of mounting directly by qemu is security, right? The volume data would not be exposed to the host OS. But rbd latency is not good (more than 1 millisecond). On the other hand, if an Optane SSD (~10us) is used as the volume local cache, RandRead latency would be ~50us once the cache hit rate rises to ~95%. If persistent memory (with latency ~0.x us) is used as the volume local cache, latency would be very small (I have no data on hand; I will measure after Chinese New Year). I believe such a performance boost would attract operators. It is not impossible for some operators to switch rbd back to being mounted on the host OS. At least we can have the infrastructure ready for them. Regards Liang -----Original Message----- From: Sean Mooney Sent: Friday, January 31, 2020 8:19 AM To: Brian Rosmaita ; openstack-discuss at lists.openstack.org Subject: Re: [cinder] last call for ussuri spec comments On Thu, 2020-01-30 at 17:18 -0500, Brian Rosmaita wrote: > On 1/30/20 11:27 AM, Brian Rosmaita wrote: > > The following specs have two +2s. I believe that all expressed > > concerns have been addressed. I intend to merge them at 22:00 UTC > > today unless a serious issue is raised before then. > > > > https://review.opendev.org/#/c/684556/ - support volume-local-cache > > Some concerns were raised with the above patch. Liang, please address > them. Don't worry if you can't get them done before the Friday > deadline, I'm willing to give you a spec freeze exception. I think > the concerns raised will be useful in making clarifications to the > spec, but also in pointing out things that reviewers should keep in > mind when reviewing the implementation. They also point out some > testing directions that will be useful in validating the feature. the one thing i want to raise related to this spec is that the design direction from the nova side is problematic.
when reviewing https://review.opendev.org/#/c/689070/ it was noted that the nova libvirt driver has been moving away from mounting cinder volumes on the host and then passing that block device to qemu, in favor of using qemu's native ability to connect directly to remote storage. looking at the latest version of the nova spec https://review.opendev.org/#/c/689070/8/specs/ussuri/approved/support-volume-local-cache.rst at 49 i noted that this feature will only be capable of caching volumes that have already been mounted on the host. while keeping the management of the volumes in os-brick means that the overall impact on nova is minimal, considering that this feature would no longer work if we moved to using qemu native iscsi support, and that it will not work with NVMEoF volumes or ceph, i'm not sure that the nova side will be approved. when i first reviewed the nova spec i mentioned that i believed local caching could be a useful feature but this really feels like a capability that should be developed in qemu, specifically the ability to provide a second device as a cache for any disk device assigned to an instance. that would allow local caching to be done regardless of the storage backend used. qemu cannot do that today so i understand that this approach is in the short to medium term likely the only workable solution but i am concerned that the cinder side will be completed in ussuri and the nova side will not. > > With respect to the other spec: > > > https://review.opendev.org/#/c/700977 - add backup id to volume > > metadata > > Rajat had a few vocabulary clarifications that can be addressed in a > follow-up patch. Conceptually, this spec is fine, so I went ahead and > merged it. > > > > > cheers, > > brian > > > From amy at demarco.com Sat Feb 1 15:58:46 2020 From: amy at demarco.com (Amy Marrich) Date: Sat, 1 Feb 2020 09:58:46 -0600 Subject: [horizon] [keystone] Re: [User-committee] Help In-Reply-To: References: Message-ID: <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> Pradeep, I am sending this to the OpenStack discuss mailing list where you might be able to receive more help. I have tagged both the Horizon and Keystone teams as the error seems to be from Horizon and concerning Keystone. In order to provide more assistance, information as to what you were doing at the time will be needed. Please confirm this is OpenStack Rocky on CentOS and how you installed, from scratch, TripleO, OpenStack-Ansible, etc. Thanks, Amy (spotz) > On Feb 1, 2020, at 7:11 AM, pradeep pal wrote: > > > Rocky+Centos 7.7 64bit, > > 2020-02-01 18:26:38.063938 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option > 2020-02-01 18:26:38.549453 mod_wsgi (pid=82228): Target WSGI script '/usr/bin/keystone-wsgi-public' cannot be loaded as Python module. > 2020-02-01 18:26:38.549516 mod_wsgi (pid=82228): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-public'.
> 2020-02-01 18:26:38.549558 Traceback (most recent call last): > 2020-02-01 18:26:38.549600 File "/usr/bin/keystone-wsgi-public", line 54, in > 2020-02-01 18:26:38.549666 application = initialize_public_application() > 2020-02-01 18:26:38.549695 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 24, in initialize_public_application > 2020-02-01 18:26:38.549763 name='public', config_files=flask_core._get_config_files()) > 2020-02-01 18:26:38.549799 File "/usr/lib/python2.7/site-packages/keystone/server/flask/core.py", line 149, in initialize_application > 2020-02-01 18:26:38.549862 keystone.server.configure(config_files=config_files) > 2020-02-01 18:26:38.549897 File "/usr/lib/python2.7/site-packages/keystone/server/__init__.py", line 28, in configure > 2020-02-01 18:26:38.549958 keystone.conf.configure() > 2020-02-01 18:26:38.549988 File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 125, in configure > 2020-02-01 18:26:38.550040 help='Do not monkey-patch threading system modules.')) > 2020-02-01 18:26:38.550084 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2501, in __inner > 2020-02-01 18:26:38.550137 result = f(self, *args, **kwargs) > 2020-02-01 18:26:38.550164 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2776, in register_cli_opt > 2020-02-01 18:26:38.550225 raise ArgsAlreadyParsedError("cannot register CLI option") > 2020-02-01 18:26:38.550276 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option > > > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Feb 2 00:36:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 01 Feb 2020 18:36:58 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-15 2nd Update (# 2 weeks left to complete) Message-ID: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-15 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * 2 weeks left to finish the work. * QA tooling: ** Tempest is dropping py3.5[1]. Tempest plugins can drop py3.5 now if they still support it. ** Updating tox with basepython python3 [2]. ** Pining stable/rocky testing with 23.0.0[3]. ** Updating neutron-tempest-plugins rocky jobs to run with py3 on master and py2 on stable/rocky gate. ** Ironic-tempest-plugin jobs are failing on uploading image on glance. Debugging in progress. * zipp failure fix on py3.5 job is merged. * 5 services listed below still not merged the patches, I request PTLs to review it on priority. Project wise status and need reviews: ============================ Phase-1 status: The OpenStack services have not merged the py2 drop patches: NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). * Adjutant * ec2-api * Karbor * Masakari * Qinling * Tricircle Phase-2 status: This is ongoing work and I think most of the repo have patches up to review. Try to review them on priority. If any query and I missed to respond on review, feel free to ping me in irc. * Most of the tempest plugins and python client patches are good to merge. 
* Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open How you can help: ============== - Review the patches. Push the patches if I missed any repo. [1] https://review.opendev.org/#/c/704840/ [2] https://review.opendev.org/#/c/704688/ [3] https://review.opendev.org/#/c/705098/ -gmann From kiseok7 at gmail.com Sun Feb 2 02:26:15 2020 From: kiseok7 at gmail.com (Kim, Kiseok) Date: Sun, 2 Feb 2020 11:26:15 +0900 Subject: [nova] I would like to add another option for cross_az_attach In-Reply-To: References: Message-ID: Hello all, Can I add the code I used above to the NOVA upstream code? Two changes: * add option "enable_az_attach_list" in nove/[cinder] * passing checking availability zone in check_availability_zone function if there is enable_az_attach_list Thanks, Kiseok Kim On Wed, Jan 22, 2020 at 3:09 PM Kim KS wrote: > Hello, Brin and Matt. > and Thank you. > > I'll tell you more about my use case: > > * First, I create an instance(I'll call it NODE01) and a volume in same > AZ. (so I use 'cross_az_attach = False' option) > * and I create a cinder volume(I'll call it PV01) in different Volume > Zone(I'll call it KubePVZone) > * and then I would like to attach PV01 volume to NODE01 instance. > > KubePVZone is volume zone for kubernetes's persistent volume and NODE01 is > a kubernetes' node. > KubePVZone's volumes can be attached to the other kubernetes's nodes. > > So I would like to use options like: > > [cinder] > cross_az_attach = False > enable_az_attach_list = KubePVZone > > Let me know if there is a lack of explanation. > > I currently use the code by adding in to check_availability_zone method: > > > https://github.com/openstack/nova/blob/058e77e26c1b52ab7d3a79a2b2991ca772318105/nova/volume/cinder.py#L534 > > + if volume['availability_zone'] in > CONF.cinder.enable_az_attach_list: > + LOG.info("allowed AZ for attaching in different availability > zone: %s", > + volume['availability_zone']) > + return > > Best, > Kiseok Kim > > > > 2020. 1. 21. 오전 11:35, Brin Zhang(张百林) 작성: > > > > Hi, Kim KS: > > "cross_az_attach"'s default value is True, that means a llow attach > between instance and volume in different availability zones. > > If False, volumes attached to an instance must be in the same > availability zone in Cinder as the instance availability zone in Nova. > Another thing is, you should care booting an BFV instance from "image", and > this should interact the " allow_availability_zone_fallback" in Cinder, if > " allow_availability_zone_fallback=False" and *that* request AZ does not in > Cinder, the request will be fail. > > > > > > About specify AZ to unshelve a shelved_offloaded server, about the > cross_az_attach something you can know > > > https://github.com/openstack/nova/blob/master/releasenotes/notes/bp-specifying-az-to-unshelve-server-aa355fef1eab2c02.yaml > > > > Availability Zones docs, that contains some description with > cinder.cross_az_attach > > > https://docs.openstack.org/nova/latest/admin/availability-zones.html#implications-for-moving-servers > > > > cross_az_attach configuration: > https://docs.openstack.org/nova/train/configuration/config.html#cinder.cross_az_attach > > > > And cross_az_attach with the server is in > > > https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L523-L545 > > > > I am not sure why you are need " enable_az_attach_list = AZ1,AZ2" > configuration? 
> > > > brinzhang > > > > > >> cross_az_attach > >> > >> Hello all, > >> > >> In nova with setting [cinder]/ cross_az_attach option to false, nova > creates > >> instance and volume in same AZ. > >> > >> but some of usecase (in my case), we need to attach new volume in > different > >> AZ to the instance. > >> > >> so I need two options. > >> > >> one is for nova block device mapping and attaching volume and another > is for > >> attaching volume in specified AZ. > >> > >> [cinder] > >> cross_az_attach = False > >> enable_az_attach_list = AZ1,AZ2 > >> > >> how do you all think of it? > >> > >> Best, > >> Kiseok > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pspal83 at hotmail.com Sun Feb 2 06:58:04 2020 From: pspal83 at hotmail.com (pradeep pal) Date: Sun, 2 Feb 2020 06:58:04 +0000 Subject: [horizon] [keystone] Re: [User-committee] Help In-Reply-To: <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> References: , <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> Message-ID: Hi, Thanks for your reply. I have investigated and found that the issue came from using the wrong IP for the admin/public/internal endpoints with keystone-manage. Regards Pradeep Get Outlook for iOS ________________________________ From: Amy Marrich Sent: Saturday, February 1, 2020 9:28:46 PM To: pradeep pal ; openstack-discuss Cc: user-committee at lists.openstack.org Subject: [horizon] [keystone] Re: [User-committee] Help Pradeep, I am sending this to the OpenStack discuss mailing list where you might be able to receive more help. I have tagged both the Horizon and Keystone teams as the error seems to be from Horizon and concerning Keystone. In order to provide more assistance, information as to what you were doing at the time will be needed. Please confirm this is OpenStack Rocky on CentOS and how you installed, from scratch, TripleO, OpenStack-Ansible, etc. Thanks, Amy (spotz) On Feb 1, 2020, at 7:11 AM, pradeep pal wrote: Rocky+Centos 7.7 64bit, 2020-02-01 18:26:38.063938 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option 2020-02-01 18:26:38.549453 mod_wsgi (pid=82228): Target WSGI script '/usr/bin/keystone-wsgi-public' cannot be loaded as Python module. 2020-02-01 18:26:38.549516 mod_wsgi (pid=82228): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-public'.
2020-02-01 18:26:38.549558 Traceback (most recent call last): 2020-02-01 18:26:38.549600 File "/usr/bin/keystone-wsgi-public", line 54, in 2020-02-01 18:26:38.549666 application = initialize_public_application() 2020-02-01 18:26:38.549695 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 24, in initialize_public_application 2020-02-01 18:26:38.549763 name='public', config_files=flask_core._get_config_files()) 2020-02-01 18:26:38.549799 File "/usr/lib/python2.7/site-packages/keystone/server/flask/core.py", line 149, in initialize_application 2020-02-01 18:26:38.549862 keystone.server.configure(config_files=config_files) 2020-02-01 18:26:38.549897 File "/usr/lib/python2.7/site-packages/keystone/server/__init__.py", line 28, in configure 2020-02-01 18:26:38.549958 keystone.conf.configure() 2020-02-01 18:26:38.549988 File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 125, in configure 2020-02-01 18:26:38.550040 help='Do not monkey-patch threading system modules.')) 2020-02-01 18:26:38.550084 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2501, in __inner 2020-02-01 18:26:38.550137 result = f(self, *args, **kwargs) 2020-02-01 18:26:38.550164 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2776, in register_cli_opt 2020-02-01 18:26:38.550225 raise ArgsAlreadyParsedError("cannot register CLI option") 2020-02-01 18:26:38.550276 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Mon Feb 3 02:02:22 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Mon, 3 Feb 2020 10:02:22 +0800 Subject: [kolla] ujson issue affected to few containers. In-Reply-To: References: <801b30a3-62a1-a1e9-c0ef-973baa19b4a0@binero.se> Message-ID: We tested the Rocky deployment based on Ubuntu binary, and confirm that the binary one was not affect the issue. So it means that it will need to use binary type on Rocky if the user going to use Gnocchi + Ceilometer. - Eddie Radosław Piliszek 於 2020年1月31日 週五 下午4:18寫道: > I checked ceilometer and it seems they dropped ujson in queens, it's > only the gnocchi client that still uses it, unfortunately. > > -yoctozepto > > pt., 31 sty 2020 o 09:01 Radosław Piliszek > napisał(a): > > > > Well, the release does not look it's going to happen ever. > > Ubuntu binary rocky probably froze in time so it has a higher chance > > of working, though a potential rebuild will probably kill it as well. > > > > Let's start a general thread about ujson. > > > > -yoctozepto > > > > pt., 31 sty 2020 o 03:26 Eddie Yen napisał(a): > > > > > > In summary, looks like we have to wait the project release the fixed > code on PyPI or compile the source code from its git project. > > > Otherwise these containers may still affected this issue and can't > deploy or working. > > > > > > We may going to try the Ubuntu binary deployment to see it also affect > of not. Perhaps the user may going to deploy with binary on Ubuntu before > the fix release to PyPI. > > > > > > - Eddie > > > > > > Tobias Urdin 於 2020年1月30日 週四 下午4:29寫道: > > >> > > >> Seeing this issue when messing around with Gnocchi on Ubuntu 18.04 as > well. > > >> Temp solved it by installing ujson from master as suggested in [1] > instead of pypi. 
> > >> > > >> [1] https://github.com/esnme/ultrajson/issues/346 > > >> > > >> On 1/30/20 9:10 AM, Eddie Yen wrote: > > >> > > >> Hi Radosław, > > >> > > >> Sorry about lost distro information, the distro we're using is Ubuntu. > > >> > > >> We have an old copy of ceilometer container image, the ujson.so > version between old and latest are both 1.35 > > >> But only latest one affected this issue. > > >> > > >> BTW, I read the last reply on issue page. Since he said the python 3 > with newer GCC is OK, I think it may caused by python version issue or GCC > compiler versioning. > > >> It may become a huge architect if it really caused by compiling > issue, if Ubuntu updated GCC or python. > > >> > > >> Radosław Piliszek 於 2020年1月30日 週四 > 下午3:48寫道: > > >>> > > >>> Hi Eddie, > > >>> > > >>> the issue is that the project did *not* do a release. > > >>> The latest is still 1.35 from Jan 20, *2016*... [1] > > >>> > > >>> You said only Rocky source - but is this ubuntu or centos? > > >>> > > >>> Also, by the looks of [2] master ceilometer is no longer affected, > but > > >>> monasca and mistral might still be if they call affected paths. > > >>> > > >>> The project looks dead so we are fried unless we override and start > > >>> using its sources from git (hacky hacky). > > >>> > > >>> [1] https://pypi.org/project/ujson/#history > > >>> [2] http://codesearch.openstack.org/?q=ujson&i=nope&files=&repos= > > >>> > > >>> -yoctozepto > > >>> > > >>> > > >>> czw., 30 sty 2020 o 03:31 Eddie Yen > napisał(a): > > >>> > > > >>> > Hi everyone, > > >>> > > > >>> > I'm not sure it should be bug report or not. So I email out about > this issue. > > >>> > > > >>> > In these days, I found the Rocky source deployment always failed > at Ceilometer bootstrapping. Then I found it failed at ceilometer-upgrade. > > >>> > So I tried to looking at ceilometer-upgrade.log and the error > shows it failed to import ujson. > > >>> > > > >>> > https://pastebin.com/nGqsM0uf > > >>> > > > >>> > Then I googled it and found this issue is already happened and > released fixes. > > >>> > https://github.com/esnme/ultrajson/issues/346 > > >>> > > > >>> > But it seems like the container still using the questionable one, > even today (Jan 30 UTC+8). > > >>> > And this not only affected to Ceilometer, but may also Gnocchi. > > >>> > > > >>> > I think we have to patch it, but not sure about the workaround. > > >>> > Does anyone have good idea? > > >>> > > > >>> > Many thanks, > > >>> > Eddie. > > >> > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Mon Feb 3 06:11:50 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Mon, 3 Feb 2020 06:11:50 +0000 (UTC) Subject: [openstack-dev][kuryr] How to add new worker node References: <420285313.629880.1580710310910.ref@mail.yahoo.com> Message-ID: <420285313.629880.1580710310910@mail.yahoo.com> Hi , I successfully install kuryr-kubernetes using below linkhttps://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/basic.html How to add a new external worker node to existing controller node. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From licanwei_cn at 163.com Mon Feb 3 07:17:04 2020 From: licanwei_cn at 163.com (licanwei) Date: Mon, 3 Feb 2020 15:17:04 +0800 (GMT+08:00) Subject: [Watcher] confused about meeting schedule In-Reply-To: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: Hi Corne, Sorry for the confusion, it's my fault. i don't realised that the week changed with the new year. There were the Chinese new year holiday in the last two weeks, so we cancelled the irc meeting. Because of the 2019-nCov, we had told to stay at home, i don't know if i can go to work next week, if i can, we will have the irc meeting on 12th Febr. and if not, i will send a notification mail. Thanks, licanwei | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 01/22/2020 16:07, info at dantalion.nl wrote: Hello everyone, The documentation for Watcher states that meetings will be held on a bi-weekly basis on odd-weeks, however, last meeting was held on the 8th of January which is not an odd-week. Today I was expecting a meeting as the meetings are held bi-weekly and the last one was held on the 8th of January, however, there was none. Can someone clarify when the next meeting will be held and the subsequent one after that? If these are on even weeks we should also update Watcher's documentation. Kind regards, Corne lukken -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Feb 3 07:30:54 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 3 Feb 2020 08:30:54 +0100 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: Message-ID: One comment inline, On Fri, Jan 24, 2020 at 8:21 PM Alfredo Moralejo Alonso wrote: > > Hi, > > We were given access to CBS to build centos8 dependencies a couple of days > ago and we are still in the process of re-bootstraping it. I hope we'll > have all that is missing in the next days. > > See my comments below. > > Best regards, > > Alfredo > > > On Fri, Jan 24, 2020 at 7:21 PM Wesley Hayutin > wrote: > >> Greetings, >> >> I know the ceph repo is in progress. >> TripleO / RDO is not releasing opendaylight >> >> Can the RDO team comment on the rest of the missing packages here please? >> >> Thank you!! >> >> https://review.opendev.org/#/c/699414/9/kolla/image/build.py >> >> NOTE(mgoddard): Mark images with missing dependencies as unbuildable for >> # CentOS 8. >> 'centos8': { >> "barbican-api", # Missing uwsgi-plugin-python3 >> > We'll take care of uwsgi. > >> "ceph-base", # Missing Ceph repo >> "cinder-base", # Missing Ceph repo >> "collectd", # Missing collectd-ping and >> # collectd-sensubility packages >> > About collectd and sensu, Matthias already replied from OpsTools side > >> "elasticsearch", # Missing elasticsearch repo >> "etcd", # Missing etcd package >> > Given that etcd is not longer in CentOS base (it was in 7), I guess we'll > take care of etcd unless some other sig is building it as part of k8s > family. > >> "fluentd", # Missing td-agent repo >> > See Matthias reply. > >> "glance-base", # Missing Ceph repo >> "gnocchi-base", # Missing Ceph repo >> "hacluster-base", # Missing hacluster repo >> > > That's an alternative repo for HA related packages for CentOS: > > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > Which still does not provide packages for centos8. 
> > Note that centos8.1 includes pacemaker, corosync and pcs in > HighAvailability repo. Maybe it could be used instead of the current one. > > >> "ironic-conductor", # Missing shellinabox package >> > > shellinabox is epel. It was never used in tripleo containers, it's really > required? > It's a part of an optional ironic feature. TripleO doesn't use it by default [1] so probably fine to remove. However, there may be people using it outside of RH products. [1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/deployment/ironic/ironic-conductor-container-puppet.yaml#L125 > > >> "kibana", # Missing elasticsearch repo >> > > We never provided elasticsearch in the past, is consumed from > elasticsearch repo iirc > > >> "manila-share", # Missing Ceph repo >> "mongodb", # Missing mongodb and mongodb-server >> packages >> > > Mongodb was retired from RDO time ago as it was not longer the recommended > backend for any service. In CentOS7 is pulled from EPEL. > > >> "monasca-grafana", # Using python2 >> "nova-compute", # Missing Ceph repo >> "nova-libvirt", # Missing Ceph repo >> "nova-spicehtml5proxy", # Missing spicehtml5 package >> > > spice-html5 is pulled from epel7 was never part of RDO. Not used in > TripleO. > > >> "opendaylight", # Missing opendaylight repo >> "ovsdpdk", # Not supported on CentOS >> "sensu-base", # Missing sensu package >> > > See Matthias reply. > > >> "tgtd", # Not supported on CentOS 8 >> > > tgtd was replace by scsi-target-utils. It's was never provided in RDO, in > kolla was pulled from epel for 7 > > >> }, >> >> 'centos8+source': { >> "barbican-base", # Missing uwsgi-plugin-python3 >> "bifrost-base", # Bifrost does not support CentOS 8 >> "cyborg-agent", # opae-sdk does not support CentOS 8 >> "freezer-base", # Missing package trickle >> "masakari-monitors", # Missing hacluster repo >> "zun-compute", # Missing Ceph repo >> _______________________________________________ >> dev mailing list >> dev at lists.rdoproject.org >> http://lists.rdoproject.org/mailman/listinfo/dev >> >> To unsubscribe: dev-unsubscribe at lists.rdoproject.org >> > _______________________________________________ > dev mailing list > dev at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngompa13 at gmail.com Sun Feb 2 21:06:21 2020 From: ngompa13 at gmail.com (Neal Gompa) Date: Sun, 2 Feb 2020 16:06:21 -0500 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > wrote: > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > >> > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > >> > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > >> > wrote: > >> > > > >> > > I know it was for masakari. > >> > > Gaëtan had to grab crmsh from opensuse: > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > >> > > > >> > > -yoctozepto > >> > > >> > Thanks Wes for getting this discussion going. I've been looking at > >> > CentOS 8 today and trying to assess where we are. 
I created an > >> > Etherpad to track status: > >> > https://etherpad.openstack.org/p/kolla-centos8 > >> > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > I found them, thanks. > > > > >> > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > >> code when installing packages. It often happens on the rabbitmq and > >> grafana images. There is a prompt about importing GPG keys prior to > >> the error. > >> > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > >> > >> Related bug report? https://github.com/containers/libpod/issues/4431 > >> > >> Anyone familiar with it? > >> > > > > Didn't know about this issue. > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > It seems to be due to the use of a GPG check on the repo (as opposed > to packages). DNF doesn't use keys imported via rpm --import for this > (I'm not sure what it uses), and prompts to add the key. This breaks > without a terminal. More explanation here: > https://review.opendev.org/#/c/704782. > librepo has its own keyring for repo signature verification. -- 真実はいつも一つ!/ Always, there's only one truth! From mnaser at vexxhost.com Mon Feb 3 08:20:33 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 3 Feb 2020 09:20:33 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: Thanks for updating licanwei :) On Mon, Feb 3, 2020 at 8:23 AM licanwei wrote: > Hi Corne, > Sorry for the confusion, it's my fault. i don't realised that the week > changed with the new year. > There were the Chinese new year holiday in the last two weeks, so we > cancelled the irc meeting. > Because of the 2019-nCov, we had told to stay at home, i don't know if i > can go to work next week, if i can, we will have the irc meeting on 12th > Febr. and if not, i will send a notification mail. > > Thanks, > licanwei > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > > > > 签名由 网易邮箱大师 定制 > > On 01/22/2020 16:07, info at dantalion.nl wrote: > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? > > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Feb 3 08:26:48 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 3 Feb 2020 08:26:48 +0000 Subject: Kayobe Openstack deployment In-Reply-To: References: Message-ID: On Fri, 31 Jan 2020 at 06:06, Tony Pearce wrote: > > Thanks again Mark for your support and patience yesterday. I dont think I would have been able to go beyond that hurdle alone. 
> > I have gone back to the universe from nothing this morning. The issue I had there was actually the same issue that you had helped me with; so I have now moved past that point. I am running this in a VM and I did not have nested virtualisation enabled on the hosts so I've had to side step to get that implemented. I am sticking with the stable/train. I was not sure about this, but I had figured that as I want to deploy Openstack Train, I'd need the Kayobe stable/train. That's correct - use stable/train for kayobe, kayobe-config and a-universe-from-nothing to deploy Train. > > In terms of the docs - I may be in a good position to help here. I'm not a coder by any means, so I may be in a position to contribute back in this doc sense. That would be a great help. These first experiences with a project's documentation are always valuable when determining where the gaps are. Get in touch if you need help with the tooling. > > Teething issues aside, I really like what I am seeing from Kayobe etc. compared to my previous experience with different deployment tool this seems much more user-friendly. Glad to hear it :) > > Thanks again > > Regards > > Tony > > > On Thu, 30 Jan 2020 at 21:40, Mark Goddard wrote: >> >> On Thu, 30 Jan 2020 at 08:22, Tony Pearce wrote: >> > >> > Hi all - I wanted to ask if there was such a reference architecture / step-by-step deployment guide for Openstack / Kayobe that I could follow to get a better understanding of the components and how to go about deploying it? >> >> Hi Tony, we spoke in the #openstack-kolla IRC channel [1], but I >> thought I'd reply here for the benefit of anyone reading this. >> >> > >> > The documentation is not that great so I'm hitting various issues when trying to follow what is there on the Openstack site. There's a lot of technical things like information on variables - which is fantastic, but there's no context about them. For example, the architecture page is pretty small, when you get further on in the guide it's difficult to contextually link detail back to the architecture. >> >> As discussed in IRC, we are missing some architecture and from scratch >> walkthrough documentation in Kayobe. I've been focusing on the >> configuration reference mostly, but I think it is time to move onto >> these other areas to help new starters. >> >> > >> > I tried to do the all-in-one deployment as well as the "universe from nothing approach" but hit some issues there as well. Plus it's kind of like trying to learn how to drive a bus by riding a micro-scooter :) >> >> I would definitely recommend persevering with the universe from >> nothing demo [2], as it offers the quickest way to get a system up and >> running that you can poke at. It's also a fairly good example of a >> 'bare minimum' configuration. Could you share the issues you had with >> it? For an even simpler setup, you could try [3], which gets you an >> all-in-one control plane/compute host quite quickly. I'd suggest using >> the stable/train branch for a more stable environment. >> >> > >> > Also, the "report bug" bug link on the top of all the pages is going to an error "page does not exist" - not sure that had been realised yet. >> >> Andreas Jaeger kindly proposed a fix for this. Here's the storyboard >> link: https://storyboard.openstack.org/#!/project/openstack/kayobe. 
>> >> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-01-30.log.html#t2020-01-30T04:07:14 >> [2] https://github.com/stackhpc/a-universe-from-nothing >> [3] https://docs.openstack.org/kayobe/latest/development/automated.html#overcloud >> >> > >> > Regards, >> > >> > >> > Tony Pearce | Senior Network Engineer / Infrastructure Lead >> > Cinglevue International >> > >> > Email: tony.pearce at cinglevue.com >> > Web: http://www.cinglevue.com >> > >> > Australia >> > 1 Walsh Loop, Joondalup, WA 6027 Australia. >> > >> > Direct: +61 8 6202 0036 | Main: +61 8 6202 0024 >> > >> > Note: This email and all attachments are the sole property of Cinglevue International Pty Ltd. (or any of its subsidiary entities), and the information contained herein must be considered confidential, unless specified otherwise. If you are not the intended recipient, you must not use or forward the information contained in these documents. If you have received this message in error, please delete the email and notify the sender. >> > >> > From mdulko at redhat.com Mon Feb 3 09:08:35 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 03 Feb 2020 10:08:35 +0100 Subject: [openstack-dev][kuryr] How to add new worker node In-Reply-To: <420285313.629880.1580710310910@mail.yahoo.com> References: <420285313.629880.1580710310910.ref@mail.yahoo.com> <420285313.629880.1580710310910@mail.yahoo.com> Message-ID: On Mon, 2020-02-03 at 06:11 +0000, VeeraReddy wrote: > Hi , > > I successfully install kuryr-kubernetes using below link > https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/basic.html > > How to add a new external worker node to existing controller node. Hi, If you used default DevStack's setting for the VIF driver - that is neutron_vif, then on the node you need kubelet, kuryr-daemon and neutron-agent. Besides that if you're using containerized Kuryr, then you need to set KURYR_FORCE_IMAGE_BUILD=true in local.conf. Also you need to set K8s API endpoint using KURYR_K8S_API_URL var. We maintain a multinode configuration that we use in the gate at [1]. The settings in `vars` are for the controller node and the settings in `group-vars.subnode` are for the node. Please note that Zuul has inheritance mechanisms, meaning that more settings are inherited from kuryr-kubernetes-tempest-base [2] and devstack [3] jobs definitions. [1] https://github.com/openstack/kuryr-kubernetes/blob/master/.zuul.d/multinode.yaml [2] https://github.com/openstack/kuryr-kubernetes/blob/28b27c5de2ae10c88295a44312ec1a3d1449f99c/.zuul.d/base.yaml#L16 [3] https://github.com/openstack/devstack/blob/5c6b3c32791f6a1b6e3646e739d41ae86d866d45/.zuul.yaml#L342 Thanks, Michał > > Regards, > Veera. From info at dantalion.nl Mon Feb 3 09:28:11 2020 From: info at dantalion.nl (info at dantalion.nl) Date: Mon, 3 Feb 2020 10:28:11 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: <528f0c31-9ce4-38dd-c312-6f95a6d681bd@dantalion.nl> Hello Licanwei, I understand, we will continue irc meetings when it is possible. Your and other contributors health is naturally more important. Stay safe, Kind regards, Corne Lukken On 2/3/20 8:17 AM, licanwei wrote: > Hi Corne, > Sorry for the confusion, it's my fault. i don't realised that the week changed with the new year. > There were the Chinese new year holiday in the last two weeks, so we cancelled the irc meeting. 
> Because of the 2019-nCov, we had told to stay at home, i don't know if i can go to work next week, if i can, we will have the irc meeting on 12th Febr. and if not, i will send a notification mail. > > Thanks, > licanwei > > > > > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > > On 01/22/2020 16:07, info at dantalion.nl wrote: > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? > > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > From mdulko at redhat.com Mon Feb 3 11:53:19 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 03 Feb 2020 12:53:19 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core Message-ID: Hi, I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes project. Maysa shown numerous examples of diligent and valuable work in terms of code contribution (e.g. in network policy support), project maintenance and reviews [1]. Please express support or objections by replying to this email. Assuming that there will be no pushback, I'll proceed with granting Maysa core powers by the end of this week. Thanks, Michał [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri From ltomasbo at redhat.com Mon Feb 3 12:15:16 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Mon, 3 Feb 2020 13:15:16 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: References: Message-ID: Truly deserved! +2!! She has been doing an amazing work both implementing new features as well as chasing down bugs. On Mon, Feb 3, 2020 at 12:58 PM wrote: > Hi, > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > project. > > Maysa shown numerous examples of diligent and valuable work in terms of > code contribution (e.g. in network policy support), project maintenance > and reviews [1]. > > Please express support or objections by replying to this email. > Assuming that there will be no pushback, I'll proceed with granting > Maysa core powers by the end of this week. > > Thanks, > Michał > > [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 3 13:47:08 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Feb 2020 07:47:08 -0600 Subject: =?UTF-8?Q?Re:_[qa]_Proposing_Rados=C5=82aw_Piliszek__to_devstack_core?= In-Reply-To: <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> References: <16fe7556c9c.c1957cd473318.237960248883865388@ghanshyammann.com> <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> Message-ID: <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> Added in the core group. Welcome, Radosław to the team. 
-gmann ---- On Wed, 29 Jan 2020 08:57:43 -0600 Jens Harbott wrote ---- > On Mon, 2020-01-27 at 08:08 -0600, Ghanshyam Mann wrote: > > Hello Everyone, > > > > Radosław Piliszek (yoctozepto) has been doing nice work in devstack > > from code as well as review perspective. > > He has been helping for many bugs fixes nowadays and having him as > > Core will help us to speed up the things. > > > > I would like to propose him for Devstack Core. You can vote/feedback > > on this email. If no objection by end of this week, I will add him to > > the list. > > Big +2 from me. > > Jens (frickler) > > > From paye600 at gmail.com Mon Feb 3 15:36:45 2020 From: paye600 at gmail.com (Roman Gorshunov) Date: Mon, 3 Feb 2020 16:36:45 +0100 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> <44C6FA46-6529-453D-9AE1-8908F9E16839@windriver.com> Message-ID: Hello Bin, Yes, that's correct. When you are connecting to the GitHub, your username should be git. GitHub recognizes you by your SSH key [0]. Example: [roman at pc ~]$ ssh git at github.com PTY allocation request failed on channel 0 Hi gorshunovr! You've successfully authenticated, but GitHub does not provide shell access. Connection to github.com closed. [roman at pc ~]$ As you may see, GitHub has recognized me as 'gorshunovr', despite I was using 'git' as a username for SSH. [0] https://help.github.com/en/github/authenticating-to-github/testing-your-ssh-connection Please, use mailing list for the communication, not direct e-mail. Thank you. Best regards, -- Roman Gorshunov On Mon, Feb 3, 2020 at 3:53 PM Qian, Bin wrote: > > Hi Roman, > > Thank you for the information about GitHub mirroring. > Based on the info below, I think what we want to do is to: > 1. create a GitHub account, which has the privilege to commit to our repos, > 2. with the account, create ssh key on zuul server and upload the ssh key to GitHub > 3. use zuul tool [3] to create zuul job secret, embed it to the upload job in zuul.yaml > 4. add the upload job a post job > > But I am not very sure about the step 1, as in the reference [4] below, it states, > "For GitHub, the user parameter is git, not your personal username." > > Would you please let me know if my steps above is correct? > > Thanks, > > Bin > > ________________________________ > From: Waines, Greg > Sent: Thursday, January 30, 2020 10:21 AM > To: Qian, Bin > Cc: Eslimi, Dariush; Khalil, Ghada > Subject: FW: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > fyi > > > > From: Roman Gorshunov > Date: Thursday, January 30, 2020 at 11:38 AM > To: "openstack-discuss at lists.openstack.org" > Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > > > Hello Greg, > > > > - Create a GitHub account for the starlingx, you will have URL on > > GitHub like [0]. You may try to contact GitHub admins and ask to > > release starlingx name, as it seems to be unused. 
> > - Then create SSH key and upload onto GitHub for that account in > > GitHub interface > > - Create empty repositories under [1] to match your existing > > repositories names here [2] > > - Encrypt SSH private key as described here [3] or here [4] using this tool [5] > > - Create patch to all your projects under [2] to '/.zuul.yaml' file, > > similar to what is listed here [6] > > - Change job name shown on line 407 via URL above, description (line > > 409), git_mirror_repository variable (line 411), secret name (line 414 > > and 418), and SSH key starting from line 424 to 463, to match your > > project's name, repo path on GitHub, and SSH key > > - Submit changes to Gerrit with patches for all your project and get > > them merged. If all goes good, the next change merged would trigger > > your repositories to be synced to GitHub. Status could be seen here > > [7] - search for your newly created job manes, they should be nested > > under "upload-git-mirrorMirrors a tested project repository to a > > remote git server." > > > > Hope it helps. > > > > [0] https://github.com/starlingxxxx > > [1] https://github.com/starlingxxxx/... > > [2] https://opendev.org/starlingx/... > > [3] https://docs.openstack.org/infra/manual/zuulv3.html#secret-variables > > [4] https://docs.openstack.org/infra/manual/creators.html#mirroring-projects-to-git-mirrors > > [5] https://opendev.org/zuul/zuul/src/branch/master/tools/encrypt_secret.py > > [6] https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463 > > [7] https://zuul.openstack.org/jobs > > > > Sample content of your addition (patch) to your '/.zuul.yaml' files: > > =================================================== > > - job: > > name: starlingx-compile-upload-git-mirror > > parent: upload-git-mirror > > description: Mirrors starlingx/compile to starlingxxxx/compile > > vars: > > git_mirror_repository: starlingxxxx/compile > > secrets: > > - name: git_mirror_credentials > > secret: starlingx-compile-github-secret > > pass-to-parent: true > > > > - secret: > > name: starlingx-compile-github-secret > > data: > > user: git > > host: github.com > > host_key: github.com ssh-rsa > > AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== > > ssh_key: !encrypted/pkcs1-oaep > > - > > =================================================== > > > > Best regards, > > -- > > Roman Gorshunov > > > > From srinivasd.ctr at kaminario.com Mon Feb 3 13:50:03 2020 From: srinivasd.ctr at kaminario.com (Srinivas Dasthagiri) Date: Mon, 3 Feb 2020 13:50:03 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: , Message-ID: Hi Jay, We are working on Kaminario CI fresh configuration(since it is too old, it has broken). We have communicated with OpenStack-infra community for the suggestions and documentation. One of the community member suggested us to go with manual CI configuration(Not third party CI) instead of CI configuration with puppet architecture(Since it is moving to Ansible). But we did not get documents for all other CI components except ZuulV3 from community. Can you please confirm us that we have to go with manual CI components configuration. 
If YES, can you please provide us the necessary document to configure openstack CI without using puppet modules. If NO, Shall we configure the CI with puppet modules in order to make Kaminario CI up and running for now and later we will upgrade the CI to use Zuul V3. Shall we go with this approach? Thanks & Regards Srinivas & Venkata Krishna ________________________________ From: Ido Benda Sent: 23 January 2020 18:11 To: jsbryant at electronicjungle.net ; openstack-discuss at lists.openstack.org ; inspur.ci at inspur.com ; wangyong2017 at inspur.com ; Chengwei.Chou at infortrend.com ; Bill.Sung at infortrend.com ; Kuirong.Chen(陳奎融) ; Srinivas Dasthagiri ; nec-cinder-ci at istorage.jp.nec.com ; silvan at quobyte.com ; robert at quobyte.com ; felix at quobyte.com ; bjoern at quobyte.com ; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: RE: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... Hi Jay, Kaminario’s CI is broken since the drop of Xenial support. We are working to resolve the these issues. Ido Benda www.kaminario.com Mobile: +(972)-52-4799393 E-Mail: ido.benda at kaminario.com From: Jay Bryant Sent: Wednesday, January 22, 2020 21:51 To: openstack-discuss at lists.openstack.org; inspur.ci at inspur.com; wangyong2017 at inspur.com; Chengwei.Chou at infortrend.com; Bill.Sung at infortrend.com; Kuirong.Chen(陳奎融) ; Ido Benda ; Srinivas Dasthagiri ; nec-cinder-ci at istorage.jp.nec.com; silvan at quobyte.com; robert at quobyte.com; felix at quobyte.com; bjoern at quobyte.com; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... All, We once again are at the point in the release where we are talking about 3rd Party CI and what is going on for Cinder. At the moment I have analyzed drivers that have not successfully reported results on a Cinder patch in 30 or more days and have put together the following list of drivers to be unsupported in the Ussuri release: * Inspur Drivers * Infortrend * Kaminario * NEC * Quobyte * Zadara * HPE Drivers If your name is in the list above you are receiving this e-mail directly, not just through the mailing list. If you are working on resolving CI issues please let me know so we can discuss how to proceed. In addition to the fact that we will be pushing up unsupported patches for the drivers above, we have already unsupported and removed a number of drivers during this release. They are as follows: * Unsupported: * MacroSAN Driver * Removed: * ProphetStor Driver * Nimble Storage Driver * Veritas Access Driver * Veritas CNFS Driver * Virtuozzo Storage Driver * Huawei FusionStorage Driver * Sheepdog Storage Driver Obviously we are reaching the point that the number of drivers leaving the community is concerning and it has sparked discussions around the fact that maybe our 3rd Party CI approach isn't working as intended. So what do we do? Just mark drivers unsupported and no longer remove drivers? Do we restore drivers that have recently been removed? We are planning to have further discussion around these questions at our next Cinder meeting in #openstack-meeting-4 on Wednesday, 1/29/20 at 14:00 UTC. If you have thoughts or strong opinions around this topic please join us. Thank you! Jay Bryant jsbryant at electronicjungle.net IRC: jungleboyj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Mon Feb 3 15:49:43 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 3 Feb 2020 16:49:43 +0100 Subject: =?UTF-8?Q?Re=3A_=5Bqa=5D_Proposing_Rados=C5=82aw_Piliszek_to_devstack_co?= =?UTF-8?Q?re?= In-Reply-To: <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> References: <16fe7556c9c.c1957cd473318.237960248883865388@ghanshyammann.com> <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> Message-ID: Thank you, Ghanshyam. I will do my best. -yoctozepto pon., 3 lut 2020 o 15:03 Ghanshyam Mann napisał(a): > > Added in the core group. Welcome, Radosław to the team. > > -gmann > > > ---- On Wed, 29 Jan 2020 08:57:43 -0600 Jens Harbott wrote ---- > > On Mon, 2020-01-27 at 08:08 -0600, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > Radosław Piliszek (yoctozepto) has been doing nice work in devstack > > > from code as well as review perspective. > > > He has been helping for many bugs fixes nowadays and having him as > > > Core will help us to speed up the things. > > > > > > I would like to propose him for Devstack Core. You can vote/feedback > > > on this email. If no objection by end of this week, I will add him to > > > the list. > > > > Big +2 from me. > > > > Jens (frickler) > > > > > > > > From amy at demarco.com Mon Feb 3 15:50:50 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 3 Feb 2020 09:50:50 -0600 Subject: UC Nominations now open Message-ID: The nomination period for the February User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the two sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations can be made by sending an email to the user-committee at lists.openstack.org mailing-list[0], with the subject: “UC candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. Criteria for AUC status can be found at https://superuser.openstack.org/articles/auc-community/. If you are still not sure of your status and would like to verify in advance please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) as we are serving as the Election Officials. Thanks, Amy Marrich (spotz) 0 - Please make sure you are subscribed to this list before sending in your nomination. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Feb 3 16:02:36 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 Feb 2020 17:02:36 +0100 Subject: [oslo] drop 2.7 support - track releases Message-ID: Hello, FYI you can track the dropping of py2.7 support in oslo by using: https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) Topic: oslo_drop_py2_support We release a major version each time an oslo projects drop the py2.7 support. 
-- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Feb 3 16:32:08 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 Feb 2020 17:32:08 +0100 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: References: Message-ID: The goal of this thread is to track oslo releases related to drop of py2.7 support. By creating this thread I want to isolate the oslo status from the " drop-py27-support " to help us to track our advancement internally in oslo. Le lun. 3 févr. 2020 à 17:02, Herve Beraud a écrit : > Hello, > > FYI you can track the dropping of py2.7 support in oslo by using: > > https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > > Topic: oslo_drop_py2_support > > We release a major version each time an oslo projects drop the py2.7 > support. > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Qian at windriver.com Mon Feb 3 16:33:43 2020 From: Bin.Qian at windriver.com (Qian, Bin) Date: Mon, 3 Feb 2020 16:33:43 +0000 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> <44C6FA46-6529-453D-9AE1-8908F9E16839@windriver.com> , Message-ID: Roman, That works on my account too with specifying my private key. Thanks, Bin ________________________________________ From: Roman Gorshunov [paye600 at gmail.com] Sent: Monday, February 03, 2020 7:36 AM To: openstack-discuss at lists.openstack.org Cc: Qian, Bin Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx Hello Bin, Yes, that's correct. When you are connecting to the GitHub, your username should be git. GitHub recognizes you by your SSH key [0]. Example: [roman at pc ~]$ ssh git at github.com PTY allocation request failed on channel 0 Hi gorshunovr! You've successfully authenticated, but GitHub does not provide shell access. Connection to github.com closed. [roman at pc ~]$ As you may see, GitHub has recognized me as 'gorshunovr', despite I was using 'git' as a username for SSH. [0] https://help.github.com/en/github/authenticating-to-github/testing-your-ssh-connection Please, use mailing list for the communication, not direct e-mail. Thank you. Best regards, -- Roman Gorshunov On Mon, Feb 3, 2020 at 3:53 PM Qian, Bin wrote: > > Hi Roman, > > Thank you for the information about GitHub mirroring. > Based on the info below, I think what we want to do is to: > 1. create a GitHub account, which has the privilege to commit to our repos, > 2. with the account, create ssh key on zuul server and upload the ssh key to GitHub > 3. use zuul tool [3] to create zuul job secret, embed it to the upload job in zuul.yaml > 4. add the upload job a post job > > But I am not very sure about the step 1, as in the reference [4] below, it states, > "For GitHub, the user parameter is git, not your personal username." > > Would you please let me know if my steps above is correct? > > Thanks, > > Bin > > ________________________________ > From: Waines, Greg > Sent: Thursday, January 30, 2020 10:21 AM > To: Qian, Bin > Cc: Eslimi, Dariush; Khalil, Ghada > Subject: FW: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > fyi > > > > From: Roman Gorshunov > Date: Thursday, January 30, 2020 at 11:38 AM > To: "openstack-discuss at lists.openstack.org" > Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > > > Hello Greg, > > > > - Create a GitHub account for the starlingx, you will have URL on > > GitHub like [0]. You may try to contact GitHub admins and ask to > > release starlingx name, as it seems to be unused. 
> > - Then create SSH key and upload onto GitHub for that account in > > GitHub interface > > - Create empty repositories under [1] to match your existing > > repositories names here [2] > > - Encrypt SSH private key as described here [3] or here [4] using this tool [5] > > - Create patch to all your projects under [2] to '/.zuul.yaml' file, > > similar to what is listed here [6] > > - Change job name shown on line 407 via URL above, description (line > > 409), git_mirror_repository variable (line 411), secret name (line 414 > > and 418), and SSH key starting from line 424 to 463, to match your > > project's name, repo path on GitHub, and SSH key > > - Submit changes to Gerrit with patches for all your project and get > > them merged. If all goes good, the next change merged would trigger > > your repositories to be synced to GitHub. Status could be seen here > > [7] - search for your newly created job manes, they should be nested > > under "upload-git-mirrorMirrors a tested project repository to a > > remote git server." > > > > Hope it helps. > > > > [0] https://github.com/starlingxxxx > > [1] https://github.com/starlingxxxx/... > > [2] https://opendev.org/starlingx/... > > [3] https://docs.openstack.org/infra/manual/zuulv3.html#secret-variables > > [4] https://docs.openstack.org/infra/manual/creators.html#mirroring-projects-to-git-mirrors > > [5] https://opendev.org/zuul/zuul/src/branch/master/tools/encrypt_secret.py > > [6] https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463 > > [7] https://zuul.openstack.org/jobs > > > > Sample content of your addition (patch) to your '/.zuul.yaml' files: > > =================================================== > > - job: > > name: starlingx-compile-upload-git-mirror > > parent: upload-git-mirror > > description: Mirrors starlingx/compile to starlingxxxx/compile > > vars: > > git_mirror_repository: starlingxxxx/compile > > secrets: > > - name: git_mirror_credentials > > secret: starlingx-compile-github-secret > > pass-to-parent: true > > > > - secret: > > name: starlingx-compile-github-secret > > data: > > user: git > > host: github.com > > host_key: github.com ssh-rsa > > AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== > > ssh_key: !encrypted/pkcs1-oaep > > - > > =================================================== > > > > Best regards, > > -- > > Roman Gorshunov > > > > From tidwellrdev at gmail.com Mon Feb 3 16:48:30 2020 From: tidwellrdev at gmail.com (Ryan Tidwell) Date: Mon, 3 Feb 2020 10:48:30 -0600 Subject: [neutron] Bug Deputy Report Jan. 27 - Feb. 3 Message-ID: Here's this week's bug report for neutron: https://bugs.launchpad.net/neutron/+bug/1861269 Functional tests failing due to failure with getting datapath ID from ovs Fix merged to master, backports in progress https://review.opendev.org/705401 https://review.opendev.org/705400 https://review.opendev.org/705399 https://review.opendev.org/705398 https://bugs.launchpad.net/neutron/+bug/1861442 QOS minimum bandwidth rejection of non-physnet network and updates should be driver specific This was discussed in the drivers meeting. 
The short-term plan is move where validation is done, then there is likely an RFE to tackle once this fixed. https://bugs.launchpad.net/neutron/+bug/1861496 All ports of server instance are open even no security group does allow this This one is worth a look as it involves security groups potentially not filtering things as expressed by the user. I asked a follow-up question in launchpad, others should chime in. https://bugs.launchpad.net/neutron/+bug/1861502 [OVN] Mechanism driver - failing to recreate floating IP Some errors related to the OVN ML2 driver are being observed in the gate "TypeError: create_floatingip() missing 1 required positional argument: 'floatingip'" https://bugs.launchpad.net/neutron/+bug/1861670 AttributeError: 'NetworkConnectivityTest' object has no attribute 'safe_client' Fix proposed - https://review.opendev.org/#/c/705413/ https://bugs.launchpad.net/neutron/+bug/1861674 Gateway which is not in subnet CIDR is unsupported in ha router This could use some follow-up by the team for clarification Some OVN scheduler-related issues: [OVN] GW rescheduling mechanism is triggered on every Chassis updated unnecessarily https://bugs.launchpad.net/bugs/1861510 Some PoC code up for review - https://review.opendev.org/#/c/705331/ https://bugs.launchpad.net/bugs/1861509 [OVN] GW rescheduling logic is broken RFE's: https://bugs.launchpad.net/neutron/+bug/1861032 Add support for configuring dnsmasq with multiple IPv6 addresses in same subnet on same port https://bugs.launchpad.net/neutron/+bug/1861529 A port's network should be changable -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Feb 3 16:51:53 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 3 Feb 2020 10:51:53 -0600 Subject: [Ansible] European Ansible Cntributors Summit Message-ID: This was shared on the OpenStack-Ansible channel this morning and I wanted to share with everyone using Ansible who might be interested in attending. https://groups.google.com/forum/#!topic/ansible-outreach/sLte90d5hdc Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 3 17:21:50 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Feb 2020 11:21:50 -0600 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: References: Message-ID: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> ---- On Mon, 03 Feb 2020 10:32:08 -0600 Herve Beraud wrote ---- > The goal of this thread is to track oslo releases related to drop of py2.7 support. > By creating this thread I want to isolate the oslo status from the "drop-py27-support" to help us to track our advancement internally in oslo. You can still track the oslo related work with the original topic by adding the extra query for project matching string. - https://review.opendev.org/#/q/topic:drop-py27-support+(status:open+OR+status:merged)+projects:openstack/oslo Main idea to use the single topic "drop-py27-support" for this goal is to track it on OpenStack level or avoid duplicating the work etc. -gmann > > Le lun. 3 févr. 
2020 à 17:02, Herve Beraud a écrit : > > > -- > Hervé BeraudSenior Software Engineer > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > Hello, > FYI you can track the dropping of py2.7 support in oslo by using:https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > Topic: oslo_drop_py2_support > We release a major version each time an oslo projects drop the py2.7 support. > -- Hervé BeraudSenior Software Engineer > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From jmacer at 42iso.com Mon Feb 3 18:03:38 2020 From: jmacer at 42iso.com (jmacer at 42iso.com) Date: Mon, 03 Feb 2020 12:03:38 -0600 Subject: Migration to Openstack from Proxmox Message-ID: <7150dfca5e513345a33573f7d00c3a83@42iso.com> We currently are using proxmox for a envrionment management, and while we were looking at DFS options we came across openstack, and we've decided to start looking at it as a replacement for proxmox. Has anyone made this migration before? Currently we're running a few Windows Server 12/16/19 virtual machines, but mostly centOS7 virtual machines, however what we are developing are micro-services that ideally would be deployed using k8s. Does anyone have any experience migrating between the two, or any other recommendation when considering openstack? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Mon Feb 3 21:36:10 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 3 Feb 2020 21:36:10 +0000 Subject: Virtio memory balloon driver Message-ID: When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at ffff988b19478000" I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). 
This is the list: [root at alberttest1 ~]# lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon [root at alberttest1 ~]# lsusb Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled around and found this: http://www.linux-kvm.org/page/Projects/auto-ballooning It looks like memory ballooning is deprecated. How can I get rid of the driver? Also they complained about my host bridge device; they say that we should have a newer one: 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) Where can I specify the host bridge? ok ozzzo one of the devices is called "virtio memory balloon" [13:18:12] do you see that? [13:18:21] yes [13:18:47] i suggest you google that and read about what it does - i think it would [13:19:02] be worth trying to disable that device on your larger vm to see what happens [13:19:18] ok I will try that, thank you [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC (Quit: Leaving) [13:21:45] * Sheogorath[m] (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos [13:22:06] <@TrevorH> I also notice that the VM seems to be using the very old 440FX and there's a newer model of hardware available that might be worth checking [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! [13:22:32] <@TrevorH> I had one of those in about 1996 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Feb 3 22:11:23 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 03 Feb 2020 14:11:23 -0800 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > BUG: unable to handle kernel paging request at ffff988b19478000” > > > I asked in #centos and they asked me to show a list of devices from a > working VM (if I use 720G RAM it works). This is the list: > > > [root at alberttest1 ~]# lspci > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > [Natoma/Triton II] (rev 01) > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > 00:03.0 Ethernet controller: Red Hat, Inc. 
Virtio network device > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > [root at alberttest1 ~]# lsusb > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > They suspect that the “Virtio memory balloon” driver is causing the > problem, and that we should disable it. I googled around and found this: > > > http://www.linux-kvm.org/page/Projects/auto-ballooning > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0. The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage statistics. > > > Also they complained about my host bridge device; they say that we > should have a newer one: > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > Where can I specify the host bridge? For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35. > > > ok ozzzo one of the devices is called "virtio memory balloon" > > [13:18:12] do you see that? > > [13:18:21] yes > > [13:18:47] i suggest you google that and read about what it > does - i think it would > > [13:19:02] be worth trying to disable that device on your > larger vm to see what happens > > [13:19:18] ok I will try that, thank you > > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC > (Quit: Leaving) > > [13:21:45] * Sheogorath[m] > (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the > very old 440FX and there's a newer model of hardware available that > might be worth checking > > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > > [13:22:32] <@TrevorH> I had one of those in about 1996 > [0] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5840-L5852 [1] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.mem_stats_period_seconds [2] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.hw_machine_type [3] https://bugs.launchpad.net/nova/+bug/1780138 From smooney at redhat.com Mon Feb 3 22:47:26 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Feb 2020 22:47:26 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: On Mon, 2020-02-03 at 21:36 +0000, Albert Braden wrote: > When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at > ffff988b19478000" > > I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). 
This is
> the list:
>
> [root at alberttest1 ~]# lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
> 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
> 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon
> [root at alberttest1 ~]# lsusb
> Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
> Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
>
> They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled around and found this:
>
> http://www.linux-kvm.org/page/Projects/auto-ballooning
>
> It looks like memory ballooning is deprecated. How can I get rid of the driver?
http://www.linux-kvm.org/page/Projects/auto-ballooning states that no qemu that exists today implements that feature, but the fact that you see it in lspci seems to be in conflict with that. there are several references to the feature in later releases of qemu and it is documented in libvirt https://libvirt.org/formatdomain.html#elementsMemBalloon
there is no way to turn it off specifically at the moment, and I'm not aware of it being deprecated. the guest will not interact with the virtio memory balloon by default. it is there to allow the guest to free memory and return it to the host, enabling cooperation between the guests and the host for memory oversubscription. I believe this normally needs the qemu guest agent to be deployed to work fully.
with a 1.4TB vm, how much memory have you reserved on the host? qemu needs memory of its own to implement the vm emulation, and this tends to increase as the guest uses more resources. my first inclination would be to check if the vm was killed as a result of an OOM event on the host.
>
> Also they complained about my host bridge device; they say that we should have a newer one:
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> Where can I specify the host bridge?
you change this by specifying the machine type. you can use the q35 machine type instead. q35 is the replacement for i440fx, but when you enable it, it will change a lot of other parameters. I don't know whether it will disable the virtio memory balloon or not, but if you are using a large amount of memory you should also be using hugepages to reduce the overhead and improve performance.
you can either set the machine type in the config https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.hw_machine_type
[libvirt]
hw_machine_type=x86_64=q35
or in the guest image https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L60-L64
e.g. hw_machine_type=q35
note that in the image property you don't include the arch.
>
> ok ozzzo one of the devices is called "virtio memory balloon"
> [13:18:12] do you see that?
> [13:18:21] yes > [13:18:47] i suggest you google that and read about what it does - i think it would > [13:19:02] be worth trying to disable that device on your larger vm to see what happens > [13:19:18] ok I will try that, thank you > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC (Quit: Leaving) > [13:21:45] * Sheogorath[m] (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the very old 440FX and there's a newer model of > hardware available that might be worth checking > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > [13:22:32] <@TrevorH> I had one of those in about 1996 yes it is an old chip set form the 90s but it is the default that openstack has used since it was created. we will likely change that in a cycle or two but really dont be surprised that we are using 440fx by default. its not really emulating a plathform form 1996. it started that way but it has been updated with the same name kept. with that said it does not support pcie or many other fature which is why we want to move too q35. q35 however while much more modern and secure uses more memroy and does not support older operating systems so there are trade offs. if you need to run centos 5 or 6 i would not be surrpised if you have issue with q35. From smooney at redhat.com Mon Feb 3 22:56:19 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Feb 2020 22:56:19 +0000 Subject: Virtio memory balloon driver In-Reply-To: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> References: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> Message-ID: On Mon, 2020-02-03 at 14:11 -0800, Clark Boylan wrote: > On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > > BUG: unable to handle kernel paging request at ffff988b19478000” > > > > > > I asked in #centos and they asked me to show a list of devices from a > > working VM (if I use 720G RAM it works). This is the list: > > > > > > [root at alberttest1 ~]# lspci > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > > [Natoma/Triton II] (rev 01) > > > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > > > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > > > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > > > [root at alberttest1 ~]# lsusb > > > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > > > > They suspect that the “Virtio memory balloon” driver is causing the > > problem, and that we should disable it. I googled around and found this: > > > > > > http://www.linux-kvm.org/page/Projects/auto-ballooning > > > > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? 
> > Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0. > The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the > instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage > statistics. i forgot about that option. we had talked bout disableing the stats by default at one point. downstream i think we do at least via config on realtime hosts as we found the stat collect causes latency spikes. > > > > > > > Also they complained about my host bridge device; they say that we > > should have a newer one: > > > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > > > Where can I specify the host bridge? > > For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35. yes but if you enable q35 you neeed to also be aware that unlike the pc i440 machine type only 1 addtion pci slot will be allocated so if you want to allow attaching more then one volume or nic after teh vm is booted you need to adjust https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.num_pcie_ports. the more addtional pcie port you enable the more memory is required by qemu regardless of if you use them and by default even without allocating more pcie port qemu uses more memory in q35 mode then when using the pc machine type. you should also be aware that ide bus is not supported by default with q35 which causes issues for some older operating systems if you use config drives. with all that said we do want to eventully make q35 the default in nova but you just need to be aware that changing that has lots of other side effects which is why we have not done it yet. q35 is required for many new feature and is supported but its just not the default. > > > > > > > ok ozzzo one of the devices is called "virtio memory balloon" > > > > [13:18:12] do you see that? > > > > [13:18:21] yes > > > > [13:18:47] i suggest you google that and read about what it > > does - i think it would > > > > [13:19:02] be worth trying to disable that device on your > > larger vm to see what happens > > > > [13:19:18] ok I will try that, thank you > > > > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC > > (Quit: Leaving) > > > > [13:21:45] * Sheogorath[m] > > (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > > > > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the > > very old 440FX and there's a newer model of hardware available that > > might be worth checking > > > > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > > > > [13:22:32] <@TrevorH> I had one of those in about 1996 > > > > [0] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5840-L5852 > [1] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.mem_stats_period_seconds > [2] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.hw_machine_type > [3] https://bugs.launchpad.net/nova/+bug/1780138 > From Albert.Braden at synopsys.com Mon Feb 3 23:57:28 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 3 Feb 2020 23:57:28 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't seen any OOM errors. Where should I look for those? 
-----Original Message----- From: Sean Mooney Sent: Monday, February 3, 2020 2:47 PM To: Albert Braden ; OpenStack Discuss ML Subject: Re: Virtio memory balloon driver On Mon, 2020-02-03 at 21:36 +0000, Albert Braden wrote: > When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at > ffff988b19478000" > > I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). This is > the list: > > [root at alberttest1 ~]# lspci > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > [root at alberttest1 ~]# lsusb > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled > around and found this: > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=uEnvgAhTPKxJpvz6a3bQisI9406ul8Q2SSHDCV1lqvU&e= > > It looks like memory ballooning is deprecated. How can I get rid of the driver? https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=uEnvgAhTPKxJpvz6a3bQisI9406ul8Q2SSHDCV1lqvU&e= states that no qemu that exists today implements that feature but the fact you see it in lspci seams to be in conflict with that. there are several refernce to the feature in later release of qemu and it is documented in libvirt https://urldefense.proofpoint.com/v2/url?u=https-3A__libvirt.org_formatdomain.html-23elementsMemBalloon&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=DSej4fERS8HGIYb7CaIkbVBpssWtSxCbBRxukkAH0rI&e= there is no way to turn it off specificly currently and im not aware of it being deprecated. the guest will not interact witht he vitio memory balloon by default. it is there too allow the guest to free memory and retrun it to the host to allow copperation between the guests and host to enable memory oversubscription. i belive this normally need the qemu guest agent to be deploy to work fully. with a 1.4TB vm how much memory have you reserved on the host. qemu will need memory to implement the vm emulation and this tends to increase as the guess uses more resouces. my first incliantion would be to check it the vm was killed as a result of a OOM event on the host. 
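For reference, a minimal way to check for such an OOM kill is to search the hypervisor's kernel log from around the time the guest died; the commands below are only a sketch and assume shell access to the compute node that hosted the instance (log locations and the qemu process name vary slightly by distro):

dmesg -T | egrep -i 'out of memory|oom-killer|killed process'
journalctl -k | egrep -i 'out of memory|oom'

If a qemu/qemu-kvm process shows up as the killed process, the host simply ran out of memory for the 1.4T guest plus qemu's own overhead, and more memory needs to be reserved for the host.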
From gagehugo at gmail.com Tue Feb 4 00:57:38 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 3 Feb 2020 18:57:38 -0600 Subject: [security] Security SIG Newsletter - Jan 2020 Message-ID: Hope everyone's 2020 is going good so far, here's the list of updates from the Security SIG. Overall Jan was a pretty quiet month, only have a few update items. #Month Jan 2020 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Updates - https://review.opendev.org/#/c/678426/ - Update to vmt policy, currently up for formal TC review - https://bugs.launchpad.net/nova/+bug/1492140 - Updates to stable branches - https://bugs.launchpad.net/neutron/+bug/1732067 - Backports to stable branches may require a configuration change #VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Feb 4 01:00:03 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 3 Feb 2020 19:00:03 -0600 Subject: [security] No Security SIG Meeting - Feb 06th Message-ID: The Security SIG won't be meeting this Thursday, Feb 06th, we will be back next week. For any questions please feel free to ask in #openstack-security. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Tue Feb 4 01:23:45 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 4 Feb 2020 01:23:45 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> Message-ID: I set mem_stats_period_seconds = 0 in nova.conf on controllers and hypervisors, and restarted nova services, and then built another VM, but it still has the balloon device: albertb at alberttest4:~ $ lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon I'll try the q35 setting now. -----Original Message----- From: Sean Mooney Sent: Monday, February 3, 2020 2:56 PM To: Clark Boylan ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Mon, 2020-02-03 at 14:11 -0800, Clark Boylan wrote: > On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > > BUG: unable to handle kernel paging request at ffff988b19478000” > > > > > > I asked in #centos and they asked me to show a list of devices from a > > working VM (if I use 720G RAM it works). 
This is the list: > > > > > > [root at alberttest1 ~]# lspci > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > > [Natoma/Triton II] (rev 01) > > > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > > > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > > > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > > > [root at alberttest1 ~]# lsusb > > > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > > > > They suspect that the “Virtio memory balloon” driver is causing the > > problem, and that we should disable it. I googled around and found this: > > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=GWJcQEqsYfdcWE_hfTBSzWDxJYvLhAtPWlFz1EmzJKY&e= > > > > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? > > Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0. > The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the > instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage > statistics. i forgot about that option. we had talked bout disableing the stats by default at one point. downstream i think we do at least via config on realtime hosts as we found the stat collect causes latency spikes. > > > > > > > Also they complained about my host bridge device; they say that we > > should have a newer one: > > > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > > > Where can I specify the host bridge? > > For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35. yes but if you enable q35 you neeed to also be aware that unlike the pc i440 machine type only 1 addtion pci slot will be allocated so if you want to allow attaching more then one volume or nic after teh vm is booted you need to adjust https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_latest_configuration_config.html-23libvirt.num-5Fpcie-5Fports&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=y_xdeWDGbFM4fAr32iEVEER8d6hVZunyvwne1QJfOME&e= . the more addtional pcie port you enable the more memory is required by qemu regardless of if you use them and by default even without allocating more pcie port qemu uses more memory in q35 mode then when using the pc machine type. you should also be aware that ide bus is not supported by default with q35 which causes issues for some older operating systems if you use config drives. 
with all that said we do want to eventully make q35 the default in nova but you just need to be aware that changing that has lots of other side effects which is why we have not done it yet. q35 is required for many new feature and is supported but its just not the default. > > > > > > > ok ozzzo one of the devices is called "virtio memory balloon" > > > > [13:18:12] do you see that? > > > > [13:18:21] yes > > > > [13:18:47] i suggest you google that and read about what it > > does - i think it would > > > > [13:19:02] be worth trying to disable that device on your > > larger vm to see what happens > > > > [13:19:18] ok I will try that, thank you > > > > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC > > (Quit: Leaving) > > > > [13:21:45] * Sheogorath[m] > > (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > > > > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the > > very old 440FX and there's a newer model of hardware available that > > might be worth checking > > > > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > > > > [13:22:32] <@TrevorH> I had one of those in about 1996 > > > > [0] https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5840-2DL5852&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=H2dx7K2OyyHfGOjaQ51CGvU0308JWXFBp_80QuxCAPw&e= > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_train_configuration_config.html-23libvirt.mem-5Fstats-5Fperiod-5Fseconds&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=FWkMnQE0rIldStIjTeTrXlBoCR0Bb06TqNQpjQwwXuM&e= > [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_train_configuration_config.html-23libvirt.hw-5Fmachine-5Ftype&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=p4N-kGlblu2E47dPo8qYHl2hv4BPROlLoM_-5YNbAzc&e= > [3] https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_nova_-2Bbug_1780138&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=wZjiqcg-XvVbIVgTpHRsbPVuZd1K0mM4-BZ6P0JNMj8&e= > From Tushar.Patil at nttdata.com Tue Feb 4 07:53:45 2020 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Tue, 4 Feb 2020 07:53:45 +0000 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? Message-ID: Hi All, In tacker project, we are using heat API to create stack. Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. So internally heat will create two nested stacks and add following resources to it:- child stack1 VDU1 - OS::Nova::Server CP1 - OS::Neutron::Port VDU2 - OS::Nova::Server CP2- OS::Neutron::Port child stack2 VDU1 - OS::Nova::Server CP1 - OS::Neutron::Port VDU2 - OS::Nova::Server CP2- OS::Neutron::Port Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. 
Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. My question is after the stack is created for the first time, will it ever change the nested child stack id? Thank you. Regards, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From hberaud at redhat.com Tue Feb 4 08:47:41 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 4 Feb 2020 09:47:41 +0100 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> References: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> Message-ID: Thanks for you heads-up, I'm aware of the "drop-py27-support" topic and I don't want to duplicate the work, I just want to simplify the releasing steps of oslo projects for oslo maintainers and offer to us a big picture of released projects where support was dropped. Thanks for the tips but we can't track all the projects managed by oslo team with filters: - openstack/microversion-parse - openstack/debtcollector - openstack/pbr - ... - openstack/tooz All the patches on oslo scope related to the drop of the py27 support use the topic "drop-py27-support" but to track our releasing of these project where the support was dropped and to isolate this part I preferred use "oslo_drop_py2_support". In other words only our patches against openstack/releases use this topic. Sorry if I misleaded some of you with that. Le lun. 3 févr. 2020 à 18:21, Ghanshyam Mann a écrit : > ---- On Mon, 03 Feb 2020 10:32:08 -0600 Herve Beraud > wrote ---- > > The goal of this thread is to track oslo releases related to drop of > py2.7 support. > > By creating this thread I want to isolate the oslo status from the > "drop-py27-support" to help us to track our advancement internally in oslo. > > You can still track the oslo related work with the original topic by > adding the extra query for project matching string. > - > https://review.opendev.org/#/q/topic:drop-py27-support+(status:open+OR+status:merged)+projects:openstack/oslo > > Main idea to use the single topic "drop-py27-support" for this goal is to > track it on OpenStack level or avoid duplicating the work etc. > > -gmann > > > > > Le lun. 3 févr. 
2020 à 17:02, Herve Beraud a > écrit : > > > > > > -- > > Hervé BeraudSenior Software Engineer > > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > Hello, > > FYI you can track the dropping of py2.7 support in oslo by using: > https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > > Topic: oslo_drop_py2_support > > We release a major version each time an oslo projects drop the py2.7 > support. > > -- Hervé BeraudSenior Software Engineer > > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeepantil at gmail.com Tue Feb 4 08:37:20 2020 From: pradeepantil at gmail.com (Pradeep Antil) Date: Tue, 4 Feb 2020 14:07:20 +0530 Subject: RDO OpenStack Repo for CentOS 8 Message-ID: Hi Folks, I am trying to deploy single node openstack on CentOS 8 using packstack, but it seems like CentOS 8 packages are not updated in repo. 
Has anyone tried this before on centos 8 , if yes what repository should I use ? [root at xxxxx~]# dnf install -y https://www.rdoproject.org/repos/rdo-release.rpm Last metadata expiration check: 0:02:17 ago on Tue 04 Feb 2020 03:07:49 AM EST. rdo-release.rpm 936 B/s | 6.4 kB 00:07 Dependencies resolved. ============================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================== Installing: rdo-release noarch stein-3 @commandline 6.4 k Transaction Summary ============================================================================================================================== Install 1 Package Total size: 6.4 k Installed size: 3.1 k Downloading Packages: Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : rdo-release-stein-3.noarch 1/1 Verifying : rdo-release-stein-3.noarch 1/1 Installed: rdo-release-stein-3.noarch Complete! [root at xxxx ~]# [root at xxxx ~]# [root at xxxx ~]# dnf install -y openstack-packstack RDO CentOS-7 - QEMU EV 42 kB/s | 35 kB 00:00 OpenStack Stein Repository 1.3 MB/s | 4.1 MB 00:03 Last metadata expiration check: 0:00:01 ago on Tue 04 Feb 2020 03:12:15 AM EST. Error: Problem: cannot install the best candidate for the job - nothing provides python-netifaces needed by openstack-packstack-1:14.0.0-1.el7.noarch - nothing provides PyYAML needed by openstack-packstack-1:14.0.0-1.el7.noarch - nothing provides python-docutils needed by openstack-packstack-1:14.0.0-1.el7.noarch (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) [root at xxxxx ~]# -- Best Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Feb 4 10:27:25 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 4 Feb 2020 11:27:25 +0100 Subject: [rdo-dev] RDO OpenStack Repo for CentOS 8 In-Reply-To: References: Message-ID: Hi, RDO on CentOS8 is still a work in progress, there is no rdo-release rpm for CentOS8 yet as we are not providing signed packages in CentOS mirrors yet. However, there are available unsigned RDO Trunk repos if you want to start testing already, you can configure them by adding following files to /etc/yum.repos.d. https://trunk.rdoproject.org/centos8-train/delorean-deps.repo https://trunk.rdoproject.org/centos8-train/puppet-passed-ci/delorean.repo If you prefer to use packages from master branches for testing instead of train: https://trunk.rdoproject.org/centos8-master/delorean-deps.repo https://trunk.rdoproject.org/centos8-master/puppet-passed-ci/delorean.repo That should work. Best regards, Alfredo On Tue, Feb 4, 2020 at 9:38 AM Pradeep Antil wrote: > Hi Folks, > > I am trying to deploy single node openstack on CentOS 8 using packstack, > but it seems like CentOS 8 packages are not updated in repo. > > Has anyone tried this before on centos 8 , if yes what repository should I > use ? > > [root at xxxxx~]# dnf install -y > https://www.rdoproject.org/repos/rdo-release.rpm > Last metadata expiration check: 0:02:17 ago on Tue 04 Feb 2020 03:07:49 AM > EST. > rdo-release.rpm > 936 B/s | 6.4 kB 00:07 > Dependencies resolved. 
> > ============================================================================================================================== > Package Architecture Version > Repository Size > > ============================================================================================================================== > Installing: > rdo-release noarch stein-3 > @commandline 6.4 k > > Transaction Summary > > ============================================================================================================================== > Install 1 Package > > Total size: 6.4 k > Installed size: 3.1 k > Downloading Packages: > Running transaction check > Transaction check succeeded. > Running transaction test > Transaction test succeeded. > Running transaction > Preparing : > 1/1 > Installing : rdo-release-stein-3.noarch > 1/1 > Verifying : rdo-release-stein-3.noarch > 1/1 > > Installed: > rdo-release-stein-3.noarch > > Complete! > [root at xxxx ~]# > [root at xxxx ~]# > [root at xxxx ~]# dnf install -y openstack-packstack > RDO CentOS-7 - QEMU EV > 42 kB/s | 35 kB 00:00 > OpenStack Stein Repository > 1.3 MB/s | 4.1 MB 00:03 > Last metadata expiration check: 0:00:01 ago on Tue 04 Feb 2020 03:12:15 AM > EST. > Error: > Problem: cannot install the best candidate for the job > - nothing provides python-netifaces needed by > openstack-packstack-1:14.0.0-1.el7.noarch > - nothing provides PyYAML needed by > openstack-packstack-1:14.0.0-1.el7.noarch > - nothing provides python-docutils needed by > openstack-packstack-1:14.0.0-1.el7.noarch > (try to add '--skip-broken' to skip uninstallable packages or '--nobest' > to use not only best candidate packages) > [root at xxxxx ~]# > > -- > Best Regards > Pradeep Kumar > _______________________________________________ > dev mailing list > dev at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 4 11:54:39 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Feb 2020 11:54:39 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: Message-ID: <20200204115438.vmdtuidaklmjbhkh@yuggoth.org> On 2020-02-03 13:50:03 +0000 (+0000), Srinivas Dasthagiri wrote: > We are working on Kaminario CI fresh configuration(since it is too > old, it has broken). We have communicated with OpenStack-infra > community for the suggestions and documentation. One of the > community member suggested us to go with manual CI > configuration(Not third party CI) instead of CI configuration with > puppet architecture(Since it is moving to Ansible). But we did not > get documents for all other CI components except ZuulV3 from > community. [...] To clarify that recommendation, it was to build a third-party CI system by reading the documentation for the components you're going to use and understanding how they work. The old Puppet-based documentation you're referring to was written by some operators of third-party CI systems but not kept updated, so the versions of software it would need if you follow it would be years-old and in some cases (especially Jenkins and associated plug-ins) dangerously unsafe to connect to the internet due to widely-known security vulnerabilities in the versions you'd have to use. 
Modern versions of the tools you would want to use are well-documented, there's just no current document explaining exactly how to use them together for the exact situation you're in without having to understand how the software works. Many folks in your situation seem to want someone else to provide a simple walk-through for them, so until someone who is in your situation (maybe you?) takes the time to gain familiarity with recent versions of CI software and publish some documentation on how you got it communicating correctly with OpenDev's Gerrit deployment, such a walk-through is not going to exist. But even that, if not kept current, will quickly fall stale: all of those components, including Gerrit itself, need updating over time to address bugs and security vulnerabilities, and those updates occasionally come with backward-incompatible behavior changes. People maintaining these third-party CI systems are going to need to *stay* familiar with the software they're running, keep on top of necessary behavior and configuration changes over time, and update any new walk-through document accordingly so that it doesn't wind up in the same state as the one we had. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue Feb 4 12:00:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Feb 2020 12:00:59 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > seen any OOM errors. Where should I look for those? [...] The `dmesg` utility on the hypervisor host should show you the kernel's log ring buffer contents (the -T flag is useful to translate its timestamps into something more readable than seconds since boot too). If the ring buffer has overwritten the relevant timeframe then look for signs of kernel OOM killer invocation in your syslog or persistent journald storage. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From missile0407 at gmail.com Tue Feb 4 12:18:29 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 4 Feb 2020 20:18:29 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. Message-ID: Hi everyone, We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) site without internet. We did the shutdown few days ago since CNY holidays. Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep restarting, and we fixed by using mariadb_recovery command. After that we check the status of each services, and found that all services shown at Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, or other error found when check the downed service log. We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not updating, the last log still stayed at the date we shutdown. Logged in to RabbitMQ container and type "rabbitmqctl status" shows connection refused, and tried access its web manager from :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 listening. 
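For anyone hitting the same symptoms, the first checks worth running are roughly the following (a sketch only; "rabbitmq" is the default kolla-ansible container name, adjust to your deployment):

   docker logs --tail 100 rabbitmq
   docker exec rabbitmq rabbitmqctl cluster_status
   docker exec rabbitmq cat /etc/hosts
   cat /etc/hosts          # on the host as well

If the broker process never comes up, the container log and the hostname entries in /etc/hosts (the other controller names must still resolve) are usually where the reason shows up.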
I searched this issue on the internet but only few information about this. One of solution is delete some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. But both are not sure. Does anyone know how to solve it? Many thanks, Eddie. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Tue Feb 4 12:42:42 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Tue, 04 Feb 2020 13:42:42 +0100 Subject: [kuryr] Deprecation of Ingress support and namespace isolation In-Reply-To: <97e0d4450c2777bcc4f8f2aff39dfdb0150fbe09.camel@redhat.com> References: <97e0d4450c2777bcc4f8f2aff39dfdb0150fbe09.camel@redhat.com> Message-ID: <5e8503ed1b353b5fde7fabb78ec08d2562cde41b.camel@redhat.com> Hi, We just triggered merge of the patch removing OpenShift Routes (Ingress) support. It was not tested for a long while and we really doubt it was working. We'll be following with namespace isolation feature removal, which might be a bit more controversial, but has a really easy workaround of just using network policies to do the same. Thanks, Michał On Mon, 2020-01-20 at 17:51 +0100, Michał Dulko wrote: > Hi, > > I've decided to put up a patch [1] deprecating the aforementioned > features. It's motivated by the fact that there are better ways to do > both: > > * Ingress can be done by another controller or through cloud provider. > * Namespace isolation can be achieved through network policies. > > Both alternative ways are way better tested and there's nobody > maintaining the deprecated features. I'm open to keep them if someone > using them steps up. > > With the fact that Kuryr seems to be tied more with Kubernetes releases > than with OpenStack ones and given there will be no objections, we > might start removing the code in the Ussuri timeframe. > > [1] https://review.opendev.org/#/c/703420/ > > Thanks, > Michał > From zhangbailin at inspur.com Tue Feb 4 12:46:32 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 4 Feb 2020 12:46:32 +0000 Subject: [nova] noVNC console with password authentication Message-ID: <792398a13fa74b83b0f8c3dc6b5ec0af@inspur.com> Hi all: About https://review.opendev.org/#/c/623120/ SPEC, there are two different perspectives, one from Alex and one from SPEC author Jingyu. 1. @Jingyu’s point is add“vnc_password”to the instance’s metadata,“vnc_password”is only provided for libvirtd support. As described in SPEC, the“vnc_password”parameter is populated when the instance generates XML, and when show server details that pop the “vnc_password”from nova api to ensure its security. That we can refer to the implementation of "adminPass" to understand this method. Its advantage is that it will not break the current nova api, you only need to store“vnc_password”in the instance's metadata. The disadvantage is that“vnc_password”is in the metadata but the user cannot get it. In addition, after we are evacuate/rebuild a server that we should reset it’s“vnc_password”, or take out "vnc_password" from the original instance and write into the new instance during evacuate/rebuild. 2. @Alex’s suggestion is change the Create Console API, add“vnc_password”as a new request optional parameter to the request body, that when we request create the remote console, if the“vnc_password”is not None we will reset the server’s vnc passwd, if“vnc_password”is None, that it will use the novnc password set last time you opened the console. 
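For illustration only (the parameter name, its placement and the new microversion are exactly what is still open in the spec), a create-console request under option 2 could look roughly like:

   POST /servers/{server_id}/remote-consoles
   {
       "remote_console": {
           "protocol": "vnc",
           "type": "novnc",
           "vnc_password": "secret"
       }
   }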
The advantage is that it is more simple and convenient than storing "vnc_password" in metadata. When evacuate/rebuild, there is no need to consider the problem of "vnc_password" storage, but when we first open the console, we need to set it in the request body the value of "vnc_password". The disadvantage is that you need to add a new microversion to support this feature, which will break the current nova API (Create Console API). In addition, from the working principle of the RFB protocol, nova does not care about the "vnc_password" parameter passed in to obtain the Console URL. The verification of the vnc password is the job of the vnc server maintained by libvirtd. We look forward to completing it before the SPEC freeze, and hope to get more feedback, especially from the nova core team. SPEC link: https://review.opendev.org/#/c/623120/ brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 4 13:19:36 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 08:19:36 -0500 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: ⁹ On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > Hi everyone, > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 > Storage (Ceph OSD) > site without internet. We did the shutdown few days ago since CNY > holidays. > > Today we re-launch whole cluster back. First we met the issue that MariaDB > containers keep > restarting, and we fixed by using mariadb_recovery command. > After that we check the status of each services, and found that all > services shown at > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP > connection, > or other error found when check the downed service log. > > We tried reboot each servers but the situation still a same. Then we found > the RabbitMQ log not > updating, the last log still stayed at the date we shutdown. Logged in to > RabbitMQ container and > type "rabbitmqctl status" shows connection refused, and tried access its > web manager from > :15672 on browser just gave us "503 Service unavailable" message. > Also no port 5672 > listening. > Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). > I searched this issue on the internet but only few information about this. > One of solution is delete > some files in mnesia folder, another is remove rabbitmq container and its > volume then re-deploy. > But both are not sure. Does anyone know how to solve it? > > > Many thanks, > Eddie. > -Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 4 13:33:51 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 08:33:51 -0500 Subject: Migration to Openstack from Proxmox In-Reply-To: <7150dfca5e513345a33573f7d00c3a83@42iso.com> References: <7150dfca5e513345a33573f7d00c3a83@42iso.com> Message-ID: On Mon, Feb 3, 2020, 3:28 PM wrote: > We currently are using proxmox for a envrionment management, and while we > were looking at DFS options we came across openstack, and we've decided to > start looking at it as a replacement for proxmox. Has anyone made this > migration before? > Keep on mind that you're still going to need a storage system like Ceph separate from Openstack whereas you may be used concept being native in Proxmox. Besides that, migration is pretty straightforward. 
I assume you're already running on KVM, so you should be able to snapshot and import your current instances, volumes, etc. Currently we're running a few Windows Server 12/16/19 virtual machines, but > mostly centOS7 virtual machines, however what we are developing are > micro-services that ideally would be deployed using k8s. > Openstack and k8s are great together. Check out the Openstack Magnum project, cloudprovider-openstack, and if you're trying to so multiattach persistent volumes, Manila. Does anyone have any experience migrating between the two, or any other > recommendation when considering openstack? > Follow one of the install guides on docs.openstack.org to do a manual install so you get familiar with all the bits and bobs under the hood. After that, pick a deployment project to create your production cluster. Kolla-ansible, Openstack-ansible, Triple-O, and Juju are some popular ones. Cheers, Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Tue Feb 4 13:45:18 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 4 Feb 2020 21:45:18 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Erik, I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. -Eddie Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > >> Hi everyone, >> >> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 >> Storage (Ceph OSD) >> site without internet. We did the shutdown few days ago since CNY >> holidays. >> >> Today we re-launch whole cluster back. First we met the issue that >> MariaDB containers keep >> restarting, and we fixed by using mariadb_recovery command. >> After that we check the status of each services, and found that all >> services shown at >> Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP >> connection, >> or other error found when check the downed service log. >> >> We tried reboot each servers but the situation still a same. Then we >> found the RabbitMQ log not >> updating, the last log still stayed at the date we shutdown. Logged in to >> RabbitMQ container and >> type "rabbitmqctl status" shows connection refused, and tried access its >> web manager from >> :15672 on browser just gave us "503 Service unavailable" message. >> Also no port 5672 >> listening. >> > > > Any chance you have a NIC that didn't come up? What is in the log of the > container itself? (ie. docker log rabbitmq). > > >> I searched this issue on the internet but only few information about >> this. One of solution is delete >> some files in mnesia folder, another is remove rabbitmq container and its >> volume then re-deploy. >> But both are not sure. Does anyone know how to solve it? >> >> >> Many thanks, >> Eddie. >> > > -Erik > >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Feb 4 14:07:30 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 Feb 2020 08:07:30 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-15 2nd Update (# 2 weeks left to complete) In-Reply-To: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> References: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> Message-ID: <17010870c29.10825a9a8258458.5988453878282738658@ghanshyammann.com> ---- On Sat, 01 Feb 2020 18:36:58 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > Below is the progress on "Drop Python 2.7 Support" at end of R-15 week. > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > Highlights: > ======== > * 2 weeks left to finish the work. > > * QA tooling: > ** Tempest is dropping py3.5[1]. Tempest plugins can drop py3.5 now if they still support it. > ** Updating tox with basepython python3 [2]. > ** Pining stable/rocky testing with 23.0.0[3]. > ** Updating neutron-tempest-plugins rocky jobs to run with py3 on master and py2 on stable/rocky gate. > ** Ironic-tempest-plugin jobs are failing on uploading image on glance. Debugging in progress. > > * zipp failure fix on py3.5 job is merged. > > * 5 services listed below still not merged the patches, I request PTLs to review it on priority. > > Project wise status and need reviews: > ============================ > Phase-1 status: > The OpenStack services have not merged the py2 drop patches: > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > * Adjutant > * ec2-api > * Karbor > * Masakari > * Qinling > * Tricircle > > Phase-2 status: > This is ongoing work and I think most of the repo have patches up to review. > Try to review them on priority. If any query and I missed to respond on review, feel free to ping me > in irc. > > * Most of the tempest plugins and python client patches are good to merge. > > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open Oslo lib dropped the py2 and now would not be able to work on py2 jobs on master gate. You might see the failure if your project has not merged the patch. openstack-tox-py27 job will fail for sure. This is good time to merge your patch to drop the py2. I will not suggest capping the requirement to unblock the gate instead drop the py2 from your repo. -gmann > > > How you can help: > ============== > - Review the patches. Push the patches if I missed any repo. > > [1] https://review.opendev.org/#/c/704840/ > [2] https://review.opendev.org/#/c/704688/ > [3] https://review.opendev.org/#/c/705098/ > > -gmann > > From emccormick at cirrusseven.com Tue Feb 4 14:27:48 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 09:27:48 -0500 Subject: Migration to Openstack from Proxmox In-Reply-To: References: <7150dfca5e513345a33573f7d00c3a83@42iso.com> Message-ID: +list again :) On Tue, Feb 4, 2020, 8:46 AM wrote: > Thanks for the reply. > > So if I understand you, swift is not an acceptable storage solution to > couple with OpenStack? I also think that I missed something in your first > paragraph "whereas you may be used concept being native in Proxmox." Are > you talking about how Proxmox has the local storage on the server AND the > NFS storage? > Swift is an Openstack project so sure it's good to use with Openstack or even on it's own. It is Object Storage (like AWS S3). If you want volumes you'll want some other system like Ceph, some NFS server, etc. 
Proxmox is capable of deploying Ceph and that's what I was referring to. > All of our VM's are the standard VM. We do have a few "templates" but > those are not really anything special. I do have quite a few images > pre-built, so that is my biggest concern with a migration. > You can easily import those images into Glance (Openstack image service);and then launch instances using them. > I know, just from looking at the docs/videos out there that OpenStack is > definitely more akin to AWS than it is ProxMox, but would you say it is > worth it from a manage a multi-location hardware environment to switch to > OpenStack from proxmox? > This is mainly a matter of your use case. Openstack is complex; more complex than Proxmox by quite a bit. It is also extremely powerful and has many projects to provide lots of cloudy services. Many of those services are analogous to ones found in AWS and other public clouds. It is well suited to orchestrate large numbers of instances, volumes, networks, etc. across many hypervisors, Proxmox, as far as I recall, is more of a straight virtualization platform. I also recall it being fairly simple and reliable. If it meets your needs, don't complicate your life. You may want to sign up for a trial at an Openstack-based public cloud provider to get familiar with the functionality before committing to building your own. You can find a list here: https://www.openstack.org/passport/ Cheers, Erik Thanks! > > > Jason > > -------- Original Message -------- > Subject: Re: Migration to Openstack from Proxmox > Date: 2020-02-04 07:33 > From: Erik McCormick > To: jmacer at 42iso.com > > > > > On Mon, Feb 3, 2020, 3:28 PM wrote: > > We currently are using proxmox for a envrionment management, and while we > were looking at DFS options we came across openstack, and we've decided to > start looking at it as a replacement for proxmox. Has anyone made this > migration before? > > > Keep on mind that you're still going to need a storage system like Ceph > separate from Openstack whereas you may be used concept being native in > Proxmox. > > Besides that, migration is pretty straightforward. I assume you're already > running on KVM, so you should be able to snapshot and import your current > instances, volumes, etc. > > > > Currently we're running a few Windows Server 12/16/19 virtual machines, > but mostly centOS7 virtual machines, however what we are developing are > micro-services that ideally would be deployed using k8s. > > Openstack and k8s are great together. Check out the Openstack Magnum > project, cloudprovider-openstack, and if you're trying to so multiattach > persistent volumes, Manila. > > > Does anyone have any experience migrating between the two, or any other > recommendation when considering openstack? > > Follow one of the install guides on docs.openstack.org to do a manual > install so you get familiar with all the bits and bobs under the hood. > After that, pick a deployment project to create your production cluster. > Kolla-ansible, Openstack-ansible, Triple-O, and Juju are some popular ones. > > Cheers, > Erik > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Feb 4 16:09:25 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 4 Feb 2020 09:09:25 -0700 Subject: [tripleo] rework of triple squads and the tripleo mtg. 
Message-ID: Greetings, As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. Currently we have the following squads.. 1. upgrades 2. edge 3. integration 4. validations 5. networking 6. transformation 7. ci A reasonable update could include the following.. 1. validations 2. transformation 3. mistral-to-ansible 4. CI 5. Ceph / Integration?? maybe just Ceph? 6. others?? The squads should reflect major current efforts by the TripleO team IMHO. For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. Thanks all!!! [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Tue Feb 4 16:39:07 2020 From: johfulto at redhat.com (John Fulton) Date: Tue, 4 Feb 2020 11:39:07 -0500 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin wrote: > > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? I'm fine with "Ceph". The original intent of going from "Ceph Integration" to the more generic "Integration" was that it could include anyone using external-deploy-steps to deploy non-openstack projects with TripleO (k8s, skydive, etc). Though those things happened, we didn't really get anyone else to join the squad or update our etherpad so I'm fine with renaming it to Ceph. We're still active but our etherpad was getting old. I updated it just now. Gulio? Francesco? Alan? John > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > > For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. > > Thanks all!!! > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > From marios at redhat.com Tue Feb 4 16:48:19 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 4 Feb 2020 18:48:19 +0200 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 6:12 PM Wesley Hayutin wrote: > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. 
integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > so I agree with the notion that current efforts are discussed/reported on/summarized etc during the tripleo weekly but I'd be careful about saying 'these are the only squads'. Something like upgrades for example is still very much ongoing (though as far as I understand this is now splintered into 3 subgroups, updates, upgrades and migrations) even if they aren't reporting during the meeting as a habit. Instead of just accepting this absence and saying there is no upgrades squad, instead lets try and get them to check in to the meeting more often. I think upgrades is a special case, perhaps the above also applies to the networking squad too. Otherwise agree with your new list, but as above I'd be careful not to exclude folks that *are* working on $tripleo_stuff but for _reasons_ (likely workload/pressure) aren't coming to the tripleo weekly. just my 2c thanks > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > Thanks all!!! > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Tue Feb 4 17:14:35 2020 From: fpantano at redhat.com (Francesco Pantano) Date: Tue, 4 Feb 2020 18:14:35 +0100 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > Gulio? Francesco? Alan? > Agree here and I also updated the etherpad as well [1] w/ our current status and the open topics we still have on ceph side. Not sure if we want to use "Integration" since the topics couldn't be only ceph related but can involve other storage components. Giulio, Alan, wdyt? > > John > > > 6. others?? 
> > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! > > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Tue Feb 4 17:31:20 2020 From: kendall at openstack.org (Kendall Waters) Date: Tue, 4 Feb 2020 11:31:20 -0600 Subject: Sponsorship Prospectus is Now Live - OpenDev+PTG Vancouver 2020 Message-ID: Hi everyone, The sponsorship prospectus for the upcoming OpenDev + PTG event happening in Vancouver, BC on June 8-11 is now live! We expect this to be an important event for sponsors, and will have a place for sponsors to have a physical presence (embedded in the event rather in a separate sponsors hall), as well as branding throughout the event and the option for a keynote in the morning for headline sponsors. OpenDev + PTG is a collaborative event organized by the OpenStack Foundation (OSF) gathering developers, system architects, and operators to address common open source infrastructure challenges. OpenDev will take place June 8-10 and the PTG will take place June 8-11. Each day will be broken into three parts: Short kickoff with all attendees to set the goals for the day or discuss the outcomes of the previous day. Think of this like a mini-keynote, challenging your thoughts around the topic areas before you head into real collaborative sessions. OpenDev: Morning discussions covering projects like Airship, Ansible, Ceph, Kata Containers, Kubernetes, OpenStack, StarlingX, Zuul and more centered around one of four different topics: Hardware Automation Large-scale Usage of Open Source Infrastructure Software Containers in Production Key challenges for open source in 2020 PTG: Afternoon working sessions for project teams and SIGs to continue the morning’s discussions. The sponsorship contract will be available to sign starting tomorrow, February 5 at 9:30am PST at https://www.openstack.org/events/opendev-ptg-2020/sponsors . Please let me know if you have any questions. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kuirong.Chen at infortrend.com Tue Feb 4 09:11:04 2020 From: Kuirong.Chen at infortrend.com (=?utf-8?B?S3Vpcm9uZy5DaGVuKOmZs+WljuiejSk=?=) Date: Tue, 4 Feb 2020 09:11:04 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: Message-ID: Hi jay, Infortrend third part CI is broken for some reason, I’ll check our environment and resolve it. KuiRong Software Design Dept.II Ext. 
7125 From: Jay Bryant Sent: Thursday, January 23, 2020 3:51 AM To: openstack-discuss at lists.openstack.org; inspur.ci at inspur.com; wangyong2017 at inspur.com; Chengwei.Chou(周政緯) ; Bill.Sung(宋柏毅) ; Kuirong.Chen(陳奎融) ; ido.benda at kaminario.com; srinivasd.ctr at kaminario.com; nec-cinder-ci at istorage.jp.nec.com; silvan at quobyte.com; robert at quobyte.com; felix at quobyte.com; bjoern at quobyte.com; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... All, We once again are at the point in the release where we are talking about 3rd Party CI and what is going on for Cinder. At the moment I have analyzed drivers that have not successfully reported results on a Cinder patch in 30 or more days and have put together the following list of drivers to be unsupported in the Ussuri release: * Inspur Drivers * Infortrend * Kaminario * NEC * Quobyte * Zadara * HPE Drivers If your name is in the list above you are receiving this e-mail directly, not just through the mailing list. If you are working on resolving CI issues please let me know so we can discuss how to proceed. In addition to the fact that we will be pushing up unsupported patches for the drivers above, we have already unsupported and removed a number of drivers during this release. They are as follows: * Unsupported: * MacroSAN Driver * Removed: * ProphetStor Driver * Nimble Storage Driver * Veritas Access Driver * Veritas CNFS Driver * Virtuozzo Storage Driver * Huawei FusionStorage Driver * Sheepdog Storage Driver Obviously we are reaching the point that the number of drivers leaving the community is concerning and it has sparked discussions around the fact that maybe our 3rd Party CI approach isn't working as intended. So what do we do? Just mark drivers unsupported and no longer remove drivers? Do we restore drivers that have recently been removed? We are planning to have further discussion around these questions at our next Cinder meeting in #openstack-meeting-4 on Wednesday, 1/29/20 at 14:00 UTC. If you have thoughts or strong opinions around this topic please join us. Thank you! Jay Bryant jsbryant at electronicjungle.net IRC: jungleboyj -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 4 19:16:46 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 4 Feb 2020 11:16:46 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 Message-ID: Hello All! At last a dedicated update solely to the Contrib & PTL Docs community goal! Get excited :) At this point the goal has been accepted[1], and the template[2] has been created and merged! So, the next step is for all projects to use the cookiecutter template[2] and fill in the extra details after you generate the .rst that should auto populate some of the information. As you are doing that, please assign yourself the associated tasks in the story I created[3]. If you have any questions or concerns, please let me know! -Kendall Nelson (diablo_rojo) [1] Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [2] Docs Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viroel at gmail.com Tue Feb 4 19:33:03 2020 From: viroel at gmail.com (Douglas) Date: Tue, 4 Feb 2020 16:33:03 -0300 Subject: [manila] Manila Drivers and CI Status Message-ID: Hi all, In the past months we have been analyzing drivers that are not reporting any status on patches submitted to the master branch of the openstack/manila repository [1]. We would like to request to driver maintainers to pay attention on the following warnings: 1. CI's that are not running and qualifying changes on master branch will be marked as `Not Qualified` on Manila's wiki page [2]. This annotation should help clarify to deployers/operators what drivers have been tested. A link to this will be created within Manila's administrator and user documentation [3]. 2. Python runtimes for Train and Ussuri cycles have been published by the OpenStack TC and we expect vendor CIs to adhere to these [4][5]. That said, no CIs should be running master (Ussuri) with python < 3.6. 3. If your CI system, and recheck trigger are missing from the Third Party CI wiki page [6], please add it. As has been the norm, no vendor driver changes will be allowed to merge without Third Party CI assuring us that the change has been tested for regressions. If you have any questions/concerns regarding CI and testing your drivers, please reach out to us here (openstack-discuss at lists.openstack.org) or on #openstack-manila [1] https://docs.google.com/spreadsheets/d/1dBSCqtQKoyFMX6oWTahhP9Z133NRFW3ynezq1CItx8M/edit#gid=0 [2] https://wiki.openstack.org/wiki/Manila [3] https://docs.openstack.org/manila/latest/index.html [4] https://governance.openstack.org/tc/reference/runtimes/train.html [5] https://governance.openstack.org/tc/reference/runtimes/ussuri.html [6] https://wiki.openstack.org/wiki/ThirdPartySystems Thanks! Douglas Viroel (dviroel) -------------- next part -------------- An HTML attachment was scrubbed... URL: From doriamgray89 at gmail.com Tue Feb 4 19:57:34 2020 From: doriamgray89 at gmail.com (=?UTF-8?Q?Alain_Viv=C3=B3?=) Date: Tue, 4 Feb 2020 14:57:34 -0500 Subject: Problem with juju osd deploy Message-ID: I tried to deploy ceph-osd with juju for the openstack installation in this guide https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-juju.html but, when i depoy ceph-osd and ceph-mon, osd not acquire sdv disk, my 4 nodes have a sdb with 2 TB, but vm not add this storage. And another question, osd should not be installed on the host? -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Tue Feb 4 20:20:28 2020 From: abishop at redhat.com (Alan Bishop) Date: Tue, 4 Feb 2020 12:20:28 -0800 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano wrote: > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > >> On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin >> wrote: >> > >> > Greetings, >> > >> > As mentioned at the previous tripleo meeting [1], we're going to >> revisit the current tripleo squads and the expectations for those squads at >> the tripleo meeting. >> > >> > Currently we have the following squads.. >> > 1. upgrades >> > 2. edge >> > 3. integration >> > 4. validations >> > 5. networking >> > 6. transformation >> > 7. ci >> > >> > A reasonable update could include the following.. >> > >> > 1. validations >> > 2. transformation >> > 3. mistral-to-ansible >> > 4. CI >> > 5. Ceph / Integration?? maybe just Ceph? 
>> >> I'm fine with "Ceph". The original intent of going from "Ceph >> Integration" to the more generic "Integration" was that it could >> include anyone using external-deploy-steps to deploy non-openstack >> projects with TripleO (k8s, skydive, etc). Though those things >> happened, we didn't really get anyone else to join the squad or update >> our etherpad so I'm fine with renaming it to Ceph. We're still active >> but our etherpad was getting old. I updated it just now. > > >> Gulio? Francesco? Alan? >> > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > "Ceph integration" makes the most sense to me, but I'm fine with just naming it "Ceph" as we all know what that means. Alan > >> >> John >> >> > 6. others?? >> > >> > The squads should reflect major current efforts by the TripleO team >> IMHO. >> > >> > For the meetings, I would propose we use this time and space to give >> context to current reviews in progress and solicit feedback. It's also a >> good time and space to discuss any upstream blockers for those reviews. >> > >> > Let's give this one week for comments etc.. Next week we'll update the >> etherpad list and squads. The etherpad list will be a decent way to >> communicate which reviews need attention. >> > >> > Thanks all!!! >> > >> > [1] >> http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html >> > >> > >> >> [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Tue Feb 4 21:06:18 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 4 Feb 2020 16:06:18 -0500 Subject: Problem with juju osd deploy In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 3:00 PM Alain Vivó wrote: > > I tried to deploy ceph-osd with juju for the openstack installation in this guide > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-juju.html > Hi Alain, This guide has recently been entirely refreshed. The new changes are visible by going to: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide > but, when i depoy ceph-osd and ceph-mon, osd not acquire sdv disk, my 4 nodes have a sdb with 2 TB, but vm not add this storage. > You will need to set option 'osd-devices' (on the CLI or by a YAML file) to the devices found on your OSD nodes. The default is just /dev/vdb. > And another question, osd should not be installed on the host? If you have device /dev/sdb on machine X and /dev/sdb is listed by 'osd-devices' then deploying the ceph-osd charm on machine X should do everything for you (/dev/sdb will be initialised for use as an OSD). 
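For example (just a sketch, adjust the device list and unit count to your nodes):

   juju config ceph-osd osd-devices='/dev/sdb'

or, at deploy time:

   juju deploy -n 4 --config osd-devices='/dev/sdb' ceph-osd

The charm should then initialise /dev/sdb on each unit once the config change has been processed.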
Peter Matulis From feilong at catalyst.net.nz Tue Feb 4 21:23:38 2020 From: feilong at catalyst.net.nz (Feilong Wang) Date: Wed, 5 Feb 2020 10:23:38 +1300 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> Message-ID: <854bcaa6-784f-30e9-c208-796f7d85f7bb@catalyst.net.nz> Hi Greg, Currently the conformance test result upload is manually done by me. We didn't setup an automated pipeline for this because the openstack infra doesn't support nested visualization, so we can't run Magnum and generate the conformance test fully automated. But if StartingX can do that, it should be doable. On 31/01/20 4:44 AM, Waines, Greg wrote: > > Hello, > >   > > I am working in the OpenStack StarlingX team. > > We are working on getting StarlingX certified through the CNCF > conformance > program, https://www.cncf.io/certification/software-conformance/ . > > ( in the same way that you guys, OpenStack Magnum project,  got > certified with CNCF ) > > As you know, in order for the logo to be shown as based on > open-source, CNCF requires that the code be mirrored on github.com . > > e.g. https://github.com/openstack/magnum > >   > > The openstack foundation guys did provide some info on how to do this: > > /The further steps for the project owner to take:/ > > /* create a dedicated account for zuul/ > > /* create the individual empty repos/ > > /* add a job to each repo to do the mirroring, like:/ > > /  * https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463/ > > / / > > /Also, you can find documentation for the parent job > here: https://zuul-ci.org/docs/zuul-jobs/general-jobs.html#job-upload-git-mirror/ > >   > > ... maybe it’s cause I don’t know anything about zuul jobs, but these > instructions are not super clear to me. > >   > > */Is the person who did this for magnum available to provide some more > detailed instructions or help on doing this ?/* > >   > > Let me know ... any help is much appreciated, > > Greg. > >   > >   > -- Cheers & Best regards, Feilong Wang (王飞龙) Head of R&D Catalyst Cloud - Cloud Native New Zealand -------------------------------------------------------------------------- Tel: +64-48032246 Email: flwang at catalyst.net.nz Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Feb 4 23:50:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 4 Feb 2020 16:50:14 -0700 Subject: [tripleo] py27 tests are being removed Message-ID: Greetings, Just in case you didn't see or hear about it. TripleO is removing all the py27 tox tests [1]. See the bug for details why. We really need to get CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be the upstream CI teams focus until it's done. Thank you [1] https://review.opendev.org/#/q/topic:1861803+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Feb 5 00:42:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 Feb 2020 18:42:47 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> Message-ID: <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> I am writing at top now for easy ready. * Gate status: - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. [1] https://review.opendev.org/#/c/705089/ [2] https://review.opendev.org/#/c/705870/ -gmann ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > of 'EOLing python2 drama' in subject :). > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > to install the latest neutron-lib and failing. > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > from master and kepe testing py2 on stable bracnhes. > > > > We have two way to fix this: > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > I am trying this first[2] and testing[3]. > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > Tried option#2: > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > or distro-specific job like centos7 etc where we have < py3.6. > > Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our > CI/CD. 
But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > Testing your cloud with the latest Tempest is the best possible way. > > Going with option#1: > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > for all possible distro/py version. > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > on >py3.6 env(venv or separate node). > * Patch is up - https://review.opendev.org/#/c/704840/ > > 2.Modify Tempest tox env basepython to py3 > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > like fedora or future distro > *Patch is up- https://review.opendev.org/#/c/704688/2 > > 3. Use compatible Tempest & its plugin tag for distro having * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > in-tree plugins for their stable branch testing. > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > in neutron-vpnaas case. This will be easy for future maintenance also. > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. > > [1] https://review.opendev.org/#/c/703476/ > [2] https://review.opendev.org/#/c/703011/ > [3] https://releases.openstack.org/rocky/#rocky-tempest > > -gmann > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > [2] https://review.opendev.org/#/c/703011/ > > [3] https://review.opendev.org/#/c/703012/ > > > > > > -gmanne > > > > > > > > > > > From kazumasa.nomura.rx at hitachi.com Wed Feb 5 05:16:54 2020 From: kazumasa.nomura.rx at hitachi.com (=?iso-2022-jp?B?GyRCTG5CPE9CQDUbKEIgLyBOT01VUkEbJEIhJBsoQktBWlVNQVNB?=) Date: Wed, 5 Feb 2020 05:16:54 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? Message-ID: Hi everyone, I work in Hitachi, Ltd as cinder driver development member. I am trying to build cinder third-party CI by using Software Factory. Is there anyone already used Software Factory for cinder third-party CI? 
If you have already built it, I want to get you to give me the information how to build it. If you are trying to build it, I want to share settings and procedures for building it and cooperate with anyone trying to build it. Please contact me if you are trying to build third-party CI by using Software Factory. Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wespark at suddenlink.net Wed Feb 5 06:46:51 2020 From: wespark at suddenlink.net (wespark at suddenlink.net) Date: Tue, 04 Feb 2020 22:46:51 -0800 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: References: Message-ID: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> HI 野村和正 I have the interest to know how you are doing. can you share the info to me? Thanks. On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > I work in Hitachi, Ltd as cinder driver development member. > > I am trying to build cinder third-party CI by using Software Factory. > Is there anyone already used Software Factory for cinder third-party > CI? > > If you have already built it, I want to get you to give me the > information how to build it. > > If you are trying to build it, I want to share settings and procedures > for building it and cooperate with anyone trying to build it. > > Please contact me if you are trying to build third-party CI by using > Software Factory. > > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com From rico.lin.guanyu at gmail.com Wed Feb 5 08:07:51 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 5 Feb 2020 16:07:51 +0800 Subject: [tc] February meeting agenda Message-ID: Hello everyone, Our next meeting is happening this Thursday (the 6th), and the agenda is, as usual, on the wiki! Here is a primer of the agenda for this month: - Report on large scale sig - Report on tc/uc merge - report on the post for the analysis of the survey - Report on the convo Telemetry - Report on multi-arch SIG - report on infra liaison and static hosting - report on stable branch policy work - report on the oslo metrics project - report on the community goals for U and V, py2 drop - report on release naming - report on the ideas repo - report on charter change - Report on whether SIG guidelines worked - volunteers to represent OpenStack at the OpenDev advisory board - Report on the OSF board initiatives - Dropping side projects: using golden signals See you all in meeting:) -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Wed Feb 5 08:21:08 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 5 Feb 2020 09:21:08 +0100 Subject: [tripleo] py27 tests are being removed In-Reply-To: References: Message-ID: W dniu 05.02.2020 o 00:50, Wesley Hayutin pisze: > Just in case you didn't see or hear about it. TripleO is removing all the > py27 tox tests [1]. See the bug for details why. We really need to get > CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be > the upstream CI teams focus until it's done. You also have a job in Kolla - the only thing which blocks removal of py2 from project. 
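For reference, the typical shape of a py2.7-drop patch looks something like this (illustrative only, not the exact TripleO or Kolla changes):

   # setup.cfg (pbr style)
   [metadata]
   python-requires = >=3.6
   classifier =
       Programming Language :: Python :: 3
       Programming Language :: Python :: 3.6
       Programming Language :: Python :: 3.7

   # tox.ini
   [tox]
   envlist = py37,pep8

plus removing the openstack-tox-py27 job / py27 entries from the project's Zuul configuration.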
From witold.bedyk at suse.com Wed Feb 5 09:17:18 2020 From: witold.bedyk at suse.com (Witek Bedyk) Date: Wed, 5 Feb 2020 10:17:18 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: References: Message-ID: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> Hi, we're using ujson in Monasca for serialization all over the place. While changing it to any other alternative is probably a drop-in replacement, we have in the past chosen to use ujson because of better performance. It is of great importance, in particular in persister. Current alternatives include orjson [1] and rapidjson [2]. We're going to measure which of them works best for our use case and how much faster they are compared to standard library module. Assuming there is a significant performance benefit, is there any preference from requirements team which one to include in global requirements? I haven't seen any distro packages for any of them. [1] https://pypi.org/project/orjson/ [2] https://pypi.org/project/python-rapidjson/ Best greetings Witek On 1/31/20 9:34 AM, Radosław Piliszek wrote: > This is a spinoff discussion of [1] to attract more people. > > As the subject goes, the situation of ujson is bad. Still, monasca and > gnocchi (both server and client) seem to be using it which may break > depending on compiler. > The original issue is that the released version of ujson is in > non-spec-conforming C which may break randomly based on used compiler > and linker. > There has been no release of ujson for more than 4 years. > > Based on general project activity, Monasca is probably able to fix it > but Gnocchi not so surely... > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > -yoctozepto > From kazumasa.nomura.rx at hitachi.com Wed Feb 5 09:26:36 2020 From: kazumasa.nomura.rx at hitachi.com (=?utf-8?B?6YeO5p2R5ZKM5q2jIC8gTk9NVVJB77yMS0FaVU1BU0E=?=) Date: Wed, 5 Feb 2020 09:26:36 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: Hi Wespark, I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. If I got a beneficial information during building Software Factory, I let community members know. Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -----Original Message----- From: wespark at suddenlink.net Sent: Wednesday, February 5, 2020 3:47 PM To: 野村和正 / NOMURA,KAZUMASA Cc: openstack-discuss at lists.openstack.org Subject: Re: [cinder] anyone using Software Factory for third-party CI? HI 野村和正 I have the interest to know how you are doing. can you share the info to me? Thanks. On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > I work in Hitachi, Ltd as cinder driver development member. > > I am trying to build cinder third-party CI by using Software Factory. > Is there anyone already used Software Factory for cinder third-party > CI? > > If you have already built it, I want to get you to give me the > information how to build it. > > If you are trying to build it, I want to share settings and procedures > for building it and cooperate with anyone trying to build it. > > Please contact me if you are trying to build third-party CI by using > Software Factory. 
> > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com From mark at stackhpc.com Wed Feb 5 10:35:58 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 5 Feb 2020 10:35:58 +0000 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Sun, 2 Feb 2020 at 21:06, Neal Gompa wrote: > > On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > > wrote: > > > > > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > > >> > > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > > >> > > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > > >> > wrote: > > >> > > > > >> > > I know it was for masakari. > > >> > > Gaëtan had to grab crmsh from opensuse: > > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > >> > > > > >> > > -yoctozepto > > >> > > > >> > Thanks Wes for getting this discussion going. I've been looking at > > >> > CentOS 8 today and trying to assess where we are. I created an > > >> > Etherpad to track status: > > >> > https://etherpad.openstack.org/p/kolla-centos8 > > >> > > > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > > > I found them, thanks. > > > > > > > >> > > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > > >> code when installing packages. It often happens on the rabbitmq and > > >> grafana images. There is a prompt about importing GPG keys prior to > > >> the error. > > >> > > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > > >> > > >> Related bug report? https://github.com/containers/libpod/issues/4431 > > >> > > >> Anyone familiar with it? > > >> > > > > > > Didn't know about this issue. > > > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > > > It seems to be due to the use of a GPG check on the repo (as opposed > > to packages). DNF doesn't use keys imported via rpm --import for this > > (I'm not sure what it uses), and prompts to add the key. This breaks > > without a terminal. More explanation here: > > https://review.opendev.org/#/c/704782. > > > > librepo has its own keyring for repo signature verification. Thanks Neal. Any pointers on how to add keys to it? > > > > -- > 真実はいつも一つ!/ Always, there's only one truth! From pfb29 at cam.ac.uk Wed Feb 5 10:53:32 2020 From: pfb29 at cam.ac.uk (Paul Browne) Date: Wed, 5 Feb 2020 10:53:32 +0000 Subject: Active-Active Cinder + RBD driver + Co-ordination Message-ID: Hi list, I had a quick question about Active-Active in cinder-volume and cinder-backup stable/stein and RBD driver, if anyone can help. Using only the Ceph RBD driver for volume backends, is it required to run A-A cinder services with clustering configuration so that they form a cluster? And, if so, is an external coordinator (redis/etcd/Consul) necessary, again only using RBD driver? 
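For reference, the general shape of an active-active cinder-volume setup is the cinder.conf fragment below; the cluster name, backend section name and the etcd endpoint are illustrative placeholders rather than values from this thread, and whether a DLM is strictly required when only the RBD backend is in play is exactly the open question being asked here:

  [DEFAULT]
  # every cinder-volume host that should act as one logical service
  # sets the same cluster name
  cluster = rbd-cluster-1
  enabled_backends = rbd-backend

  [coordination]
  # tooz coordination backend (DLM); etcd3 or redis are common choices
  backend_url = etcd3+http://etcd.example.com:2379

  [rbd-backend]
  volume_driver = cinder.volume.drivers.rbd.RBDDriver
  rbd_pool = volumes
  rbd_user = cinder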
Best docs I could find on this so far were; https://docs.openstack.org/cinder/latest/contributor/high_availability.html , I support more aimed at devs/contributoers than operators, but it's not 100% clear to me on these questions Thanks, Paul -- ******************* Paul Browne Research Computing Platforms University Information Services Roger Needham Building JJ Thompson Avenue University of Cambridge Cambridge United Kingdom E-Mail: pfb29 at cam.ac.uk Tel: 0044-1223-746548 ******************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Wed Feb 5 11:33:25 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 5 Feb 2020 19:33:25 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Today I tried to recovery RabbitMQ back, but still not useful, even delete everything about data and configs for RabbitMQ then re-deploy (without destroy). And I found that the /etc/hosts on every nodes all been flushed, the hostname resolve data created by kolla-ansible are gone. Checked and found that the MAAS just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused /etc/hosts been reset everytime when boot. Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ data, so only I can do is destroy and deploy again. Fortunately this cluster was just beginning so no VM launch, and no do complex setup yet. I think the issue may solved, although still need a time to investigate. Based on this experience, need to notice about this may going to happen if using MAAS to deploy the OS. -Eddie Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > Hi Erik, > > I'm already checked NIC link and no issue found. Pinging the nodes each > other on each interfaces is OK. > And I'm not check docker logs about rabbitmq sbecause it works normally. > I'll check that out later. > > -Eddie > > Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > >> ⁹ >> >> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >> >>> Hi everyone, >>> >>> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + >>> 3 Storage (Ceph OSD) >>> site without internet. We did the shutdown few days ago since CNY >>> holidays. >>> >>> Today we re-launch whole cluster back. First we met the issue that >>> MariaDB containers keep >>> restarting, and we fixed by using mariadb_recovery command. >>> After that we check the status of each services, and found that all >>> services shown at >>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>> AMQP connection, >>> or other error found when check the downed service log. >>> >>> We tried reboot each servers but the situation still a same. Then we >>> found the RabbitMQ log not >>> updating, the last log still stayed at the date we shutdown. Logged in >>> to RabbitMQ container and >>> type "rabbitmqctl status" shows connection refused, and tried access its >>> web manager from >>> :15672 on browser just gave us "503 Service unavailable" message. >>> Also no port 5672 >>> listening. >>> >> >> >> Any chance you have a NIC that didn't come up? What is in the log of the >> container itself? (ie. docker log rabbitmq). >> >> >>> I searched this issue on the internet but only few information about >>> this. One of solution is delete >>> some files in mnesia folder, another is remove rabbitmq container and >>> its volume then re-deploy. >>> But both are not sure. Does anyone know how to solve it? >>> >>> >>> Many thanks, >>> Eddie. 
>>> >> >> -Erik >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Feb 5 11:42:44 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 11:42:44 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: > Hi Wespark, > > I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. > If I got a beneficial information during building Software Factory, I let community members know. downstream at redhat we are starting to use software factory in the compute team for our internal ci. software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system for both 1st and 3rd party ci. > > Thanks, > Kazumasa Nomura > E-mail: kazumasa.nomura.rx at hitachi.com > > -----Original Message----- > From: wespark at suddenlink.net > Sent: Wednesday, February 5, 2020 3:47 PM > To: 野村和正 / NOMURA,KAZUMASA > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [cinder] anyone using Software Factory for third-party CI? > > HI 野村和正 > > I have the interest to know how you are doing. can you share the info to me? > > Thanks. > > > On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > > Hi everyone, > > > > I work in Hitachi, Ltd as cinder driver development member. > > > > I am trying to build cinder third-party CI by using Software Factory. > > Is there anyone already used Software Factory for cinder third-party > > CI? > > > > If you have already built it, I want to get you to give me the > > information how to build it. > > > > If you are trying to build it, I want to share settings and procedures > > for building it and cooperate with anyone trying to build it. > > > > Please contact me if you are trying to build third-party CI by using > > Software Factory. > > > > Thanks, > > > > Kazumasa Nomura > > > > E-mail: kazumasa.nomura.rx at hitachi.com From tdecacqu at redhat.com Wed Feb 5 13:52:00 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 05 Feb 2020 13:52:00 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: <87zhdxuoov.tristanC@fedora> Hi, I am a member of the SF development team. We started to work on a guide to document how to setup the services for OpenStack third-party CI here: https://softwarefactory-project.io/r/17097 Feel free to reach out if you have issues with the configuration. Regards, -Tristan On Tue, Feb 04, 2020 at 22:46 wespark at suddenlink.net wrote: > HI 野村和正 > > I have the interest to know how you are doing. can you share the info to > me? > > Thanks. > > > On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi everyone, >> >> I work in Hitachi, Ltd as cinder driver development member. >> >> I am trying to build cinder third-party CI by using Software Factory. >> Is there anyone already used Software Factory for cinder third-party >> CI? 
>> >> If you have already built it, I want to get you to give me the >> information how to build it. >> >> If you are trying to build it, I want to share settings and procedures >> for building it and cooperate with anyone trying to build it. >> >> Please contact me if you are trying to build third-party CI by using >> Software Factory. >> >> Thanks, >> >> Kazumasa Nomura >> >> E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Feb 5 14:27:28 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 05 Feb 2020 14:27:28 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> Message-ID: <87wo91un1r.tristanC@fedora> On Wed, Feb 05, 2020 at 11:42 Sean Mooney wrote: > On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi Wespark, >> >> I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. >> If I got a beneficial information during building Software Factory, I let community members know. > downstream at redhat we are starting to use software factory in the compute team for our internal ci. > software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i > think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so > its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system > for both 1st and 3rd party ci. Please note that Software Factory is not just a packaged version of Zuul. It also integrates extra components such as Grafana for the monitoring and Logstash for indexing builds. We find that installing from source and configuring by hand all the services needed to operate a third party ci can be complicated. Thus Software Factory also provides a configuration playbook to deploy and configure all the services together in just a few commands. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From smooney at redhat.com Wed Feb 5 14:37:07 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 14:37:07 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <87wo91un1r.tristanC@fedora> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> <87wo91un1r.tristanC@fedora> Message-ID: <3bc440019a3c9d705d77835e5b0b406e81fe99f7.camel@redhat.com> On Wed, 2020-02-05 at 14:27 +0000, Tristan Cacqueray wrote: > On Wed, Feb 05, 2020 at 11:42 Sean Mooney wrote: > > On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: > > > Hi Wespark, > > > > > > I have started to take a look Software Factory from last week. I'm just getting started and trying to run > > > sfconfig. > > > If I got a beneficial information during building Software Factory, I let community members know. > > > > downstream at redhat we are starting to use software factory in the compute team for our internal ci. 
> > software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i > > think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so > > its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system > > for both 1st and 3rd party ci. > > Please note that Software Factory is not just a packaged version of > Zuul. It also integrates extra components such as Grafana for the > monitoring and Logstash for indexing builds. yes that is true. > > We find that installing from source and configuring by hand all the > services needed to operate a third party ci can be complicated. > Thus Software Factory also provides a configuration playbook to deploy > and configure all the services together in just a few commands. yes although i have done it 3 times, once by hand manually, once with kubernets and once with the docker compose exmaple my experince is you can do it by hand in a day. kubernetes took me about a week or two weeks to get it working and the docker comose example that zuul has which deploy gerrit,zuul,nodepool and a contianer as a static worker node can be done in like an hour. im sure Software Factory helps and i will proably try it next time but its not hard to do it manually its just involved. if software factory makes it just a few commands and supprots day two operation like reconfiguration and upgrades simpley then that is defintly added value. > > Regards, > -Tristan From aschultz at redhat.com Wed Feb 5 14:40:26 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 5 Feb 2020 07:40:26 -0700 Subject: [tripleo] py27 tests are being removed In-Reply-To: References: Message-ID: For clarification (because this has come up multiple times now), we're not officially removing py27 support yet. We're killing off the job because of incompatible versions being released and it sounds like there isn't a desire to cap it in master. Because we're still using centos7 for our CI, we still need to be able to run the actual code under python2 until we get centos8 jobs up. On Wed, Feb 5, 2020 at 1:27 AM Marcin Juszkiewicz wrote: > > W dniu 05.02.2020 o 00:50, Wesley Hayutin pisze: > > > Just in case you didn't see or hear about it. TripleO is removing all the > > py27 tox tests [1]. See the bug for details why. We really need to get > > CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be > > the upstream CI teams focus until it's done. > > You also have a job in Kolla - the only thing which blocks removal of > py2 from project. > From gmann at ghanshyammann.com Wed Feb 5 15:15:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 09:15:58 -0600 Subject: [all] Gate status: Stable/ocata|pike|queens|rocky is broken: Avoid recheck Message-ID: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> Hello Everyone, Stable/ocata, pike, queens, rocky gate is broken now due to Temepst dependency require >=py3.6. I summarized the situation in ML[1]. Do not recheck on failed patches of those branches until the job is explicitly disabling Tempest. Fixes are in progress, I will update the status here once fixes are merged. 
[1]http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012371.html -gmann From ashlee at openstack.org Wed Feb 5 16:25:58 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 5 Feb 2020 10:25:58 -0600 Subject: Help Shape the Track Content for OpenDev + PTG, June 8-11 in Vancouver In-Reply-To: References: Message-ID: <9C495D22-CA8F-4030-9961-ABE72B1F7DC3@openstack.org> Hi everyone, Just a reminder to nominate yourself by February 11 if you’re interested in volunteering as an OpenDev Programming Committee member, discussion Moderator, or would like to suggest topics for moderated discussions within a particular Track. We’re looking for subject matter experts on the following OpenDev Tracks: - Hardware Automation (accelerators, provisioning hardware, networking) - Large-scale Usage of Open Source Infrastructure Software (scale pain points, multi-location, CI/CD) - Containers in Production (isolation, virtualization, telecom containers) - Key Challenges for Open Source in 2020 (beyond licensing, public clouds, ethics) Nominate yourself here: https://openstackfoundation.formstack.com/forms/opendev_vancouver2020_volunteer Please let me know if you have any questions! Thanks, Ashlee > On Jan 29, 2020, at 4:04 PM, Ashlee Ferguson wrote: > > Hi everyone, > > Hopefully by now you’ve heard about our upcoming event, OpenDev + PTG Vancouver, June 8-11, 2020 . We need your help shaping this event! Our vision is for the content to be programmed by you-- the community. OSF is looking to kick things off by selecting members for OpenDev Programming Committees for each Track. That Program Committee will then select Moderators who will lead interactive discussions on a particular topic within the track. Below you'll have the opportunity to nominate yourself for a position on the Programming Committee, as a Moderator, or both, as well as suggesting specific Topics within each Track. PTG programming will kick off in the coming weeks. > > If you’re interested in volunteering as an OpenDev Programming Committee member, discussion Moderator, or would like to suggest topics for moderated discussions within a particular Track, please read the details below, and then fill out this form . 
> > We’re looking for subject matter experts on the following OpenDev Tracks: > - Hardware Automation (accelerators, provisioning hardware, networking) > - Large-scale Usage of Open Source Infrastructure Software (scale pain points, multi-location, CI/CD) > - Containers in Production (isolation, virtualization, telecom containers) > - Key Challenges for Open Source in 2020 (beyond licensing, public clouds, ethics) > > OpenDev Programming Committee members will: > Work with other Committee members, which will include OSF representatives, to curate OpenDev content based on subject expertise, community input, and relevance to open source infrastructure > Promote the individual Tracks within your networks > Review community input and suggestions for Track discussions > Solicit moderators from your network if you know someone who is a subject matter expert > Ensure diversity of speakers and companies represented in your Track > Focus topics around on real-world user stories and technical, in-the-trenches experiences > > Programming Committee members need to be available during the following dates/time commitments: > 8 - 10 hours from February - May for bi-weekly calls with your Track's Programming Committee (plus a couple of OSF representatives to facilitate the call) > OpenDev, June 8 - 10, 2020 (not required, but preferred) > Programming Committee members will receive a complimentary pass to the event > > Programming Committees will be comprised of a few people per Track who will work to select a handful of topics and moderators for each Track. The exact topic counts will be determined before Committees begin deciding. > > OpenDev Discussion Moderators will > Be appointed by the Programming Committees > Facilitate discussions within a particular Track > Have adequate knowledge and experience to lead and moderate discussion around certain topics during the event > Work with Programming Committee to decide focal point of discussion > > Moderators need to be available to attend OpenDev, June 8 - 10, 2020, and will receive a complimentary pass. > > > Programming Committee nominations are open until February 11. Deadlines to volunteer to be a moderator and suggest topics will be in late February. > > > Nominate yourself or suggest discussion topics here: https://openstackfoundation.formstack.com/forms/opendev_vancouver2020_volunteer > > Cheers, > The OpenStack Foundation > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Wed Feb 5 16:46:30 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 10:46:30 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> Message-ID: <20200205164630.xidg42ego5kvg4ia@mthode.org> On 20-02-05 10:17:18, Witek Bedyk wrote: > Hi, > > we're using ujson in Monasca for serialization all over the place. While > changing it to any other alternative is probably a drop-in replacement, we > have in the past chosen to use ujson because of better performance. It is of > great importance, in particular in persister. Current alternatives include > orjson [1] and rapidjson [2]. We're going to measure which of them works > best for our use case and how much faster they are compared to standard > library module. 
> > Assuming there is a significant performance benefit, is there any preference > from requirements team which one to include in global requirements? I > haven't seen any distro packages for any of them. > > [1] https://pypi.org/project/orjson/ > [2] https://pypi.org/project/python-rapidjson/ > > Best greetings > Witek > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > This is a spinoff discussion of [1] to attract more people. > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > gnocchi (both server and client) seem to be using it which may break > > depending on compiler. > > The original issue is that the released version of ujson is in > > non-spec-conforming C which may break randomly based on used compiler > > and linker. > > There has been no release of ujson for more than 4 years. > > > > Based on general project activity, Monasca is probably able to fix it > > but Gnocchi not so surely... > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > -yoctozepto > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 in requiring glibc 2.18, released 2013, or later. orjson does not support PyPy. Given the above (I think we still need to support py35 at least) I'm not sure we can use it. Though it is my preferred other than that... (faster than ujson, more updates (last release yesterday), etc) -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Wed Feb 5 16:58:09 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 5 Feb 2020 16:58:09 +0000 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Is there a need for Ironic integration one? From: Alan Bishop Sent: Tuesday, February 4, 2020 2:20 PM To: Francesco Pantano Cc: John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks Subject: Re: [tripleo] rework of triple squads and the tripleo mtg. [EXTERNAL EMAIL] On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: On Tue, Feb 4, 2020 at 5:45 PM John Fulton > wrote: On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? I'm fine with "Ceph". The original intent of going from "Ceph Integration" to the more generic "Integration" was that it could include anyone using external-deploy-steps to deploy non-openstack projects with TripleO (k8s, skydive, etc). Though those things happened, we didn't really get anyone else to join the squad or update our etherpad so I'm fine with renaming it to Ceph. We're still active but our etherpad was getting old. I updated it just now. Gulio? Francesco? Alan? Agree here and I also updated the etherpad as well [1] w/ our current status and the open topics we still have on ceph side. 
Not sure if we want to use "Integration" since the topics couldn't be only ceph related but can involve other storage components. Giulio, Alan, wdyt? "Ceph integration" makes the most sense to me, but I'm fine with just naming it "Ceph" as we all know what that means. Alan John > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > > For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. > > Thanks all!!! > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Wed Feb 5 17:09:05 2020 From: witold.bedyk at suse.com (Witek Bedyk) Date: Wed, 5 Feb 2020 18:09:05 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205164630.xidg42ego5kvg4ia@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> Message-ID: <7498de77-6493-630e-0994-4dde4f28340b@suse.com> > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > in requiring glibc 2.18, released 2013, or later. orjson does not > support PyPy. > > Given the above (I think we still need to support py35 at least) I'm not > sure we can use it. Though it is my preferred other than that... > (faster than ujson, more updates (last release yesterday), etc) > Thanks Matthew, what's the status of Python 3.5 support? We've been dropping unit tests for 3.5 in Train [1]. [1] https://governance.openstack.org/tc/goals/selected/train/python3-updates.html Witek From mthode at mthode.org Wed Feb 5 17:27:22 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 11:27:22 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <7498de77-6493-630e-0994-4dde4f28340b@suse.com> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <7498de77-6493-630e-0994-4dde4f28340b@suse.com> Message-ID: <20200205172722.brzykcdwk44abhj4@mthode.org> On 20-02-05 18:09:05, Witek Bedyk wrote: > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > in requiring glibc 2.18, released 2013, or later. orjson does not > > support PyPy. > > > > Given the above (I think we still need to support py35 at least) I'm not > > sure we can use it. Though it is my preferred other than that... > > (faster than ujson, more updates (last release yesterday), etc) > > > > > Thanks Matthew, > what's the status of Python 3.5 support? We've been dropping unit tests for > 3.5 in Train [1]. > > [1] > https://governance.openstack.org/tc/goals/selected/train/python3-updates.html > > Witek > Looks like you are right, for some reason I thought we still supported it. 
looks good to me then -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kendall at openstack.org Wed Feb 5 17:28:19 2020 From: kendall at openstack.org (Kendall Waters) Date: Wed, 5 Feb 2020 11:28:19 -0600 Subject: [OpenStack Foundation] Sponsorship Prospectus is Now Live - OpenDev+PTG Vancouver 2020 In-Reply-To: References: Message-ID: <2FE8AB56-F93D-4082-8FAF-C16FBCE99E09@openstack.org> Hi everyone, Sponsor sales for the Vancouver OpenDev + PTG event are now open! Below are 3 easy steps to become a sponsor: Step 1 - Review the sponsorship prospectus  Step 2 - Sign the Master Event Sponsorship Agreement (This step is only for companies who are sponsoring an OpenStack Foundation event for the first time) Step 3 - Sign the OpenDev + PTG Sponsorship Agreement Please let me know if you have any questions or would like to set up a call to talk through the sponsorship options. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org > On Feb 4, 2020, at 11:31 AM, Kendall Waters wrote: > > Hi everyone, > > The sponsorship prospectus for the upcoming OpenDev + PTG event happening in Vancouver, BC on June 8-11 is now live! We expect this to be an important event for sponsors, and will have a place for sponsors to have a physical presence (embedded in the event rather in a separate sponsors hall), as well as branding throughout the event and the option for a keynote in the morning for headline sponsors. > > OpenDev + PTG is a collaborative event organized by the OpenStack Foundation (OSF) gathering developers, system architects, and operators to address common open source infrastructure challenges. OpenDev will take place June 8-10 and the PTG will take place June 8-11. > > Each day will be broken into three parts: > Short kickoff with all attendees to set the goals for the day or discuss the outcomes of the previous day. Think of this like a mini-keynote, challenging your thoughts around the topic areas before you head into real collaborative sessions. > OpenDev: Morning discussions covering projects like Airship, Ansible, Ceph, Kata Containers, Kubernetes, OpenStack, StarlingX, Zuul and more centered around one of four different topics: > Hardware Automation > Large-scale Usage of Open Source Infrastructure Software > Containers in Production > Key challenges for open source in 2020 > PTG: Afternoon working sessions for project teams and SIGs to continue the morning’s discussions. > > The sponsorship contract will be available to sign starting tomorrow, February 5 at 9:30am PST at https://www.openstack.org/events/opendev-ptg-2020/sponsors . > > Please let me know if you have any questions. > > Cheers, > Kendall > > Kendall Waters Perez > OpenStack Marketing & Events > kendall at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Albert.Braden at synopsys.com Wed Feb 5 17:33:34 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 17:33:34 +0000 Subject: Virtio memory balloon driver In-Reply-To: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> Message-ID: When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to disable it. How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? Console log: https://f.perl.bot/p/njvgbm The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 Dmesg: [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state Syslog: Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: host-cpu Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 -----Original Message----- From: Jeremy Stanley Sent: Tuesday, February 4, 2020 4:01 AM To: openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > seen any OOM errors. Where should I look for those? [...] The `dmesg` utility on the hypervisor host should show you the kernel's log ring buffer contents (the -T flag is useful to translate its timestamps into something more readable than seconds since boot too). If the ring buffer has overwritten the relevant timeframe then look for signs of kernel OOM killer invocation in your syslog or persistent journald storage. -- Jeremy Stanley From Albert.Braden at synopsys.com Wed Feb 5 17:40:59 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 17:40:59 +0000 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Eddie, This is the process that I use to reset RMQ when it fails. RMQ messages are ephemeral; losing your old RMQ messages doesn’t ruin the cluster. On master: service rabbitmq-server stop ps auxw|grep rabbit (kill any rabbit processes) rm -rf /var/lib/rabbitmq/mnesia/* service rabbitmq-server start rabbitmqctl add_user admin rabbitmqctl set_user_tags admin administrator rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" rabbitmqctl add_user openstack rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*" rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}' rabbitmqctl list_policies on slaves: rabbitmqctl stop_app If RMQ fails to reset on a slave, or fails to start after resetting, then: service rabbitmq-server stop ps auxw|grep rabbit (kill any rabbit processes) rm -rf /var/lib/rabbitmq/mnesia/* service rabbitmq-server start rabbitmqctl stop_app rabbitmqctl reset rabbitmqctl start_app rabbitmqctl stop_app rabbitmqctl join_cluster rabbit@ rabbitmqctl start_app From: Eddie Yen Sent: Wednesday, February 5, 2020 3:33 AM To: openstack-discuss Subject: Re: [kolla] All services stats DOWN after re-launch whole cluster. Today I tried to recovery RabbitMQ back, but still not useful, even delete everything about data and configs for RabbitMQ then re-deploy (without destroy). And I found that the /etc/hosts on every nodes all been flushed, the hostname resolve data created by kolla-ansible are gone. Checked and found that the MAAS just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused /etc/hosts been reset everytime when boot. 
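If the cloud-init setting does turn out to be the cause, one way to keep /etc/hosts from being rewritten on MAAS-deployed nodes is a drop-in that overrides it; the file name below is only an illustrative choice (numbered high so it wins over the MAAS-provided config):

  # /etc/cloud/cloud.cfg.d/99-disable-manage-etc-hosts.cfg
  # stop cloud-init from regenerating /etc/hosts at boot, so the
  # entries written by kolla-ansible survive a reboot
  manage_etc_hosts: false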
Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ data, so only I can do is destroy and deploy again. Fortunately this cluster was just beginning so no VM launch, and no do complex setup yet. I think the issue may solved, although still need a time to investigate. Based on this experience, need to notice about this may going to happen if using MAAS to deploy the OS. -Eddie Eddie Yen > 於 2020年2月4日 週二 下午9:45寫道: Hi Erik, I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. -Eddie Erik McCormick > 於 2020年2月4日 週二 下午9:19寫道: ⁹ On Tue, Feb 4, 2020, 7:20 AM Eddie Yen > wrote: Hi everyone, We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) site without internet. We did the shutdown few days ago since CNY holidays. Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep restarting, and we fixed by using mariadb_recovery command. After that we check the status of each services, and found that all services shown at Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, or other error found when check the downed service log. We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not updating, the last log still stayed at the date we shutdown. Logged in to RabbitMQ container and type "rabbitmqctl status" shows connection refused, and tried access its web manager from :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 listening. Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). I searched this issue on the internet but only few information about this. One of solution is delete some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. But both are not sure. Does anyone know how to solve it? Many thanks, Eddie. -Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Feb 5 18:24:32 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 18:24:32 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> Message-ID: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. 
looking at https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5842-L5852 it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. if that is the case we could track this as a bug and potentially backport it. > > Console log: https://f.perl.bot/p/njvgbm > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] 
> > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From gmann at ghanshyammann.com Wed Feb 5 18:39:17 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 12:39:17 -0600 Subject: [tc][uc][all] Starting community-wide goals ideas for V series Message-ID: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> Hello everyone, We are in R14 week of Ussuri cycle which means It's time to start the discussions about community-wide goals ideas for the V series. Community-wide goals are important in term of solving and improving a technical area across OpenStack as a whole. It has lot more benefits to be considered from users as well from a developers perspective. See [1] for more details about community-wide goals and process. We have the Zuulv3 migration goal already accepted and pre-selected for v cycle. If you are interested in proposing a goal, please write down the idea on this etherpad[2] - https://etherpad.openstack.org/p/YVR-v-series-goals Accordingly, we will start the separate ML discussion over each goal idea. Also, you can refer to the backlogs of community-wide goals from this[3] and ussuri cycle goals[4]. NOTE: TC has defined the goal process schedule[5] to streamlined the process and ready with goals for projects to plan/implement at the start of the cycle. I am hoping to start that schedule for W cycle goals. [1] https://governance.openstack.org/tc/goals/index.html [2] https://etherpad.openstack.org/p/YVR-v-series-goals [3] https://etherpad.openstack.org/p/community-goals [4] https://etherpad.openstack.org/p/PVG-u-series-goals [5] https://governance.openstack.org/tc/goals/#goal-selection-schedule -gmann From jeremyfreudberg at gmail.com Wed Feb 5 18:39:32 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 5 Feb 2020 13:39:32 -0500 Subject: [sahara] Cancelling Sahara meeting February 6 Message-ID: Hi all, There will be no Sahara meeting 2020-02-06, the principal reason being that I am on PTO. Holler if you need anything. Thanks, Jeremy From Rajini.Karthik at Dell.com Wed Feb 5 19:36:36 2020 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Wed, 5 Feb 2020 19:36:36 +0000 Subject: [Ironic] [Sushy] [CI] 3rd Party CI Message-ID: <264e9786861f4e55ad614f990d4585aa@AUSX13MPS308.AMER.DELL.COM> Announcing Dell 3rd Party Ironic CI is now functional for Openstack/Sushy library. Please take a look at https://review.opendev.org/#/c/705289/ Regards Rajini -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Feb 5 19:48:49 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 19:48:49 +0000 Subject: Virtio memory balloon driver In-Reply-To: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: Thanks Sean! Should I start a bug report for this? 
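For anyone following along, the libvirt-level knob being discussed is the memballoon model in the guest's domain XML: an explicit 'none' entry is what stops libvirt from auto-adding the default virtio balloon device. A hand-edited sketch of the relevant fragment, useful only as a temporary experiment since nova regenerates the XML on hard reboot, rebuild or migration and would undo it:

  <devices>
    ...
    <!-- explicitly disable the balloon; without an entry libvirt
         auto-adds <memballoon model='virtio'/> for kvm/qemu guests -->
    <memballoon model='none'/>
  </devices>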
-----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. looking at https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5842-2DL5852&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=_kEGfZqTkPscjy0GJB2N_WBXRJPEt2400ADV12hhxR8&e= it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. if that is the case we could track this as a bug and potentially backport it. > > Console log: https://urldefense.proofpoint.com/v2/url?u=https-3A__f.perl.bot_p_njvgbm&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=5J3hH_mxdtOyNFqbW6j9yGiSyMXmhy3bXrmXRHkJ9I0&e= > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: 
link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From gmann at ghanshyammann.com Wed Feb 5 20:28:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 14:28:28 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> Message-ID: <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > I am writing at top now for easy ready. > > * Gate status: > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > the tempest tox env with master u-c needs to be fixed[2]. While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems to cap the upper-constraint like it was proposed by chandan[1]. NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain such cap only for Tempest & its plugins till stable/rocky EOL. 
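As a rough illustration of what that cap amounts to in practice, building the Tempest venv against the stable branch constraints instead of master upper-constraints keeps py3.6-only releases from being pulled in; the venv path is arbitrary and the constraints URL is assumed to be the published stable/rocky redirect, while the real mechanism lives in devstack and the run-tempest role referenced below:

  python3 -m venv /opt/tempest-venv
  /opt/tempest-venv/bin/pip install \
      -c https://releases.openstack.org/constraints/upper/rocky \
      tempest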
Background on this issue:
--------------------------------
Devstack installs Tempest and its plugins in a venv using master u-c from tox.ini[2]. With Tempest pinned on a stable branch, devstack can use the stable branch u-c[3], which is all set, and Tempest is installed in the venv with the stable branch u-c. But while running the tests, the Tempest run-tempest role recreates the tox env using master u-c. I am fixing that in the run-tempest role[4] but that cannot be fixed for old Tempest tags, so stable branch testing is still broken.
-------------------------------------------
[1] https://review.opendev.org/#/c/705685/ [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 [3] https://review.opendev.org/#/c/705089 [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml
-gmann
> - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > - The Tempest fix for >py3.6 is merged, so your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > [1] https://review.opendev.org/#/c/705089/ > [2] https://review.opendev.org/#/c/705870/ > > -gmann
> > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > of 'EOLing python2 drama' in subject :). > > > > > > The neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped py2. neutron-lib 2.0.0 > > > is py3 only and u-c on master has been updated to 2.0.0. > > > > > > Tempest and all its plugins use the master u-c for stable branch testing, which is the valid way because master Tempest & plugins > > > are used to test the stable branches and so need u-c from master itself. These failed jobs also used master u-c[1], which is trying > > > to install the latest neutron-lib and failing. > > > > > > This is not just a neutron tempest plugin issue; it affects all Tempest plugin jobs. Any lib used by Tempest or plugins can drop > > > py2 now and lead to this failure. It's just that neutron-lib raised the flag first, before I planned to work on the Tempest & plugins jobs for the py2 drop > > > from master while keeping py2 testing on stable branches. > > > > > > We have two ways to fix this: > > > > > > 1. Separate out the testing of python2 jobs with a python2-supported version of the Tempest plugins and with the respective u-c. > > > For example, test all python2 jobs with the tempest plugin train version (or the latest version that still supports py2) and > > > use u-c from stable/train. This will cap Tempest & plugins with the respective u-c for stable branch testing. > > > > > > 2. The second option is to install Tempest and the plugins in a py3 env on py2 jobs also. This should be an easy and preferred way. > > > I am trying this first[2] and testing[3]. > > > > > > > I am summarizing what Tempest and its plugins should be doing / have done for these incompatibility issues. > > > > Tried option#2: > > We tried to install py3.6 (from a PPA, which is not the best solution) in the Tempest venv on Ubuntu Xenial to fix > > bugs like 1860033 [1]. This needs Tempest to bump the py version for the tox env to 3.6[2]. But that broke the distro > > jobs where py > 3.6 was available, like Fedora (Bug 1861308). This can be fixed by making basepython python3 > > plus more hacks, for example setting the python alias on such distros. It can be stable common jobs running on Xenial > > or distro-specific jobs like centos7 etc. where we have < py3.6. > > > > Overall this option did not work well as it needs a lot of hacks depending on the distro. I am dropping this option for our > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > Testing your cloud with the latest Tempest is the best possible way. > > > > Going with option#1: > > IMO, this is a workable option in the current situation. Below is the plan to make Tempest and its plugins work > > for all possible distro/py versions. > > > > 1. Drop py3.5 from Tempest (also from its plugins if any officially support it). > > * Tempest's and its plugins' dependencies are becoming python-requires >=3.6, so Tempest and the plugins themselves cannot support py3.5. > > * 'Tempest cannot support py3.5' means you cannot run Tempest/plugins in a py3.5 env. But you can still test a py3.5 cloud from Tempest > > on a >py3.6 env (venv or separate node). > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > 2. Modify Tempest tox env basepython to py3 > > * Let's not pin Tempest to py3.6. Any python version >=py3.6 should work fine for a distro that does not have py3.6, > > like Fedora or future distros. > > * Patch is up - https://review.opendev.org/#/c/704688/2 > > > > 3. Use a compatible Tempest & plugin tag for distros having <py3.6 > > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or py3.5 jobs. > > * If 23.0.0 is not compatible with the stable branch u-c or any tempest plugin tag then the Tempest tag corresponding to > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > * We have used the gerrit-style way to pin Tempest in the past but we are trying the tag name now - https://review.opendev.org/#/c/704899/ > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > We have a few cases like neutron-vpnaas stable/rocky where the in-tree plugin is used for stable testing. amotoki brought > > this up yesterday. The neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now, but the stable/rocky > > jobs still use the in-tree plugin, which is causing issues due to incompatible py versions between devstack and the Tempest tox env (which > > moved to py3). These jobs use tox -e all-plugin for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > in-tree plugins for their stable branch testing. > > We can solve this by pinning Tempest also + a few more hacks (which I am sure will be required). But the best and easiest way to fix these > > stable branch jobs is to migrate them to the tox 'all' env with separate-repo plugins, for example neutron-tempest-plugin > > in the neutron-vpnaas case (see the example at the end of this note). This will also be easier for future maintenance. > > > > Anything from stable/stein onwards is all good till now, so we will keep using master Tempest/Plugins for their testing.
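> > To make point 4 concrete (the plugin name and test regex below are only illustrative), a stable job that today runs the in-tree plugin through site-packages with something like:
> >     tox -e all-plugin -- vpnaas
> > would instead have devstack install the separate plugin repo into the Tempest venv (with devstack that is what pointing TEMPEST_PLUGINS at the neutron-tempest-plugin checkout does) and then run:
> >     tox -e all -- vpnaas
> > so the job no longer depends on the in-tree plugin or on site-packages.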
> > > > [1] https://review.opendev.org/#/c/703476/ > > [2] https://review.opendev.org/#/c/703011/ > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > -gmann > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > [2] https://review.opendev.org/#/c/703011/ > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > From mtreinish at kortar.org Wed Feb 5 21:38:26 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 5 Feb 2020 16:38:26 -0500 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205164630.xidg42ego5kvg4ia@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> Message-ID: <20200205213826.GA898532@zeong> On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > On 20-02-05 10:17:18, Witek Bedyk wrote: > > Hi, > > > > we're using ujson in Monasca for serialization all over the place. While > > changing it to any other alternative is probably a drop-in replacement, we > > have in the past chosen to use ujson because of better performance. It is of > > great importance, in particular in persister. Current alternatives include > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > best for our use case and how much faster they are compared to standard > > library module. > > > > Assuming there is a significant performance benefit, is there any preference > > from requirements team which one to include in global requirements? I > > haven't seen any distro packages for any of them. > > > > [1] https://pypi.org/project/orjson/ > > [2] https://pypi.org/project/python-rapidjson/ > > > > Best greetings > > Witek > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > This is a spinoff discussion of [1] to attract more people. > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > gnocchi (both server and client) seem to be using it which may break > > > depending on compiler. > > > The original issue is that the released version of ujson is in > > > non-spec-conforming C which may break randomly based on used compiler > > > and linker. > > > There has been no release of ujson for more than 4 years. > > > > > > Based on general project activity, Monasca is probably able to fix it > > > but Gnocchi not so surely... > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > -yoctozepto > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > in requiring glibc 2.18, released 2013, or later. orjson does not > support PyPy. > > Given the above (I think we still need to support py35 at least) I'm not > sure we can use it. Though it is my preferred other than that... > (faster than ujson, more updates (last release yesterday), etc) It's also probably worth looking at the thread on this from August [1] discussing potentially using orjson. While they've added sdists since that original discussion (because of the pyo3-pack support being added) building it locally requires having rust nightly installed. This means for anyone on a non-x86_64 platform (including i686) will need to have rust nightly installed to pip install a package. 
Not that it's a super big burden, rustup makes it pretty easy to do, but it's a pretty uncommon thing for most people. But, I think that combined with no py35 support probably makes it a difficult thing to add to g-r. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html -Matthew Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Wed Feb 5 22:21:43 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 16:21:43 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205213826.GA898532@zeong> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> Message-ID: <20200205222143.xortzgo3zgk7d2im@mthode.org> On 20-02-05 16:38:26, Matthew Treinish wrote: > On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > > On 20-02-05 10:17:18, Witek Bedyk wrote: > > > Hi, > > > > > > we're using ujson in Monasca for serialization all over the place. While > > > changing it to any other alternative is probably a drop-in replacement, we > > > have in the past chosen to use ujson because of better performance. It is of > > > great importance, in particular in persister. Current alternatives include > > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > > best for our use case and how much faster they are compared to standard > > > library module. > > > > > > Assuming there is a significant performance benefit, is there any preference > > > from requirements team which one to include in global requirements? I > > > haven't seen any distro packages for any of them. > > > > > > [1] https://pypi.org/project/orjson/ > > > [2] https://pypi.org/project/python-rapidjson/ > > > > > > Best greetings > > > Witek > > > > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > > This is a spinoff discussion of [1] to attract more people. > > > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > > gnocchi (both server and client) seem to be using it which may break > > > > depending on compiler. > > > > The original issue is that the released version of ujson is in > > > > non-spec-conforming C which may break randomly based on used compiler > > > > and linker. > > > > There has been no release of ujson for more than 4 years. > > > > > > > > Based on general project activity, Monasca is probably able to fix it > > > > but Gnocchi not so surely... > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > > > -yoctozepto > > > > > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > in requiring glibc 2.18, released 2013, or later. orjson does not > > support PyPy. > > > > Given the above (I think we still need to support py35 at least) I'm not > > sure we can use it. Though it is my preferred other than that... > > (faster than ujson, more updates (last release yesterday), etc) > > It's also probably worth looking at the thread on this from August [1] > discussing potentially using orjson. 
While they've added sdists since that > original discussion (because of the pyo3-pack support being added) building > it locally requires having rust nightly installed. This means for anyone on > a non-x86_64 platform (including i686) will need to have rust nightly > installed to pip install a package. Not that it's a super big burden, rustup > makes it pretty easy to do, but it's a pretty uncommon thing for most people. > But, I think that combined with no py35 support probably makes it a difficult > thing to add to g-r. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html > > -Matthew Treinish Forgot about that, does it still need nightly though? I'd hope that it doesn't but wouldn't be surprised if it does. Arch support is important though, some jobs execute on ppc64 and arm64 as well iirc. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mtreinish at kortar.org Wed Feb 5 23:03:38 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 5 Feb 2020 18:03:38 -0500 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205222143.xortzgo3zgk7d2im@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> Message-ID: <20200205230338.GB898532@zeong> On Wed, Feb 05, 2020 at 04:21:43PM -0600, Matthew Thode wrote: > On 20-02-05 16:38:26, Matthew Treinish wrote: > > On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > > > On 20-02-05 10:17:18, Witek Bedyk wrote: > > > > Hi, > > > > > > > > we're using ujson in Monasca for serialization all over the place. While > > > > changing it to any other alternative is probably a drop-in replacement, we > > > > have in the past chosen to use ujson because of better performance. It is of > > > > great importance, in particular in persister. Current alternatives include > > > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > > > best for our use case and how much faster they are compared to standard > > > > library module. > > > > > > > > Assuming there is a significant performance benefit, is there any preference > > > > from requirements team which one to include in global requirements? I > > > > haven't seen any distro packages for any of them. > > > > > > > > [1] https://pypi.org/project/orjson/ > > > > [2] https://pypi.org/project/python-rapidjson/ > > > > > > > > Best greetings > > > > Witek > > > > > > > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > > > This is a spinoff discussion of [1] to attract more people. > > > > > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > > > gnocchi (both server and client) seem to be using it which may break > > > > > depending on compiler. > > > > > The original issue is that the released version of ujson is in > > > > > non-spec-conforming C which may break randomly based on used compiler > > > > > and linker. > > > > > There has been no release of ujson for more than 4 years. > > > > > > > > > > Based on general project activity, Monasca is probably able to fix it > > > > > but Gnocchi not so surely... 
> > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > > > > > -yoctozepto > > > > > > > > > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > > in requiring glibc 2.18, released 2013, or later. orjson does not > > > support PyPy. > > > > > > Given the above (I think we still need to support py35 at least) I'm not > > > sure we can use it. Though it is my preferred other than that... > > > (faster than ujson, more updates (last release yesterday), etc) > > > > It's also probably worth looking at the thread on this from August [1] > > discussing potentially using orjson. While they've added sdists since that > > original discussion (because of the pyo3-pack support being added) building > > it locally requires having rust nightly installed. This means for anyone on > > a non-x86_64 platform (including i686) will need to have rust nightly > > installed to pip install a package. Not that it's a super big burden, rustup > > makes it pretty easy to do, but it's a pretty uncommon thing for most people. > > But, I think that combined with no py35 support probably makes it a difficult > > thing to add to g-r. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html > > > > -Matthew Treinish > > Forgot about that, does it still need nightly though? I'd hope that it > doesn't but wouldn't be surprised if it does. Arch support is important > though, some jobs execute on ppc64 and arm64 as well iirc. > It does, it's because of PyO3 [1] which is the library most projects use for a python<->rust interface. PyO3 relies on some rust language features which are not part of stable yet. That being said I've got a couple rust/python projects[2][3] and have never had an issue with nightly rust, it's surprisingly stable and useable, and rustup makes installing it and keeping up to date simple. But despite that, for a project the size of OpenStack I think it's probably a bit much to ask for anyone not on x86_64 to need to have rust nightly installed just to install things via pip. -Matthew Treinish [1] https://github.com/PyO3/pyo3 [2] https://github.com/mtreinish/retworkx [3] https://github.com/mtreinish/pyrqasm -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Thu Feb 6 00:14:31 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 5 Feb 2020 19:14:31 -0500 Subject: [tripleo] deep-dive containers & tooling Message-ID: Hi folks, On Friday I'll do a deep-dive on where we are with container tools. It's basically an update on the removal of Paunch, what will change etc. I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask questions or give feedback. https://bluejeans.com/6007759543 Link of the slides: https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Thu Feb 6 00:22:06 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 08:22:06 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Albert, thanks for your process. 
I'll record these and think about how to do this on kolla if similar issue happen in the future. -Eddie Albert Braden 於 2020年2月6日 週四 上午1:41寫道: > Hi Eddie, > > > > This is the process that I use to reset RMQ when it fails. RMQ messages > are ephemeral; losing your old RMQ messages doesn’t ruin the cluster. > > > > On master: > > service rabbitmq-server stop > > ps auxw|grep rabbit > > (kill any rabbit processes) > > rm -rf /var/lib/rabbitmq/mnesia/* > > service rabbitmq-server start > > rabbitmqctl add_user admin > > rabbitmqctl set_user_tags admin administrator > > rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" > > rabbitmqctl add_user openstack > > rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*" > > rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}' > > rabbitmqctl list_policies > > > > on slaves: > > rabbitmqctl stop_app > > If RMQ fails to reset on a slave, or fails to start after resetting, then: > > service rabbitmq-server stop > > ps auxw|grep rabbit > > (kill any rabbit processes) > > rm -rf /var/lib/rabbitmq/mnesia/* > > service rabbitmq-server start > > rabbitmqctl stop_app > > rabbitmqctl reset > > rabbitmqctl start_app > > rabbitmqctl stop_app > > rabbitmqctl join_cluster rabbit@ > > rabbitmqctl start_app > > > > *From:* Eddie Yen > *Sent:* Wednesday, February 5, 2020 3:33 AM > *To:* openstack-discuss > *Subject:* Re: [kolla] All services stats DOWN after re-launch whole > cluster. > > > > Today I tried to recovery RabbitMQ back, but still not useful, even delete > everything > > about data and configs for RabbitMQ then re-deploy (without destroy). > > > > And I found that the /etc/hosts on every nodes all been flushed, the > hostname > > resolve data created by kolla-ansible are gone. Checked and found that the > MAAS > > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which > caused > > /etc/hosts been reset everytime when boot. > > > > Not sure it was a root cause or not but unfortunately I already reset > whole RabbitMQ > > data, so only I can do is destroy and deploy again. Fortunately this > cluster was just > > beginning so no VM launch, and no do complex setup yet. > > > > I think the issue may solved, although still need a time to investigate. > Based on this > > experience, need to notice about this may going to happen if using MAAS to > deploy > > the OS. > > > > -Eddie > > > > Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > > Hi Erik, > > > > I'm already checked NIC link and no issue found. Pinging the nodes each > other on each interfaces is OK. > > And I'm not check docker logs about rabbitmq sbecause it works normally. > I'll check that out later. > > > > -Eddie > > > > Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > > Hi everyone, > > > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 > Storage (Ceph OSD) > > site without internet. We did the shutdown few days ago since CNY > holidays. > > > > Today we re-launch whole cluster back. First we met the issue that MariaDB > containers keep > > restarting, and we fixed by using mariadb_recovery command. > > After that we check the status of each services, and found that all > services shown at > > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP > connection, > > or other error found when check the downed service log. > > > > We tried reboot each servers but the situation still a same. Then we found > the RabbitMQ log not > > updating, the last log still stayed at the date we shutdown. 
Logged in to > RabbitMQ container and > > type "rabbitmqctl status" shows connection refused, and tried access its > web manager from > > :15672 on browser just gave us "503 Service unavailable" message. > Also no port 5672 > > listening. > > > > > > Any chance you have a NIC that didn't come up? What is in the log of the > container itself? (ie. docker log rabbitmq). > > > > > > I searched this issue on the internet but only few information about this. > One of solution is delete > > some files in mnesia folder, another is remove rabbitmq container and its > volume then re-deploy. > > But both are not sure. Does anyone know how to solve it? > > > > > > Many thanks, > > Eddie. > > > > -Erik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 6 00:22:02 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 5 Feb 2020 19:22:02 -0500 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Of course it'll be recorded and the link will be available for everyone. On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, wrote: > Hi folks, > > On Friday I'll do a deep-dive on where we are with container tools. > It's basically an update on the removal of Paunch, what will change etc. > > I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask > questions or give feedback. > > https://bluejeans.com/6007759543 > Link of the slides: > https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Thu Feb 6 01:25:23 2020 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Feb 2020 10:25:23 +0900 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 In-Reply-To: References: Message-ID: Thank Kendall for the effort. Searchlight has landed the doc on master :D [1][2] [1] https://review.opendev.org/#/c/705968/ [2] https://docs.openstack.org/searchlight/latest/contributor/contributing.html On Wed, Feb 5, 2020 at 4:20 AM Kendall Nelson wrote: > > Hello All! > > At last a dedicated update solely to the Contrib & PTL Docs community > goal! Get excited :) > > At this point the goal has been accepted[1], and the template[2] has been > created and merged! > > So, the next step is for all projects to use the cookiecutter template[2] > and fill in the extra details after you generate the .rst that should auto > populate some of the information. > > As you are doing that, please assign yourself the associated tasks in the > story I created[3]. > > If you have any questions or concerns, please let me know! > > -Kendall Nelson (diablo_rojo) > > > [1] Goal: > https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html > [2] Docs Template: > https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst > [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 > > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From veeraready at yahoo.co.in Thu Feb 6 07:09:39 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 6 Feb 2020 07:09:39 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> Message-ID: <1416517604.208171.1580972979203@mail.yahoo.com> Hi,I am trying to run kubelet in arm64 platform1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile2.    Generated kuryr-cni-arm64 container.3.     my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) My master node in x86 installed successfully using devstack While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) COntroller logs: http://paste.openstack.org/show/789209/ Please help me to fix the issue Veera. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello at dincercelik.com Thu Feb 6 07:13:10 2020 From: hello at dincercelik.com (Dincer Celik) Date: Thu, 6 Feb 2020 10:13:10 +0300 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Eddie, Seems like an issue[1] which has been fixed previously. Could you please let me know which version are you using? -osmanlicilegi [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 > On 5 Feb 2020, at 14:33, Eddie Yen wrote: > > Today I tried to recovery RabbitMQ back, but still not useful, even delete everything > about data and configs for RabbitMQ then re-deploy (without destroy). > > And I found that the /etc/hosts on every nodes all been flushed, the hostname > resolve data created by kolla-ansible are gone. Checked and found that the MAAS > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused > /etc/hosts been reset everytime when boot. > > Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ > data, so only I can do is destroy and deploy again. Fortunately this cluster was just > beginning so no VM launch, and no do complex setup yet. > > I think the issue may solved, although still need a time to investigate. Based on this > experience, need to notice about this may going to happen if using MAAS to deploy > the OS. > > -Eddie > > Eddie Yen > 於 2020年2月4日 週二 下午9:45寫道: > Hi Erik, > > I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. > And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. > > -Eddie > > Erik McCormick > 於 2020年2月4日 週二 下午9:19寫道: > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen > wrote: > Hi everyone, > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) > site without internet. We did the shutdown few days ago since CNY holidays. > > Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep > restarting, and we fixed by using mariadb_recovery command. > After that we check the status of each services, and found that all services shown at > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, > or other error found when check the downed service log. > > We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not > updating, the last log still stayed at the date we shutdown. 
Logged in to RabbitMQ container and > type "rabbitmqctl status" shows connection refused, and tried access its web manager from > :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 > listening. > > > Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). > > > I searched this issue on the internet but only few information about this. One of solution is delete > some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. > But both are not sure. Does anyone know how to solve it? > > > Many thanks, > Eddie. > > -Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Feb 6 07:35:03 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 6 Feb 2020 08:35:03 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205230338.GB898532@zeong> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> <20200205230338.GB898532@zeong> Message-ID: Alrighty, folks. The summary is: orjson is bad due to requirement of PyO3 to run rust nightly on non-x86_64. Then what about the other contestant, rapidjson? [1] It's a wrapper over a popular C++ json library. Both wrapper and lib look alive enough to be considered serious. [1] https://pypi.org/project/python-rapidjson/ -yoctozepto From missile0407 at gmail.com Thu Feb 6 07:57:21 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 15:57:21 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Dincer, I'm using Rocky, and seems like this fix didn't merge to stable/rocky. And also what you wrote about flush host table issue in MAAS deployment. -Eddie Dincer Celik 於 2020年2月6日 週四 下午3:13寫道: > Hi Eddie, > > Seems like an issue[1] which has been fixed previously. Could you please > let me know which version are you using? > > -osmanlicilegi > > [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 > > On 5 Feb 2020, at 14:33, Eddie Yen wrote: > > Today I tried to recovery RabbitMQ back, but still not useful, even delete > everything > about data and configs for RabbitMQ then re-deploy (without destroy). > > And I found that the /etc/hosts on every nodes all been flushed, the > hostname > resolve data created by kolla-ansible are gone. Checked and found that the > MAAS > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which > caused > /etc/hosts been reset everytime when boot. > > Not sure it was a root cause or not but unfortunately I already reset > whole RabbitMQ > data, so only I can do is destroy and deploy again. Fortunately this > cluster was just > beginning so no VM launch, and no do complex setup yet. > > I think the issue may solved, although still need a time to investigate. > Based on this > experience, need to notice about this may going to happen if using MAAS to > deploy > the OS. > > -Eddie > > Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > >> Hi Erik, >> >> I'm already checked NIC link and no issue found. Pinging the nodes each >> other on each interfaces is OK. >> And I'm not check docker logs about rabbitmq sbecause it works normally. >> I'll check that out later. 
>> >> -Eddie >> >> Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: >> >>> ⁹ >>> >>> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >>> >>>> Hi everyone, >>>> >>>> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + >>>> 3 Storage (Ceph OSD) >>>> site without internet. We did the shutdown few days ago since CNY >>>> holidays. >>>> >>>> Today we re-launch whole cluster back. First we met the issue that >>>> MariaDB containers keep >>>> restarting, and we fixed by using mariadb_recovery command. >>>> After that we check the status of each services, and found that all >>>> services shown at >>>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>>> AMQP connection, >>>> or other error found when check the downed service log. >>>> >>>> We tried reboot each servers but the situation still a same. Then we >>>> found the RabbitMQ log not >>>> updating, the last log still stayed at the date we shutdown. Logged in >>>> to RabbitMQ container and >>>> type "rabbitmqctl status" shows connection refused, and tried access >>>> its web manager from >>>> :15672 on browser just gave us "503 Service unavailable" message. >>>> Also no port 5672 >>>> listening. >>>> >>> >>> >>> Any chance you have a NIC that didn't come up? What is in the log of the >>> container itself? (ie. docker log rabbitmq). >>> >>> >>>> I searched this issue on the internet but only few information about >>>> this. One of solution is delete >>>> some files in mnesia folder, another is remove rabbitmq container and >>>> its volume then re-deploy. >>>> But both are not sure. Does anyone know how to solve it? >>>> >>>> >>>> Many thanks, >>>> Eddie. >>>> >>> >>> -Erik >>> >>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Thu Feb 6 08:08:25 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 16:08:25 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: (Click the "Send" button too fast...) Thanks to Dincer's information. Looks like the issue has already been resolved before but not merge to the branch we're using. I'll do the cherry-pick to stable/rocky later. -Eddie Eddie Yen 於 2020年2月6日 週四 下午3:57寫道: > Hi Dincer, > > I'm using Rocky, and seems like this fix didn't merge to stable/rocky. > And also what you wrote about flush host table issue in MAAS deployment. > > -Eddie > > Dincer Celik 於 2020年2月6日 週四 下午3:13寫道: > >> Hi Eddie, >> >> Seems like an issue[1] which has been fixed previously. Could you please >> let me know which version are you using? >> >> -osmanlicilegi >> >> [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 >> >> On 5 Feb 2020, at 14:33, Eddie Yen wrote: >> >> Today I tried to recovery RabbitMQ back, but still not useful, even >> delete everything >> about data and configs for RabbitMQ then re-deploy (without destroy). >> >> And I found that the /etc/hosts on every nodes all been flushed, the >> hostname >> resolve data created by kolla-ansible are gone. Checked and found that >> the MAAS >> just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which >> caused >> /etc/hosts been reset everytime when boot. >> >> Not sure it was a root cause or not but unfortunately I already reset >> whole RabbitMQ >> data, so only I can do is destroy and deploy again. Fortunately this >> cluster was just >> beginning so no VM launch, and no do complex setup yet. >> >> I think the issue may solved, although still need a time to investigate. 
>> Based on this >> experience, need to notice about this may going to happen if using MAAS >> to deploy >> the OS. >> >> -Eddie >> >> Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: >> >>> Hi Erik, >>> >>> I'm already checked NIC link and no issue found. Pinging the nodes each >>> other on each interfaces is OK. >>> And I'm not check docker logs about rabbitmq sbecause it works normally. >>> I'll check that out later. >>> >>> -Eddie >>> >>> Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: >>> >>>> ⁹ >>>> >>>> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >>>> >>>>> Hi everyone, >>>>> >>>>> We have the Kolla Openstack site, which is 3 HCI >>>>> (Controller+Compute) + 3 Storage (Ceph OSD) >>>>> site without internet. We did the shutdown few days ago since CNY >>>>> holidays. >>>>> >>>>> Today we re-launch whole cluster back. First we met the issue that >>>>> MariaDB containers keep >>>>> restarting, and we fixed by using mariadb_recovery command. >>>>> After that we check the status of each services, and found that all >>>>> services shown at >>>>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>>>> AMQP connection, >>>>> or other error found when check the downed service log. >>>>> >>>>> We tried reboot each servers but the situation still a same. Then we >>>>> found the RabbitMQ log not >>>>> updating, the last log still stayed at the date we shutdown. Logged in >>>>> to RabbitMQ container and >>>>> type "rabbitmqctl status" shows connection refused, and tried access >>>>> its web manager from >>>>> :15672 on browser just gave us "503 Service unavailable" message. >>>>> Also no port 5672 >>>>> listening. >>>>> >>>> >>>> >>>> Any chance you have a NIC that didn't come up? What is in the log of >>>> the container itself? (ie. docker log rabbitmq). >>>> >>>> >>>>> I searched this issue on the internet but only few information about >>>>> this. One of solution is delete >>>>> some files in mnesia folder, another is remove rabbitmq container and >>>>> its volume then re-deploy. >>>>> But both are not sure. Does anyone know how to solve it? >>>>> >>>>> >>>>> Many thanks, >>>>> Eddie. >>>>> >>>> >>>> -Erik >>>> >>>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 6 08:27:35 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Feb 2020 09:27:35 +0100 Subject: [queens]cinder][iscsi] issue In-Reply-To: References: Message-ID: Hello , On centos kvm nodes, setting skip_kpartx no in multipath.conf solved the problem and now os_brick can flush maps Ignazio Il giorno lun 3 feb 2020 alle ore 15:13 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello all we are testing openstack queens cinder driver for Unity iscsi > (driver cinder.volume.drivers.dell_emc.unity.Driver). > > The unity storage is a Unity600 Version 4.5.10.5.001 > > We are facing an issue when we try to detach volume from a virtual machine > with two or more volumes attached (this happens often but not always). > > We also facing the same issue live migrating a virtual machine. > > > Multimaph -f does not work because it returns "map in use" and the volume > is not detached. > > Attached here there hare nova-compute e volume logs in debug mode. > > Could anyone help me ? > > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From geguileo at redhat.com Thu Feb 6 09:15:12 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 6 Feb 2020 10:15:12 +0100 Subject: Active-Active Cinder + RBD driver + Co-ordination In-Reply-To: References: Message-ID: <20200206091512.lh5qh366prup6qop@localhost> On 05/02, Paul Browne wrote: > Hi list, > > I had a quick question about Active-Active in cinder-volume and > cinder-backup stable/stein and RBD driver, if anyone can help. > > Using only the Ceph RBD driver for volume backends, is it required to run > A-A cinder services with clustering configuration so that they form a > cluster? > > And, if so, is an external coordinator (redis/etcd/Consul) necessary, > again only using RBD driver? > Hi, For the time being the cluster concept only applies to the cinder-volume service. The cinder-backup service has 2 modes of operation: only backup from current node (so we can have different backup drivers on each node) deployed as A/P, and backup from any node (then all drivers must be the same) deployed as A/A. When deploying cinder-volume as active-active a coordinator is required to perform the functions of a DLM to respect mutual exclusion sections across the whole cluster. Drivers that can be used for the coordination are those available in Tooz [1] that support locks (afaik they all support them). If you don't form a cluster and deploy all cinder-volume services in A/A sharing the same host you'll end up in a world of pain, among other things, as soon as you add a new node or a cinder-volume service is restarted, as it will disturb all ongoing operations from the other cinder-volume services. I hope this helps. Cheers, Gorka. [1]: https://docs.openstack.org/tooz/latest/user/drivers.html > Best docs I could find on this so far were; > > https://docs.openstack.org/cinder/latest/contributor/high_availability.html > , > > I support more aimed at devs/contributoers than operators, but it's not > 100% clear to me on these questions > > Thanks, > Paul > > -- > ******************* > Paul Browne > Research Computing Platforms > University Information Services > Roger Needham Building > JJ Thompson Avenue > University of Cambridge > Cambridge > United Kingdom > E-Mail: pfb29 at cam.ac.uk > Tel: 0044-1223-746548 > ******************* From chkumar246 at gmail.com Thu Feb 6 09:42:16 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 6 Feb 2020 15:12:16 +0530 Subject: [tripleo] no rechecks Message-ID: Hello, TripleO jobs are failing with following bug. upstream tripleo jobs using os_tempest failing with ERROR: No matching distribution found for oslo.db===7.0.0 -> https://bugs.launchpad.net/tripleo/+bug/1862134 Fix is in progress: https://review.opendev.org/#/c/706196/ We need this patch to land before it's safe to blindly recheck. We appreciate your patience while we resolve the issues. Thanks, Chandan Kumar From florian at datalounges.com Thu Feb 6 10:01:42 2020 From: florian at datalounges.com (Florian Rommel) Date: Thu, 06 Feb 2020 12:01:42 +0200 Subject: [charms] modular deployment Message-ID: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Hi, we are starting to look into deploying Openstack with Charms ,however there seems to be NO documentation on how to separate ceph osd and mon out of the Openstack-base bundle. We already have a ceph cluster so we would like to reuse that instead of hyperconverging. Is there any way do do this with charms or do we need to look into openstack ansible? 
Thank you and best regards, //F -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Thu Feb 6 10:13:26 2020 From: james.page at canonical.com (James Page) Date: Thu, 6 Feb 2020 10:13:26 +0000 Subject: [charms] modular deployment In-Reply-To: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> References: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Message-ID: Hi Florian On Thu, Feb 6, 2020 at 10:03 AM Florian Rommel wrote: > Hi, we are starting to look into deploying Openstack with Charms ,however > there seems to be NO documentation on how to separate ceph osd and mon out > of the Openstack-base bundle. We already have a ceph cluster so we would > like to reuse that instead of hyperconverging. > > > > Is there any way do do this with charms or do we need to look into > openstack ansible? > This is not a typical deployment but is something that is possible - you can use the ceph-proxy to wire your existing ceph deployment into charm deployed OpenStack: https://jaas.ai/ceph-proxy Basically it acts as an intermediary between the OpenStack Charms and the existing Ceph deployment. Cheers James > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian at datalounges.com Thu Feb 6 10:15:38 2020 From: florian at datalounges.com (Florian Rommel) Date: Thu, 06 Feb 2020 12:15:38 +0200 Subject: [charms] modular deployment In-Reply-To: References: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Message-ID: Hi James, thanks for that.. I will check this out, this might do the trick from briefly checking it over.. Best regards, //F From: James Page Date: Thursday 6. February 2020 at 12.13 To: Florian Rommel Cc: "openstack-discuss at lists.openstack.org" Subject: Re: [charms] modular deployment Hi Florian On Thu, Feb 6, 2020 at 10:03 AM Florian Rommel wrote: Hi, we are starting to look into deploying Openstack with Charms ,however there seems to be NO documentation on how to separate ceph osd and mon out of the Openstack-base bundle. We already have a ceph cluster so we would like to reuse that instead of hyperconverging. Is there any way do do this with charms or do we need to look into openstack ansible? This is not a typical deployment but is something that is possible - you can use the ceph-proxy to wire your existing ceph deployment into charm deployed OpenStack: https://jaas.ai/ceph-proxy Basically it acts as an intermediary between the OpenStack Charms and the existing Ceph deployment. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 6 10:35:59 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 06 Feb 2020 11:35:59 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1416517604.208171.1580972979203@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> Message-ID: <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> Hi, The logs you provided doesn't seem to indicate any issues. Please provide logs of kuryr-daemon (kuryr-cni pod). Thanks, Michał On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > Hi, > I am trying to run kubelet in arm64 platform > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > 2. Generated kuryr-cni-arm64 container. > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > My master node in x86 installed successfully using devstack > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > COntroller logs: http://paste.openstack.org/show/789209/ > > Please help me to fix the issue > > Veera. > > > > > > > > > Regards, > Veera. From veeraready at yahoo.co.in Thu Feb 6 10:46:02 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 6 Feb 2020 10:46:02 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> Message-ID: <541855276.335476.1580985962108@mail.yahoo.com> Hi mdulko,Please find kuryr-cni logshttp://paste.openstack.org/show/789209/ Regards, Veera. On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: Hi, The logs you provided doesn't seem to indicate any issues. Please provide logs of kuryr-daemon (kuryr-cni pod). Thanks, Michał On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > Hi, > I am trying to run kubelet in arm64 platform > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > 2.    Generated kuryr-cni-arm64 container. > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > My master node in x86 installed successfully using devstack > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > COntroller logs: http://paste.openstack.org/show/789209/ > > Please help me to fix the issue > > Veera. > > > > > > > > > Regards, > Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Feb 6 11:07:24 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 6 Feb 2020 12:07:24 +0100 Subject: [ops][cinder] Moving volume to new type In-Reply-To: <20200130163623.alxwbl3jt5w2bldw@csail.mit.edu> References: <20200130163623.alxwbl3jt5w2bldw@csail.mit.edu> Message-ID: <20200206110724.mj36cproak2t67az@localhost> On 30/01, Jonathan Proulx wrote: > Hi All, > > I'm currently languishing on Mitaka so perhaps further back than help > can reach but...if anyone can tell me if this is something dumb I'm > doing or a know bug in mitaka that's preventing me for movign volumes > from one type to anyother it'd be a big help. > > In the further past I did a cinder backend migration by creating a new > volume type then changing all the existign volume sto the new type. > This is how we got from iSCSI to RBD (probably in Grizzly or Havana). > > Currently I'm starting to move from one RBD pool to an other and seems > like this should work in the same way. 
Both pools and types exist and > I can create volumes in either but when I run: > > openstack volume set --type ssd test-vol > > it rather silently fails to do anything (CLI returns 0), looking into > schedulerlogs I see: > > # yup 2 "hosts" to check > DEBUG cinder.scheduler.base_filter Starting with 2 host(s) get_filtered_objects > DEBUG cinder.scheduler.base_filter Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects > DEBUG cinder.scheduler.filters.capacity_filter Space information for volume creation on host nimbus-1 at ssdrbd#ssdrbd (requested / avail): 8/47527.78 host_passes > DEBUG cinder.scheduler.base_filter Filter CapacityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base > DEBUG cinder.scheduler.filters.capabilities_filter extra_spec requirement 'ssdrbd' does not match 'rbd' _satisfies_extra_specs /usr/lib/python2.7/dist- > DEBUG cinder.scheduler.filters.capabilities_filter host 'nimbus-1 at rbd#rbd': free_capacity_gb: 71127.03, pools: None fails resource_type extra_specs req > DEBUG cinder.scheduler.base_filter Filter CapabilitiesFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/ > > # after filtering we have one > DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1 at ssdrbd#ssdrbd': free_capacity_gb: 47527.78, pools: None] _get_weighted_candidates > > # but it fails? > ERROR cinder.scheduler.manager Could not find a host for volume 49299c0b-8bcf-4cdb-a0e1-dec055b0e78c with type bc2bc9ad-b0ad-43d2-93db-456d750f194d. Hi, This looks like you didn't say that it was OK to migrate volumes on the retype. Did you set the migration policy on the request to "on-demand": cinder retype --migration-policy on-demand test-vol ssd Cheers, Gorka. > > > successfully creating a volume in ssdrbd is identical to that point, except rather than ERROR on the last line it goes to: > > # Actually chooses 'nimbus-1 at ssdrbd#ssdrbd' as top host > > DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1 at ssdrbd#ssdrbd': free_capacity_gb: 47527.8, pools: None] _get_weighted_candidates > DEBUG cinder.scheduler.filter_scheduler Choosing nimbus-1 at ssdrbd#ssdrbd _choose_top_host > > # then goes and makes volume > > DEBUG oslo_messaging._drivers.amqpdriver CAST unique_id: 1b7a9d88402a41f8b889b88a2e2a198d exchange 'openstack' topic 'cinder-volume' _send > DEBUG cinder.scheduler.manager Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (e70dcc3f-7d88-4542-abff-f1a1293e90fb) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver > > > Anyone recognize this situation? > > Since I'm retiring the old spinning disks I can also "solve" this on > the Ceph side by changing the crush map such that the old rbd pool > just picks all ssds. So this isn't critical, but in the transitional > period until I have enough SSD capacity to really throw *everything* > over, there are some hot spot volumes it would be really nice to move > this way. 
> > Thanks, > -Jon > From mdulko at redhat.com Thu Feb 6 11:45:49 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 06 Feb 2020 12:45:49 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <541855276.335476.1580985962108@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> Message-ID: Hm, nothing too troubling there too, besides Kubernetes not answering on /healthz endpoint. Are those full logs, including the moment you tried spawning a container there? It seems like you only pasted the fragments with tracebacks regarding failures to read /healthz endpoint of kube-apiserver. That is another problem you should investigate - that causes Kuryr pods to restart. At first I'd disable the healthchecks (remove readinessProbe and livenessProbe from Kuryr pod definitions) and try to get fresh set of logs. On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > Hi mdulko, > Please find kuryr-cni logs > http://paste.openstack.org/show/789209/ > > > Regards, > Veera. > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > Hi, > > The logs you provided doesn't seem to indicate any issues. Please > provide logs of kuryr-daemon (kuryr-cni pod). > > Thanks, > Michał > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > Hi, > > I am trying to run kubelet in arm64 platform > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > 2. Generated kuryr-cni-arm64 container. > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > My master node in x86 installed successfully using devstack > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > Please help me to fix the issue > > > > Veera. > > > > > > > > > > > > > > > > > > Regards, > > Veera. > > From rico.lin.guanyu at gmail.com Thu Feb 6 15:22:24 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 6 Feb 2020 23:22:24 +0800 Subject: [tc] February meeting agenda In-Reply-To: References: Message-ID: Hi all The meeting logs are available here: http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-02-06-14.00.html Notice that some discussion continues at office hour time. Please check. Thank you everyone! On Wed, Feb 5, 2020 at 4:07 PM Rico Lin wrote: > Hello everyone, > > Our next meeting is happening this Thursday (the 6th), and the > agenda is, as usual, on the wiki! 
> > Here is a primer of the agenda for this month: > - Report on large scale sig > - Report on tc/uc merge > - report on the post for the analysis of the survey > - Report on the convo Telemetry > - Report on multi-arch SIG > - report on infra liaison and static hosting > - report on stable branch policy work > - report on the oslo metrics project > - report on the community goals for U and V, py2 drop > - report on release naming > - report on the ideas repo > - report on charter change > - Report on whether SIG guidelines worked > - volunteers to represent OpenStack at the OpenDev advisory board > - Report on the OSF board initiatives > - Dropping side projects: using golden signals > > See you all in meeting:) > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Feb 6 16:04:14 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 6 Feb 2020 10:04:14 -0600 Subject: [nova] Ussuri spec push (and scrub) Message-ID: Hi all. Ahead of spec freeze next Thursday (Feb 13) we've agreed to do a review day next Tuesday, Feb 11th. If you own a blueprint [1] that isn't already Design:Approved (there are 20 as of this writing) and are still interested in landing it in Ussuri, please make sure your spec is review-ready, and plan to be present in #openstack-nova on Tuesday so we can work through any issues quickly. If you are a nova maintainer, please plan as much time as possible on Tuesday to review open specs [2] and discuss them in IRC. As a reminder, soon after spec freeze I would like to scrub the list of Design:Approved blueprints ("if we're going to do this, here's how") to decide which should be Direction:Approved ("we're going to do this in Ussuri") and which should be deferred. I will send a separate email about this after the milestone. Thanks, efried [1] https://blueprints.launchpad.net/nova/ussuri [2] https://review.opendev.org/#/q/project:openstack/nova-specs+path:%255Especs/ussuri/approved/.*+status:open From openstack at fried.cc Thu Feb 6 16:04:35 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 6 Feb 2020 10:04:35 -0600 Subject: [nova][ptg] Vancouver Planning Message-ID: <07f5fe7b-8539-315b-e30f-cf0920ea0ccd@fried.cc> Nova- I've started an etherpad for the Victoria PTG [1]. I've been asked to let the organizers know how much space and time we're going to need within the next few weeks, so please register your attendance and add any topics you know of as soon as possible. Thanks, efried [1] https://etherpad.openstack.org/p/nova-victoria-ptg From mthode at mthode.org Thu Feb 6 16:40:31 2020 From: mthode at mthode.org (Matthew Thode) Date: Thu, 6 Feb 2020 10:40:31 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> <20200205230338.GB898532@zeong> Message-ID: <20200206164031.6wsm2li34equapno@mthode.org> On 20-02-06 08:35:03, Radosław Piliszek wrote: > Alrighty, folks. > > The summary is: orjson is bad due to requirement of PyO3 to run rust > nightly on non-x86_64. > > Then what about the other contestant, rapidjson? [1] > It's a wrapper over a popular C++ json library. 
> Both wrapper and lib look alive enough to be considered serious. > > [1] https://pypi.org/project/python-rapidjson/ > > -yoctozepto > Yep, rapidjson looks fine to me -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ignaziocassano at gmail.com Thu Feb 6 19:58:56 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Feb 2020 20:58:56 +0100 Subject: [Neutron][segments] Message-ID: Hello, I am reading about openstack neutron routed provider networks. If I understood well, using segments I can create more subnets within the same provider net vlan id. Correct? In documentation seems I must use different physical network for each segment. I am using only one physical network ...in other words I created an openvswitch with a bridge (br-ex) with an interface on which I receive all my vlans (trunk). Why must I use different physical net? Sorry, but my net skill is poor. Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Feb 6 20:40:59 2020 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 6 Feb 2020 15:40:59 -0500 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? In-Reply-To: References: Message-ID: <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> Hi Tushar, Great question. On 4/02/20 2:53 am, Patil, Tushar wrote: > Hi All, > > In tacker project, we are using heat API to create stack. > Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. OK, so the scaled unit is a stack containing two servers and two ports. > So internally heat will create two nested stacks and add following resources to it:- > > child stack1 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port > > child stack2 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port In fact, Heat will create 3 nested stacks - one child stack for the AutoScalingGroup that contains two Template resources, which each have a grandchild stack (the ones you list above). I'm sure you know this, but I mention it because it makes what I'm about to say below clearer. > Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. > > Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. That's entirely reasonable. > My question is after the stack is created for the first time, will it ever change the nested child stack id? Short answer: no. Long answer: yes ;) In general normal updates and such will never result in the (grand)child stack ID changing. Even if a resource inside the stack fails (so the stack gets left in UPDATE_FAILED state), the next update will just try to update it again in-place. Obviously if you scale down the AutoScalingGroup and then scale it back up again, you'll end up with the different grandchild stack there. The only other time it gets replaced is if you use the "mark unhealthy" command on the template resource in the child stack (i.e. 
the autoscaling group stack), or on the AutoScalingGroup resource itself in the parent stack. If you do this a whole new replacement (grand)child stack will get replaced. Marking only the resources within the grandchild stack (e.g. VDU1) will *not* cause the stack to be replaced, so you should be OK. In code: https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/stack_resource.py#L106-L135 Hope that helps. Feel free to ask if you need more clarification. cheers, Zane. From skaplons at redhat.com Thu Feb 6 20:55:52 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 6 Feb 2020 21:55:52 +0100 Subject: [neutron][ptg] Vancouver attendance and planning Message-ID: Hi Neutrinos, As You probably know, Victoria PTG will be in Vancouver in June. I have been asked by organisers how much space and time we will need for Neutron so I started etherpad [1]. Please put there Your name if You are planning to go to Vancouver. Even if it’s not confirmed yet. I need to have number of people before Sunday, March 2nd. Also if You have any ideas about topics which we should discuss there, please write them in this etherpad too. [1] https://etherpad.openstack.org/p/neutron-victoria-ptg — Slawek Kaplonski Senior software engineer Red Hat From kennelson11 at gmail.com Thu Feb 6 21:00:12 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 6 Feb 2020 13:00:12 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 In-Reply-To: References: Message-ID: Woohoo! Looks awesome Trinh! Thanks for getting things started! -Kendall (diablo_rojo) On Wed, Feb 5, 2020 at 5:25 PM Trinh Nguyen wrote: > Thank Kendall for the effort. Searchlight has landed the doc on master :D > [1][2] > > [1] https://review.opendev.org/#/c/705968/ > [2] > https://docs.openstack.org/searchlight/latest/contributor/contributing.html > > > > On Wed, Feb 5, 2020 at 4:20 AM Kendall Nelson > wrote: > >> >> Hello All! >> >> At last a dedicated update solely to the Contrib & PTL Docs community >> goal! Get excited :) >> >> At this point the goal has been accepted[1], and the template[2] has been >> created and merged! >> >> So, the next step is for all projects to use the cookiecutter template[2] >> and fill in the extra details after you generate the .rst that should auto >> populate some of the information. >> >> As you are doing that, please assign yourself the associated tasks in the >> story I created[3]. >> >> If you have any questions or concerns, please let me know! >> >> -Kendall Nelson (diablo_rojo) >> >> >> [1] Goal: >> https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html >> [2] Docs Template: >> https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst >> [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 >> >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Feb 7 00:23:51 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Feb 2020 18:23:51 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> Message-ID: <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- > ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > > I am writing at top now for easy ready. > > > > * Gate status: > > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for > - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > > the tempest tox env with master u-c needs to be fixed[2]. > > While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for > stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems > to cap the upper-constraint like it was proposed by chandan[1]. > > NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain > such cap only for Tempest & its plugins till stable/rocky EOL. Updates: ====== Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. New bug: ======= Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True try to find plugins on py3 and fail. The solution is to replace the 'all-plugin' tox env to 'all'. I am fixing it for designate. But due to grenade job nature, we need to reverse the fixes here, first, merge the stable/stein and then stable/train. Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. -gmann > > Background on this issue: > -------------------------------- > > Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on > stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with > stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. 
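Illustratively, the mechanism in play is the constraints line in Tempest's tox.ini (link [2] below). Paraphrased rather than copied verbatim, it looks roughly like the fragment that follows, and it defaults to master upper-constraints unless the environment points UPPER_CONSTRAINTS_FILE at a stable file such as https://releases.openstack.org/constraints/upper/rocky:

    # Paraphrased fragment, not a verbatim copy of Tempest's tox.ini.
    [testenv]
    deps =
        -c{env:UPPER_CONSTRAINTS_FILE:https://releases.openstack.org/constraints/upper/master}
        -r{toxinidir}/requirements.txt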
> > I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. > ------------------------------------------- > > [1] https://review.opendev.org/#/c/705685/ > [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 > [3] https://review.opendev.org/#/c/705089 > [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml > > -gmann > > > > - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > > - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > > > [1] https://review.opendev.org/#/c/705089/ > > [2] https://review.opendev.org/#/c/705870/ > > > > -gmann > > > > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > > of 'EOLing python2 drama' in subject :). > > > > > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 > > > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > > > to install the latest neutron-lib and failing. > > > > > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > > > from master and kepe testing py2 on stable bracnhes. > > > > > > > > We have two way to fix this: > > > > > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > > > I am trying this first[2] and testing[3]. > > > > > > > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > > > > > Tried option#2: > > > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > > > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > > > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > > > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > > > or distro-specific job like centos7 etc where we have < py3.6. > > > > > > Overall this option did not work well as this need lot of hacks depends on the distro. 
I am dropping this option for our > > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > > Testing your cloud with the latest Tempest is the best possible way. > > > > > > Going with option#1: > > > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > > > for all possible distro/py version. > > > > > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > > > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > > > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > > > on >py3.6 env(venv or separate node). > > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > > > 2.Modify Tempest tox env basepython to py3 > > > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > > > like fedora or future distro > > > *Patch is up- https://review.opendev.org/#/c/704688/2 > > > > > > 3. Use compatible Tempest & its plugin tag for distro having > > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > > > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > > > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > > > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > > > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > > in-tree plugins for their stable branch testing. > > > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > > > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > > > in neutron-vpnaas case. This will be easy for future maintenance also. > > > > > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. 
> > > > > > [1] https://review.opendev.org/#/c/703476/ > > > [2] https://review.opendev.org/#/c/703011/ > > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > > > -gmann > > > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > > [2] https://review.opendev.org/#/c/703011/ > > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From rosmaita.fossdev at gmail.com Fri Feb 7 02:40:02 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 6 Feb 2020 21:40:02 -0500 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting Message-ID: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> Liang Fang's volume-local-cache spec has gotten stuck because the Cinder team doesn't want to approve something that won't get approved on the Nova side and vice-versa. We discussed the spec at this week's Cinder meeting and there's sufficient interest to continue with it. In order to keep things moving along, we'd like to have a cross-project video conference next week instead of waiting for the PTG. I think we can get everything covered in a half-hour. I've put together a doodle poll to find a reasonable day and time: https://doodle.com/poll/w8ttitucgyqwe5yc Please take the poll before 17:00 UTC on Saturday 8 February. I'll send out an announcement shortly after that letting you know what time's been selected. I'll send out connection info once the meeting's been set. The meeting will be recorded and interested parties who can't make it can always leave notes on the discussion etherpad. Info: - cinder spec: https://review.opendev.org/#/c/684556/ - nova spec: https://review.opendev.org/#/c/689070/ - etherpad: https://etherpad.openstack.org/p/volume-local-cache cheers, brian From chkumar246 at gmail.com Fri Feb 7 04:46:00 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 7 Feb 2020 10:16:00 +0530 Subject: [tripleo] no rechecks In-Reply-To: References: Message-ID: On Thu, Feb 6, 2020 at 3:12 PM Chandan kumar wrote: > > Hello, > > TripleO jobs are failing with following bug. > > upstream tripleo jobs using os_tempest failing with ERROR: No matching > distribution found for oslo.db===7.0.0 -> > https://bugs.launchpad.net/tripleo/+bug/1862134 > > Fix is in progress: https://review.opendev.org/#/c/706196/ > > We need this patch to land before it's safe to blindly recheck. > We appreciate your patience while we resolve the issues. > Gate is fixed now. Please go ahead with rechecks. Thanks, Chandan Kumar From agarwalvishakha18 at gmail.com Fri Feb 7 07:20:33 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Fri, 7 Feb 2020 12:50:33 +0530 Subject: [keystone] Keystone Team Update - Week of 3 February 2020 Message-ID: # Keystone Team Update - Week of 3 February 2020 ## News ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [1] [1] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 13 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 27 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli * Special Requests https://review.opendev.org/#/c/705887/ Drop py3.5 from tempest plugins ## Bugs This week we opened 2 new bugs and closed 3. Bugs opened (2) Bug #1861695 (keystoneauth:Undecided): Programming error choosing an endpoint. - Opened by Raphaël Droz https://bugs.launchpad.net/keystoneauth/+bug/1861695 Bug #1862035 (keystoneauth:Undecided): Make possible to pass TCP_USER_TIMEOUT in keystoneauth1 session - Opened by Anton Kurbatov https://bugs.launchpad.net/keystoneauth/+bug/1862035 Bugs closed (3) Bug #1756190 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1756190 Bug #1853839 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1853839 Bug #1856286 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1856286 ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html Next week is spec freeze. One month from now is feature proposal freeze, by which point all code for new features should be proposed and ready for review - no WIPs. This will give us 4 weeks to review. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From katonalala at gmail.com Fri Feb 7 08:22:12 2020 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 7 Feb 2020 09:22:12 +0100 Subject: [Neutron][segments] In-Reply-To: References: Message-ID: Hi, I assume you refer to this documentation (just to be sure that others have the same base): https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html Based on that (and my limited knowledge of the usecases of this feature): To avoid large l2 tenant networks and small tenant networks which the user has to know and select, routed provider nets "provide" the option to use a provider network (characterized by a segment: physical network - segmentation type - segmentation id) and you can add more segments (as in the example for example with new VLAN id, but on the same physnet on other compute) and add extra subnet range for it which is specific for that segment (i.e.: a group of computes). With this the user will have 1 network to boot VMs on, but openstack (neutron - nova - placement) will decide where to put the VM which segment and subnet to use. I mentioned placement: The feature should work in a way that based on information provided by neutron (during segment & subnet creation) placement knows how many free IP addresses are available on a segment, and based on this places VMs to hosts, that is not working as I know, see: https://review.opendev.org/656885 & https://review.opendev.org/665155 Regards Lajos Ignazio Cassano ezt írta (időpont: 2020. febr. 6., Cs, 21:04): > Hello, > I am reading about openstack neutron routed provider networks. > If I understood well, using segments I can create more subnets within the > same provider net vlan id. > Correct? 
> In documentation seems I must use different physical network for each > segment. > I am using only one physical network ...in other words I created an > openvswitch with a bridge (br-ex) with an interface on which I receive all > my vlans (trunk). > Why must I use different physical net? > Sorry, but my net skill is poor. > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Feb 7 08:46:28 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Fri, 7 Feb 2020 08:46:28 +0000 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting In-Reply-To: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> References: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> Message-ID: <1581065185.279185.6@est.tech> On Thu, Feb 6, 2020 at 21:40, Brian Rosmaita wrote: > Liang Fang's volume-local-cache spec has gotten stuck because the > Cinder team doesn't want to approve something that won't get approved > on the Nova side and vice-versa. > > We discussed the spec at this week's Cinder meeting and there's > sufficient interest to continue with it. In order to keep things > moving along, we'd like to have a cross-project video conference next > week instead of waiting for the PTG. > > I think we can get everything covered in a half-hour. I've put > together a doodle poll to find a reasonable day and time: > https://doodle.com/poll/w8ttitucgyqwe5yc Something is wrong with the poll. I'm sitting in UTC+1 and it is offering choices like Monday 8:30 am - 9:00 am for me. But the description text says 13:30-14:00 UTC Monday-Thursday. T Monday, Wednesday, Thursday, 13:30 - 14:00 UTC works for me. The 02:30-03:00 UTC slot is just in the middle of my night. Cheers, gibi > > Please take the poll before 17:00 UTC on Saturday 8 February. I'll > send out an announcement shortly after that letting you know what > time's been selected. > > I'll send out connection info once the meeting's been set. The > meeting will be recorded and interested parties who can't make it can > always leave notes on the discussion etherpad. > > Info: > - cinder spec: https://review.opendev.org/#/c/684556/ > - nova spec: https://review.opendev.org/#/c/689070/ > - etherpad: https://etherpad.openstack.org/p/volume-local-cache > > cheers, > brian > From ignaziocassano at gmail.com Fri Feb 7 10:32:05 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 7 Feb 2020 11:32:05 +0100 Subject: [Neutron][segments] In-Reply-To: References: Message-ID: Many thanks, Layos Il giorno ven 7 feb 2020 alle ore 09:22 Lajos Katona ha scritto: > Hi, > > I assume you refer to this documentation (just to be sure that others have > the same base): > https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html > > Based on that (and my limited knowledge of the usecases of this feature): > To avoid large l2 tenant networks and small tenant networks which the user > has to know and select, > routed provider nets "provide" the option to use a provider network > (characterized by a segment: physical network - segmentation type - > segmentation id) > and you can add more segments (as in the example for example with new VLAN > id, but on the same physnet on other compute) and add extra subnet range > for it > which is specific for that segment (i.e.: a group of computes). 
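A rough, untested sketch of that workflow with the openstacksdk follows (names, VLAN IDs and CIDRs are invented; the CLI equivalents are "openstack network segment create" and "openstack subnet create --network-segment"):

    # Untested sketch; all names, VLAN IDs and CIDRs are invented.
    import openstack

    conn = openstack.connect(cloud="mycloud")   # provider attributes need admin

    # One routed network; its first segment comes from the provider attributes.
    net = conn.network.create_network(
        name="routed-net",
        provider_network_type="vlan",
        provider_physical_network="physnet1",
        provider_segmentation_id=101,
    )

    # A second segment on the same network (another VLAN on the same physnet).
    seg2 = conn.network.create_segment(
        network_id=net.id, network_type="vlan",
        physical_network="physnet1", segmentation_id=102,
    )
    seg1 = next(s for s in conn.network.segments(network_id=net.id)
                if s.segmentation_id == 101)

    # Each segment gets its own subnet range.
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr="10.10.1.0/24", segment_id=seg1.id)
    conn.network.create_subnet(network_id=net.id, ip_version=4,
                               cidr="10.10.2.0/24", segment_id=seg2.id)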
> With this the user will have 1 network to boot VMs on, but openstack > (neutron - nova - placement) will decide where to put the VM which segment > and subnet to use. > > I mentioned placement: The feature should work in a way that based on > information provided by neutron (during segment & subnet creation) > placement knows > how many free IP addresses are available on a segment, and based on this > places VMs to hosts, that is not working as I know, see: > https://review.opendev.org/656885 & https://review.opendev.org/665155 > > Regards > Lajos > > Ignazio Cassano ezt írta (időpont: 2020. febr. > 6., Cs, 21:04): > >> Hello, >> I am reading about openstack neutron routed provider networks. >> If I understood well, using segments I can create more subnets within the >> same provider net vlan id. >> Correct? >> In documentation seems I must use different physical network for each >> segment. >> I am using only one physical network ...in other words I created an >> openvswitch with a bridge (br-ex) with an interface on which I receive all >> my vlans (trunk). >> Why must I use different physical net? >> Sorry, but my net skill is poor. >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 7 12:19:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 06:19:04 -0600 Subject: [all] Nominations for the "W" release name In-Reply-To: References: Message-ID: <9c93db20-1912-cc6d-39da-1afb2feb4f52@gmx.com> We're down to the last few hours for naming the W release. Please add any ideas before the deadline. We will then have some time to discuss anything with the names and get the poll set up for election the following week. Thanks! Sean On 1/21/20 9:48 AM, Sean McGinnis wrote: > Hello all, > > We get to be a little proactive this time around and get the release > name chosen for the "W" release. Time to start thinking of good names > again! > > Process Changes > --------------- > > There are a couple of changes to be aware of with our naming process. In > the past, we had always based our naming criteria on something > geographically local to the Summit location. With the event changes to > no longer have two large Summit-type events per year, we have tweaked > our process to open things up and make it hopefully a little easier to > pick a good name that the community likes. > > There are a couple of significant changes. First, names can now be > proposed for anything that starts with the appropriate letter. It is no > longer tied to a specific geographic region. Second, in order to > simplify the process, the electorate for the poll will be the OpenStack > Technical Committee. Full details of the release naming process can be > found here: > > https://governance.openstack.org/tc/reference/release-naming.html > > Name Selection > -------------- > > With that, the nomination period for the "W" release name is now open. > Please add suitable names to: > > https://wiki.openstack.org/wiki/Release_Naming/W_Proposals > > We will accept nominations until February 7, 2020 at 23:59:59 UTC. We > will then have a brief period for any necessary discussions and to get > the poll set up, with the TC electorate voting starting by February 17, > 2020 and going no longer than February 23, 2020. > > Based on past timing with trademark and copyright reviews, we will > likely have an official release name by mid to late March. > > Happy naming! 
> > Sean > > From sean.mcginnis at gmx.com Fri Feb 7 13:50:17 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 07:50:17 -0600 Subject: [release] Release countdown for week R-13, February 10-14 Message-ID: <20200207135017.GA1488525@sm-workstation> Development Focus ----------------- The Ussuri-2 milestone is next week, on February 13! Ussuri-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/ussuri/schedule.html for details. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTL's and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed. Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. Next week is also the deadline to freeze the contents of the final release. All new 'Ussuri' deliverables need to have a deliverable file (https://opendev.org/openstack/releases/src/branch/master/deliverables/ussuri ) and need to have done a release by milestone-2. The following new deliverables have not had a release yet, and will not be included in Ussuri unless a release is requested for them in the coming week: - adjutant, adjutant-ui, python-adjutantclient (Adjutant) - barbican-ui (Barbican) - js-openstack-lib (OpenStackSDK) - ovn-octavia-provider (Neutron) - sushy-cli (Ironic) - tripleo-operator-ansible (TripleO) Changes proposing those deliverables for inclusion in Ussuri have been posted, please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Ussuri, or -1 if you need one more cycle to be ready. Upcoming Deadlines & Dates -------------------------- Ussuri-2 Milestone: February 13 (R-13 week) From sean.mcginnis at gmx.com Fri Feb 7 13:52:14 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 07:52:14 -0600 Subject: [Release-job-failures] Release of openstack/mox3 for ref refs/tags/1.0.0 failed In-Reply-To: References: Message-ID: <20200207135214.GB1488525@sm-workstation> On Fri, Feb 07, 2020 at 01:16:38PM +0000, zuul at openstack.org wrote: > Build failed. > > - release-openstack-python https://zuul.opendev.org/t/openstack/build/720dc1833f464e2db76b90d1c8437883 : SUCCESS in 3m 48s > - announce-release https://zuul.opendev.org/t/openstack/build/d85b18ed6e2a47f98e96c1ad168ae98a : FAILURE in 4m 00s > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/c2abc022840846c7a87b5804580492ee : SUCCESS in 3m 51s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures There was a temporary SMTP failure in sending the release announcement for mox3. 
All other release activities were successful, so I think this error can safely be ignored. Sean From emilien at redhat.com Fri Feb 7 15:18:24 2020 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 7 Feb 2020 10:18:24 -0500 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Thanks for joining, and the great questions, I hope you learned something, and that we can do it again soon. Here is the recording: https://bluejeans.com/s/vTSAY Slides: https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: > Of course it'll be recorded and the link will be available for everyone. > > On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, > wrote: > >> Hi folks, >> >> On Friday I'll do a deep-dive on where we are with container tools. >> It's basically an update on the removal of Paunch, what will change etc. >> >> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >> questions or give feedback. >> >> https://bluejeans.com/6007759543 >> Link of the slides: >> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >> -- >> Emilien Macchi >> > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccamacho at redhat.com Fri Feb 7 15:23:24 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Fri, 7 Feb 2020 16:23:24 +0100 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Thanks Emilien for the session, I wasn't able to be present but I'll proceed to edit/publish it in the TripleO youtube channel. Thanks! On Fri, Feb 7, 2020 at 4:21 PM Emilien Macchi wrote: > Thanks for joining, and the great questions, I hope you learned something, > and that we can do it again soon. > > Here is the recording: > https://bluejeans.com/s/vTSAY > Slides: > https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit > > > > On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: > >> Of course it'll be recorded and the link will be available for everyone. >> >> On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, >> wrote: >> >>> Hi folks, >>> >>> On Friday I'll do a deep-dive on where we are with container tools. >>> It's basically an update on the removal of Paunch, what will change etc. >>> >>> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >>> questions or give feedback. >>> >>> https://bluejeans.com/6007759543 >>> Link of the slides: >>> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >>> -- >>> Emilien Macchi >>> >> > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gryf73 at gmail.com Fri Feb 7 15:25:02 2020 From: gryf73 at gmail.com (Roman Dobosz) Date: Fri, 7 Feb 2020 16:25:02 +0100 Subject: [kuryr] macvlan driver looking for an owner Message-ID: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> Hi, Recently we migrated most of the drivers and handler to OpenStackSDK, instead of neutron-client[1], and have a plan to drop neutron-client usage. One of the drivers - MACVLAN based interfaces for nested containers - wasn't migrated due to lack of confidence backed up with sufficient tempest tests. Therefore we are looking for a maintainer, who will take care about both - migration to the openstacksdk (which I can help with) and to provide appropriate environment and tests. 
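For anyone considering picking this up: the client-to-SDK part is largely mechanical. A hedged, illustrative sketch of the before/after pattern (not actual kuryr-kubernetes code; the instance UUID is invented):

    # Illustrative only: not actual kuryr-kubernetes code.
    # Before (python-neutronclient style):
    #   from neutronclient.v2_0 import client as neutron_client
    #   neutron = neutron_client.Client(session=sess)
    #   ports = neutron.list_ports(device_id=vm_id)["ports"]
    # After (openstacksdk style):
    import openstack

    vm_id = "11111111-2222-3333-4444-555555555555"   # example instance UUID
    conn = openstack.connect(cloud="envvars")         # or a clouds.yaml entry

    for port in conn.network.ports(device_id=vm_id):  # yields Port objects
        # SDK resources expose attributes instead of nested dicts.
        print(port.id, port.mac_address, port.fixed_ips)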
In case there is no interest on continuing support for this driver we will deprecate it and remove from the source tree possibly in this release. [1] https://review.opendev.org/#/q/topic:bp/switch-to-openstacksdk -- Cheers, Roman Dobosz gryf at freenode From Albert.Braden at synopsys.com Fri Feb 7 19:15:58 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 7 Feb 2020 19:15:58 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' Message-ID: If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see "Loaded 2 Fernet keys" every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Feb 7 20:13:57 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 7 Feb 2020 12:13:57 -0800 Subject: [manila][ptg] Victoria Cycle PTG in Vancouver - Planning Message-ID: Hello Zorillas and other interested Stackers, We've requested space to gather at the Open Infrastructure PTG event in Vancouver, BC between June 8th-11th 2020. This event gives us the opportunity to catch up in person (and via telepresence) and discuss the next set of goals for manila and its ecosystem of projects. As has been the norm, there's now a planning etherpad for us to collect topics and vote upon [1]. Depending on how much space/time we are budgeted, we will select and schedule these topics. Please take a look and add your topics to the etherpad. Thanks, Goutham [1] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning From Albert.Braden at synopsys.com Fri Feb 7 22:25:46 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 7 Feb 2020 22:25:46 +0000 Subject: Virtio memory balloon driver In-Reply-To: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: I opened a bug: https://bugs.launchpad.net/nova/+bug/1862425 -----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. 
spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. looking at https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5842-2DL5852&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=_kEGfZqTkPscjy0GJB2N_WBXRJPEt2400ADV12hhxR8&e= it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. if that is the case we could track this as a bug and potentially backport it. > > Console log: https://urldefense.proofpoint.com/v2/url?u=https-3A__f.perl.bot_p_njvgbm&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=5J3hH_mxdtOyNFqbW6j9yGiSyMXmhy3bXrmXRHkJ9I0&e= > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. 
> Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From ignaziocassano at gmail.com Sat Feb 8 14:55:21 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 15:55:21 +0100 Subject: [Queens][neutron] multiple networks with same vlan id Message-ID: Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. I tried with success on vcenter and it works. On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. Any workaround ? I am also trying with segments but they do not seem to fit my case. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 14:59:03 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 15:59:03 +0100 Subject: [quuens] neutron vxlan to vxlan Message-ID: Hello, I have an openstack installation and a vcenter installation with nsxv. Vcenter is not under openstack Any solution for vxlan to vxlan communication between them? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sat Feb 8 16:21:53 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 8 Feb 2020 11:21:53 -0500 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: References: Message-ID: Are you trying to create a new network each time? There is probably some different terminology in openstack vs vmware A network in openstack defines the underlying L2 switching device/ vlan/ vxlan / gre .. etc You can add as many subnets to a network as you would like to, and host routes can be added for each subnet. 
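As a concrete, untested illustration (network name, CIDRs and routes are invented; host routes hang off each subnet rather than the network, and the CLI equivalent is "openstack subnet create ... --host-route destination=CIDR,gateway=IP"):

    # Untested sketch with the openstacksdk; all names/CIDRs are invented.
    import openstack

    conn = openstack.connect(cloud="mycloud")
    net = conn.network.find_network("vlan123-net")

    # Two subnets on the same (single VLAN) network, each with its own host routes.
    conn.network.create_subnet(
        network_id=net.id, ip_version=4, cidr="192.168.10.0/24",
        host_routes=[{"destination": "10.0.0.0/8", "nexthop": "192.168.10.254"}],
    )
    conn.network.create_subnet(
        network_id=net.id, ip_version=4, cidr="192.168.20.0/24",
        host_routes=[{"destination": "172.16.0.0/12", "nexthop": "192.168.20.254"}],
    )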
On Sat, Feb 8, 2020 at 10:00 AM Ignazio Cassano wrote: > > Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. > I tried with success on vcenter and it works. > On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. > Any workaround ? > I am also trying with segments but they do not seem to fit my case. > Regards > Ignazio > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From romain.chanu at univ-lyon1.fr Sat Feb 8 17:11:31 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Sat, 8 Feb 2020 17:11:31 +0000 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: References: Message-ID: <1581181900656.35220@univ-lyon1.fr> Hello, In the case of Openstack the vlan driver permits to isolate each project's layer 2. This way each project's L3 can overlaps without any problem. You can create many L3 segments within only one VLAN ID. I do not use vcenter but for old ESXi it was "possible to create" many vswitches whom contains same vlan ID, but this is very different and vmware misuses term of switch. Best regards, Romain . ________________________________ From: Ignazio Cassano Sent: Saturday, February 8, 2020 3:55 PM To: openstack-discuss Subject: [Queens][neutron] multiple networks with same vlan id Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. I tried with success on vcenter and it works. On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. Any workaround ? I am also trying with segments but they do not seem to fit my case. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.chanu at univ-lyon1.fr Sat Feb 8 17:29:42 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Sat, 8 Feb 2020 17:29:42 +0000 Subject: [quuens] neutron vxlan to vxlan In-Reply-To: References: Message-ID: <1581182992368.92191@univ-lyon1.fr> Hello, You could do it if you use linuxbridge driver, arp_responder has to be set to false. VMware's VNI has to be set in multicast mode, then if OS et NSX use same multicast IP VM should be able to communicate. If your ESX et OS's hypervisors are not conencted to same network, you have to enable PIM in your router. Anyway this configuration is a bit dirty and you should look a proper way to make it works over L3. A VPN inside each subnets with static routes is cleaner and less risky. Best regards, Romain ________________________________ From: Ignazio Cassano Sent: Saturday, February 8, 2020 3:59 PM To: openstack-discuss Subject: [quuens] neutron vxlan to vxlan Hello, I have an openstack installation and a vcenter installation with nsxv. Vcenter is not under openstack Any solution for vxlan to vxlan communication between them? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:48:45 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:48:45 +0100 Subject: [quuens] neutron vxlan to vxlan In-Reply-To: <1581182992368.92191@univ-lyon1.fr> References: <1581182992368.92191@univ-lyon1.fr> Message-ID: Thanks Roman. 
I think secondo way you suggested is better than first. Ignazio Il Sab 8 Feb 2020, 18:29 CHANU ROMAIN ha scritto: > Hello, > > > You could do it if you use linuxbridge driver, arp_responder has to be set > to false. VMware's VNI has to be set in multicast mode, then if OS et NSX > use same multicast IP VM should be able to communicate. > > > If your ESX et OS's hypervisors are not conencted to same network, you > have to enable PIM in your router. > > > Anyway this configuration is a bit dirty and you should look a proper way > to make it works over L3. A VPN inside each subnets with static routes is > cleaner and less risky. > > > Best regards, > > Romain > > > > ------------------------------ > *From:* Ignazio Cassano > *Sent:* Saturday, February 8, 2020 3:59 PM > *To:* openstack-discuss > *Subject:* [quuens] neutron vxlan to vxlan > > Hello, I have an openstack installation and a vcenter installation with > nsxv. > Vcenter is not under openstack > Any solution for vxlan to vxlan communication between them? > Regards > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:53:54 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:53:54 +0100 Subject: [Queens][neutron] multiple networks with same vlan id. In-Reply-To: References: Message-ID: Hello Donny, I do not want to see long routing tables on instances. An instance in a network with many subnets, receives routing tables for all subnets. Ignazio Il Sab 8 Feb 2020, 17:22 Donny Davis ha scritto: > Are you trying to create a new network each time? > There is probably some different terminology in openstack vs vmware > > A network in openstack defines the underlying L2 switching device/ > vlan/ vxlan / gre .. etc > > You can add as many subnets to a network as you would like to, and > host routes can be added for each subnet. > > On Sat, Feb 8, 2020 at 10:00 AM Ignazio Cassano > wrote: > > > > Hello, I trying to create multiple networks with the same vlan id and > different address but it is impossibile because neutron returns vlan id is > already present. > > I tried with success on vcenter and it works. > > On openstack I can create more subnets with different adresses under the > same vlan id, but instances receive static routes from dhcp for all subnets. > > Any workaround ? > > I am also trying with segments but they do not seem to fit my case. > > Regards > > Ignazio > > > > > > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:56:31 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:56:31 +0100 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: <1581181900656.35220@univ-lyon1.fr> References: <1581181900656.35220@univ-lyon1.fr> Message-ID: Many thanks. Ignazio Il Sab 8 Feb 2020, 18:11 CHANU ROMAIN ha scritto: > Hello, > > > In the case of Openstack the vlan driver permits to isolate each project's > layer 2. This way each project's L3 can overlaps without any problem. You > can create many L3 segments within only one VLAN ID. > > > I do not use vcenter but for old ESXi it was "possible to create" many > vswitches whom contains same vlan ID, but this is very different and vmware > misuses term of switch. > > > Best regards, > > Romain > . 
> ------------------------------ > *From:* Ignazio Cassano > *Sent:* Saturday, February 8, 2020 3:55 PM > *To:* openstack-discuss > *Subject:* [Queens][neutron] multiple networks with same vlan id > > Hello, I trying to create multiple networks with the same vlan id and > different address but it is impossibile because neutron returns vlan id is > already present. > I tried with success on vcenter and it works. > On openstack I can create more subnets with different adresses under the > same vlan id, but instances receive static routes from dhcp for all subnets. > Any workaround ? > I am also trying with segments but they do not seem to fit my case. > Regards > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Feb 8 18:00:01 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 8 Feb 2020 18:00:01 +0000 Subject: [Queens][neutron] multiple networks with same vlan id. In-Reply-To: References: Message-ID: <20200208180001.gjegpvkw3b6mxg3h@yuggoth.org> On 2020-02-08 18:53:54 +0100 (+0100), Ignazio Cassano wrote: [...] > I do not want to see long routing tables on instances. An instance > in a network with many subnets, receives routing tables for all > subnets. [...] And as we've discussed recently on this list, if you're relying on DHCP there's a hard limit to the size of the route list it can provide in leases anyway, so the shorter the routing table per Neutron network, the better. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Sat Feb 8 18:28:22 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 8 Feb 2020 13:28:22 -0500 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting In-Reply-To: <1581065185.279185.6@est.tech> References: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> <1581065185.279185.6@est.tech> Message-ID: On 2/7/20 3:46 AM, Balázs Gibizer wrote: > > > On Thu, Feb 6, 2020 at 21:40, Brian Rosmaita > wrote: >> Liang Fang's volume-local-cache spec has gotten stuck because the >> Cinder team doesn't want to approve something that won't get approved >> on the Nova side and vice-versa. >> >> We discussed the spec at this week's Cinder meeting and there's >> sufficient interest to continue with it. In order to keep things >> moving along, we'd like to have a cross-project video conference next >> week instead of waiting for the PTG. >> >> I think we can get everything covered in a half-hour. I've put >> together a doodle poll to find a reasonable day and time: >> https://doodle.com/poll/w8ttitucgyqwe5yc > > Something is wrong with the poll. I'm sitting in UTC+1 and it is > offering choices like Monday 8:30 am - 9:00 am for me. But the > description text says 13:30-14:00 UTC Monday-Thursday. > T > Monday, Wednesday, Thursday, 13:30 - 14:00 UTC works for me. The > 02:30-03:00 UTC slot is just in the middle of my night. Thanks for letting me know. I get inconsistent TZ results from the interface, too. > > Cheers, > gibi > > >> >> Please take the poll before 17:00 UTC on Saturday 8 February. I'll >> send out an announcement shortly after that letting you know what >> time's been selected. >> >> I'll send out connection info once the meeting's been set. The >> meeting will be recorded and interested parties who can't make it can >> always leave notes on the discussion etherpad. 
>> >> Info: >> - cinder spec: https://review.opendev.org/#/c/684556/ >> - nova spec: https://review.opendev.org/#/c/689070/ >> - etherpad: https://etherpad.openstack.org/p/volume-local-cache >> >> cheers, >> brian >> > > From rosmaita.fossdev at gmail.com Sat Feb 8 18:46:23 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 8 Feb 2020 13:46:23 -0500 Subject: [cinder][nova] volume-local-cache meeting day/time set Message-ID: I picked the day/time with the most votes (9/10, sorry Rajat), but as gibi pointed out in the other thread, it may not have been obvious what you actually voted for. The good news is that the selected day is late in the week, so if you look at this time and say WTF?, let me know and we can scramble to try to find another day/time (though hopefully not--we're trying to find a time that works stretching from Minneapolis to Shanghai). Meeting time: 13:30-14:00 UTC Thursday 13 February 2020 Location: https://bluejeans.com/3228528973 Topic: volume-local-cache specs: cinder: https://review.opendev.org/#/c/684556/ nova: https://review.opendev.org/#/c/689070/ Etherpad: https://etherpad.openstack.org/p/volume-local-cache The meeting will be recorded. Feel free to leave comments on the etherpad if you can't make it but want something addressed. cheers, brian From rajatdhasmana at gmail.com Sat Feb 8 19:15:59 2020 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Sun, 9 Feb 2020 00:45:59 +0530 Subject: [cinder][nova] volume-local-cache meeting day/time set In-Reply-To: References: Message-ID: Yes, I was suspicious about the options as timings showed one as AM and the other PM. This timing certainly suits me. And thanks for the concern. Regards Rajat Dhasmana On Sun, Feb 9, 2020, 12:20 AM Brian Rosmaita wrote: > I picked the day/time with the most votes (9/10, sorry Rajat), but as > gibi pointed out in the other thread, it may not have been obvious what > you actually voted for. The good news is that the selected day is late > in the week, so if you look at this time and say WTF?, let me know and > we can scramble to try to find another day/time (though hopefully > not--we're trying to find a time that works stretching from Minneapolis > to Shanghai). > > Meeting time: 13:30-14:00 UTC Thursday 13 February 2020 > Location: https://bluejeans.com/3228528973 > Topic: volume-local-cache specs: > cinder: https://review.opendev.org/#/c/684556/ > nova: https://review.opendev.org/#/c/689070/ > Etherpad: https://etherpad.openstack.org/p/volume-local-cache > > The meeting will be recorded. Feel free to leave comments on the > etherpad if you can't make it but want something addressed. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harishkumarivaturi at gmail.com Sat Feb 8 19:16:06 2020 From: harishkumarivaturi at gmail.com (HARISH KUMAR Ivaturi) Date: Sat, 8 Feb 2020 20:16:06 +0100 Subject: Regarding OpenStack Message-ID: Hi I am Harish Kumar, Master Student at BTH, Karlskrona, Sweden. I have started my Master thesis at BTH and my thesis topic is Performance evaluation of OpenStack with HTTP/3. My solutions will take some time (few months) and i would like to request you that could you add me as a contributor to your github repository , so after completing my thesis i could push my codes in that repository. You can contact me for further details. Thanks and Regards Harish Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Sat Feb 8 20:27:28 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 8 Feb 2020 21:27:28 +0100 Subject: Regarding OpenStack In-Reply-To: References: Message-ID: Hi Harish, Thanks for your interest, we use GitHub inside OpenStack simply to host a read-only mirror of our code, we use Gerrit to manage contributions, you can read more here: https://docs.openstack.org/contributors/ Regards, Mohammed On Sat, Feb 8, 2020 at 8:33 PM HARISH KUMAR Ivaturi wrote: > > Hi > I am Harish Kumar, Master Student at BTH, Karlskrona, Sweden. I have started my Master thesis at BTH and my thesis topic is Performance evaluation of OpenStack with HTTP/3. > My solutions will take some time (few months) and i would like to request you that could you add me as a contributor to your github repository , so after completing my thesis i could push my codes in that repository. > > You can contact me for further details. > > Thanks and Regards > Harish Kumar -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From mesutaygn at gmail.com Sat Feb 8 22:09:50 2020 From: mesutaygn at gmail.com (=?UTF-8?B?bWVzdXQgYXlnw7xu?=) Date: Sun, 9 Feb 2020 01:09:50 +0300 Subject: openstack-dpdk installation Message-ID: Hey, I am working over dpdk installation with openstack ocata release. How can I use the neutron-openvswitch-agent-dpdk or anything else instead of neutron-openvswitch-agent? Which can I choose the correct packages with os dpdk versions? I couldn't use the devstack versions!! Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Feb 9 00:39:42 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 08 Feb 2020 18:39:42 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> Message-ID: <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> ---- On Thu, 06 Feb 2020 18:23:51 -0600 Ghanshyam Mann wrote ---- > ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- > > ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > > > I am writing at top now for easy ready. > > > > > > * Gate status: > > > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for > > - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > > > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > > > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > > > the tempest tox env with master u-c needs to be fixed[2]. > > > > While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for > > stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. 
Only option seems > > to cap the upper-constraint like it was proposed by chandan[1]. > > > > NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain > > such cap only for Tempest & its plugins till stable/rocky EOL. > > Updates: > ====== > Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition > and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches > to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. All the fixes are merged now and good to recheck on stable branch jobs. I am working on bug#1862240 now. - https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) -gmann > > New bug: > ======= > Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 > > Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. > plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True > try to find plugins on py3 and fail. > The solution is to replace the 'all-plugin' tox env to 'all'. I am fixing it for designate. But due to grenade job nature, we need to > reverse the fixes here, first, merge the stable/stein and then stable/train. > > Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. > > -gmann > > > > > Background on this issue: > > -------------------------------- > > > > Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on > > stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with > > stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. > > > > I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. > > ------------------------------------------- > > > > [1] https://review.opendev.org/#/c/705685/ > > [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 > > [3] https://review.opendev.org/#/c/705089 > > [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml > > > > -gmann > > > > > > > - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > > > > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > > > - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > > > > > [1] https://review.opendev.org/#/c/705089/ > > > [2] https://review.opendev.org/#/c/705870/ > > > > > > -gmann > > > > > > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > > > Hello Everyone, > > > > > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > > > of 'EOLing python2 drama' in subject :). > > > > > > > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. 
neutron-lib 2.0.0 > > > > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > > > > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > > > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > > > > to install the latest neutron-lib and failing. > > > > > > > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > > > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > > > > from master and kepe testing py2 on stable bracnhes. > > > > > > > > > > We have two way to fix this: > > > > > > > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > > > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > > > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > > > > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > > > > I am trying this first[2] and testing[3]. > > > > > > > > > > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > > > > > > > Tried option#2: > > > > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > > > > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > > > > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > > > > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > > > > or distro-specific job like centos7 etc where we have < py3.6. > > > > > > > > Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our > > > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > > > Testing your cloud with the latest Tempest is the best possible way. > > > > > > > > Going with option#1: > > > > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > > > > for all possible distro/py version. > > > > > > > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > > > > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > > > > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > > > > on >py3.6 env(venv or separate node). > > > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > > > > > 2.Modify Tempest tox env basepython to py3 > > > > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > > > > like fedora or future distro > > > > *Patch is up- https://review.opendev.org/#/c/704688/2 > > > > > > > > 3. 
Use compatible Tempest & its plugin tag for distro having > > > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > > > > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > > > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > > > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > > > > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > > > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > > > > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > > > > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > > > > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > > > in-tree plugins for their stable branch testing. > > > > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > > > > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > > > > in neutron-vpnaas case. This will be easy for future maintenance also. > > > > > > > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. > > > > > > > > [1] https://review.opendev.org/#/c/703476/ > > > > [2] https://review.opendev.org/#/c/703011/ > > > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > > > > > -gmann > > > > > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > > > [2] https://review.opendev.org/#/c/703011/ > > > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Sun Feb 9 00:39:50 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 08 Feb 2020 18:39:50 -0600 Subject: [all] Gate status: Stable/ocata|pike|queens|rocky is broken: Avoid recheck In-Reply-To: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> References: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> Message-ID: <170276364b7.ec3c9d0f427418.6540164577624982390@ghanshyammann.com> All the stable branch gate is up now, you can recheck. Keep reporting the bug on #openstack-qa if you find new one about py2 drop things. -gmann ---- On Wed, 05 Feb 2020 09:15:58 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > Stable/ocata, pike, queens, rocky gate is broken now due to Temepst dependency require >=py3.6. I summarized the situation in ML[1]. > > Do not recheck on failed patches of those branches until the job is explicitly disabling Tempest. Fixes are in progress, I will update the status here once fixes are merged. 
> > [1]http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012371.html > > -gmann > From skaplons at redhat.com Sun Feb 9 09:16:04 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 9 Feb 2020 10:16:04 +0100 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> Message-ID: <23DBE6DD-F6DA-4DEC-9C64-BF98558965BA@redhat.com> Hi, > On 9 Feb 2020, at 01:39, Ghanshyam Mann wrote: > > ---- On Thu, 06 Feb 2020 18:23:51 -0600 Ghanshyam Mann wrote ---- >> ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- >>> ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- >>>> I am writing at top now for easy ready. >>>> >>>> * Gate status: >>>> - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for >>> - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. >>>> - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. >>>> - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates >>>> the tempest tox env with master u-c needs to be fixed[2]. >>> >>> While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for >>> stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems >>> to cap the upper-constraint like it was proposed by chandan[1]. >>> >>> NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain >>> such cap only for Tempest & its plugins till stable/rocky EOL. >> >> Updates: >> ====== >> Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition >> and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches >> to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. > > All the fixes are merged now and good to recheck on stable branch jobs. I am working on bug#1862240 now. > > - https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) Thx. I see that neutron-tempest-plugin’s rocky jobs now works fine: https://review.opendev.org/#/c/706451/2 finally \o/ > > -gmann > >> >> New bug: >> ======= >> Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 >> >> Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. >> plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True >> try to find plugins on py3 and fail. >> The solution is to replace the 'all-plugin' tox env to 'all'. 
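(For illustration, that change in a job definition boils down to switching the tox invocation; the test regex below is only a placeholder, not taken from this thread:

    # deprecated env, relies on the system site-packages to find the plugins:
    tox -e all-plugin -- <plugin-test-regex>
    # replacement, runs against the plugins installed into Tempest's own venv:
    tox -e all -- <plugin-test-regex>

)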
I am fixing it for designate. But due to grenade job nature, we need to >> reverse the fixes here, first, merge the stable/stein and then stable/train. >> >> Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. >> >> -gmann >> >>> >>> Background on this issue: >>> -------------------------------- >>> >>> Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on >>> stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with >>> stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. >>> >>> I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. >>> ------------------------------------------- >>> >>> [1] https://review.opendev.org/#/c/705685/ >>> [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 >>> [3] https://review.opendev.org/#/c/705089 >>> [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml >>> >>> -gmann >>> >>> >>>> - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. >>>> >>>> * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED >>>> - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. >>>> >>>> [1] https://review.opendev.org/#/c/705089/ >>>> [2] https://review.opendev.org/#/c/705870/ >>>> >>>> -gmann >>>> >>>> ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- >>>>> ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- >>>>>> Hello Everyone, >>>>>> >>>>>> This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement >>>>>> of 'EOLing python2 drama' in subject :). >>>>>> >>>>>> neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 >>>>>> is py3 only and so does u-c on the master has been updated to 2.0.0. >>>>>> >>>>>> All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin >>>>>> is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying >>>>>> to install the latest neutron-lib and failing. >>>>>> >>>>>> This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the >>>>>> py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop >>>>>> from master and kepe testing py2 on stable bracnhes. >>>>>> >>>>>> We have two way to fix this: >>>>>> >>>>>> 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. >>>>>> For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and >>>>>> use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. >>>>>> >>>>>> 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. >>>>>> I am trying this first[2] and testing[3]. 
>>>>>> >>>>> >>>>> I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. >>>>> >>>>> Tried option#2: >>>>> We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the >>>>> bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro >>>>> job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 >>>>> and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial >>>>> or distro-specific job like centos7 etc where we have < py3.6. >>>>> >>>>> Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our >>>>> CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. >>>>> Testing your cloud with the latest Tempest is the best possible way. >>>>> >>>>> Going with option#1: >>>>> IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working >>>>> for all possible distro/py version. >>>>> >>>>> 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). >>>>> * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. >>>>> * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest >>>>> on >py3.6 env(venv or separate node). >>>>> * Patch is up - https://review.opendev.org/#/c/704840/ >>>>> >>>>> 2.Modify Tempest tox env basepython to py3 >>>>> * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 >>>>> like fedora or future distro >>>>> *Patch is up- https://review.opendev.org/#/c/704688/2 >>>>> >>>>> 3. Use compatible Tempest & its plugin tag for distro having >>>> * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. >>>>> * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to >>>>> that branch can be used. For example Tempest 19.0.0 for rocky[3]. >>>>> * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ >>>>> >>>>> 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): >>>>> We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought >>>>> this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky >>>>> jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which >>>>> moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using >>>>> in-tree plugins for their stable branch testing. >>>>> We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these >>>>> stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins >>>>> in neutron-vpnaas case. This will be easy for future maintenance also. 
>>>>> >>>>> Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. >>>>> >>>>> [1] https://review.opendev.org/#/c/703476/ >>>>> [2] https://review.opendev.org/#/c/703011/ >>>>> [3] https://releases.openstack.org/rocky/#rocky-tempest >>>>> >>>>> -gmann >>>>> >>>>>> [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 >>>>>> [2] https://review.opendev.org/#/c/703011/ >>>>>> [3] https://review.opendev.org/#/c/703012/ >>>>>> >>>>>> >>>>>> -gmanne — Slawek Kaplonski Senior software engineer Red Hat From smooney at redhat.com Sun Feb 9 16:56:35 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 09 Feb 2020 16:56:35 +0000 Subject: openstack-dpdk installation In-Reply-To: References: Message-ID: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > Hey, > > I am working over dpdk installation with openstack ocata release. > How can I use the neutron-openvswitch-agent-dpdk or anything else instead > of neutron-openvswitch-agent? in octa you can use the standared neutorn openvswitch agent with ovs-dpdk the only config option you need to set is [ovs]/datapath_type=netdev in the ml2_conf.ini on each of the compute nodes. https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 addtionally when installing ovs you need to configure it for dpdk. most distros now compile in support for dpdk into ovs so all you have to do to use it is configure it rahter then recompile ovs or install an addtional package. there is some upstream docs on how to do that here http://docs.openvswitch.org/en/latest/intro/install/dpdk/ the clip note version is you need to allcoated hugepages on the host for ovs-dpdk to use and addtional hugepages for your vms. then you need to bind the nic you intend to use with dpdk to the vfio-pci driver. next you need to define some config options to test ovs about that and what cores dpdk should use. sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk- init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ other_config:dpdk-mem-channels=4 other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage- dir=$OVS_HUGEPAGE_MOUNT \ other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist " [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock- dir=$OVS_VHOST_USER_SOCKET_DIR finally when you create the ovs bridges you need to set them to ues the netdev datapath although neutron will also do this its just a good habit to do it when first creating them once done you can add the physical nics to the br-ex or your provider bridge as type dpdk. ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic type=dpdk options:dpdk-devargs=$addr with all that said that is how you install this manually if you are compiling form source. i would recomend looking at your openstack installer of choice to see if they have support or your disto vendors documentation as many installers have native support for ovs-dpdk and will automate it for you. > > Which can I choose the correct packages with os dpdk versions? > > > > I couldn't use the devstack versions!! 
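(For illustration of the host preparation steps described above -- allocating hugepages and binding the NIC to vfio-pci -- a rough sketch; the page count and PCI address are placeholders, and dpdk-devbind.py is the binding script shipped with DPDK (under usertools/ in recent releases):

    sysctl -w vm.nr_hugepages=8192                  # reserve 2M hugepages for OVS and the guests (example count)
    modprobe vfio-pci                               # userspace IO driver the NIC will be bound to
    dpdk-devbind.py --status                        # find the PCI address of the NIC
    dpdk-devbind.py --bind=vfio-pci 0000:05:00.0    # placeholder PCI address

)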
> > Best regards From gmann at ghanshyammann.com Sun Feb 9 17:56:12 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Feb 2020 11:56:12 -0600 Subject: [nova] nova stable pike|queens|rocky is still failing on Tempest u-c: Do not recheck Message-ID: <1702b18377c.1077afa08433529.9050049755587344932@ghanshyammann.com> After fixing the stable/pike|queens|rocky integrated jobs[1], on using the stable u-c for pinned Tempest run, nova-live-migration fail on these stable branch on nova. I have proposed the fixes[2], wait for recheck till those are merged. [1] https://review.opendev.org/#/q/topic:fix-stable-gate+status:merged [2] https://review.opendev.org/#/q/topic:fix-stable-gate+status:open+projects:openstack/nova -gmann From mesutaygn at gmail.com Sun Feb 9 18:47:20 2020 From: mesutaygn at gmail.com (=?UTF-8?B?bWVzdXQgYXlnw7xu?=) Date: Sun, 9 Feb 2020 21:47:20 +0300 Subject: openstack-dpdk installation In-Reply-To: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> References: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> Message-ID: Hi Sean I did not prefer the vendor distro. I installed the native ocata release from scrath which i configured on myself. I downloaded dpdk relase from source and compiled it and also binded the interface with dpdk through ovs. While i was installing the neutron-ovs-agent, i couldnt start ovs db with neutron-ovs. Neutron ovs config files is not working with compiled ovs. How can i do that? Thank you for helping On Sun, 9 Feb 2020, 19:56 Sean Mooney, wrote: > On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > > Hey, > > > > I am working over dpdk installation with openstack ocata release. > > How can I use the neutron-openvswitch-agent-dpdk or anything else > instead > > of neutron-openvswitch-agent? > in octa you can use the standared neutorn openvswitch agent with ovs-dpdk > the only config option you need to set is [ovs]/datapath_type=netdev in > the ml2_conf.ini on each of the compute nodes. > > > > https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 > > addtionally when installing ovs you need to configure it for dpdk. > most distros now compile in support for dpdk into ovs so all you have to > do to use it is configure it rahter then > recompile ovs or install an addtional package. > > there is some upstream docs on how to do that here > http://docs.openvswitch.org/en/latest/intro/install/dpdk/ > > the clip note version is you need to allcoated hugepages on the host for > ovs-dpdk to use and addtional hugepages for > your vms. then you need to bind the nic you intend to use with dpdk to the > vfio-pci driver. > next you need to define some config options to test ovs about that and > what cores dpdk should use. > > sudo ovs-vsctl --no-wait set Open_vSwitch . > other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk- > init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ > other_config:dpdk-mem-channels=4 > other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage- > dir=$OVS_HUGEPAGE_MOUNT \ > other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist > " > [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait > set Open_vSwitch . 
other_config:vhost-sock- > dir=$OVS_VHOST_USER_SOCKET_DIR > > finally when you create the ovs bridges you need to set them to ues the > netdev datapath although neutron will also do > this its just a good habit to do it when first creating them once done you > can add the physical nics to the br-ex or > your provider bridge as type dpdk. > > ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic > type=dpdk options:dpdk-devargs=$addr > > with all that said that is how you install this manually if you are > compiling form source. > i would recomend looking at your openstack installer of choice to see if > they have support or your disto vendors > documentation as many installers have native support for ovs-dpdk and will > automate it for you. > > > > > Which can I choose the correct packages with os dpdk versions? > > > > > > > > I couldn't use the devstack versions!! > > > > Best regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Sun Feb 9 19:24:12 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 09 Feb 2020 19:24:12 +0000 Subject: openstack-dpdk installation In-Reply-To: References: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> Message-ID: On Sun, 2020-02-09 at 21:47 +0300, mesut aygün wrote: > Hi Sean > > > > I did not prefer the vendor distro. I installed the native ocata release > from scrath which i configured on myself. > > > I downloaded dpdk relase from source and compiled it and also binded the > interface with dpdk through ovs. > > While i was installing the neutron-ovs-agent, i couldnt start ovs db with > neutron-ovs. Neutron ovs config files is not working with compiled ovs. How > can i do that? > i would look at https://opendev.org/x/networking-ovs-dpdk for reference. specifcily the devstack plugin. i have not had time to update this problely for a year or two si it still has some legacy hacks like running ovs-dpdk under screen to mimic how devstack used to run services. i need to rewrite that at some point to use systemd but that plugins automates the process of compileing and instaling ovs-dpdk and the relevent ovsdb configuration and neutron/nova config file updates. if you are having issue with starting the ovs-db it is unrelated to openstack. but if you ment to say you wer having issue start the ovs neutron agent because it could not connect to the ovs db then its like that neutron is trying to connect over tcp and you have not exposed the ovs db over tcp as teh default is to use a unix socket, or vise vera the neutron agent can be configured to use ovs-vsctl instead of the python binding and connect via the unix socket and you may have only exposed it via tcp. i wont really have much time to debug this with you unfortunetly but hopefully networking-ovs-dpdk will help you figure out what you missed. i would focuse on ensureing ovs-dpdk is working propperly first before tryign to start teh neutron agent. > Thank you for helping > > > > > On Sun, 9 Feb 2020, 19:56 Sean Mooney, wrote: > > > On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > > > Hey, > > > > > > I am working over dpdk installation with openstack ocata release. > > > How can I use the neutron-openvswitch-agent-dpdk or anything else > > > > instead > > > of neutron-openvswitch-agent? > > > > in octa you can use the standared neutorn openvswitch agent with ovs-dpdk > > the only config option you need to set is [ovs]/datapath_type=netdev in > > the ml2_conf.ini on each of the compute nodes. 
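(As a concrete illustration of that one option, the ml2_conf.ini fragment on a compute node would look roughly like the following; the bridge mapping line and the names in it are only examples, not taken from this thread:

    [ovs]
    datapath_type = netdev
    bridge_mappings = physnet1:br-physnet1

)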
> > > > > > > > https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 > > > > addtionally when installing ovs you need to configure it for dpdk. > > most distros now compile in support for dpdk into ovs so all you have to > > do to use it is configure it rahter then > > recompile ovs or install an addtional package. > > > > there is some upstream docs on how to do that here > > http://docs.openvswitch.org/en/latest/intro/install/dpdk/ > > > > the clip note version is you need to allcoated hugepages on the host for > > ovs-dpdk to use and addtional hugepages for > > your vms. then you need to bind the nic you intend to use with dpdk to the > > vfio-pci driver. > > next you need to define some config options to test ovs about that and > > what cores dpdk should use. > > > > sudo ovs-vsctl --no-wait set Open_vSwitch . > > other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk- > > init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ > > other_config:dpdk-mem-channels=4 > > other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage- > > dir=$OVS_HUGEPAGE_MOUNT \ > > other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist > > " > > [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait > > set Open_vSwitch . other_config:vhost-sock- > > dir=$OVS_VHOST_USER_SOCKET_DIR > > > > finally when you create the ovs bridges you need to set them to ues the > > netdev datapath although neutron will also do > > this its just a good habit to do it when first creating them once done you > > can add the physical nics to the br-ex or > > your provider bridge as type dpdk. > > > > ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic > > type=dpdk options:dpdk-devargs=$addr > > > > with all that said that is how you install this manually if you are > > compiling form source. > > i would recomend looking at your openstack installer of choice to see if > > they have support or your disto vendors > > documentation as many installers have native support for ovs-dpdk and will > > automate it for you. > > > > > > > > Which can I choose the correct packages with os dpdk versions? > > > > > > > > > > > > I couldn't use the devstack versions!! > > > > > > Best regards > > > > From gmann at ghanshyammann.com Sun Feb 9 22:55:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Feb 2020 16:55:47 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-14 Update (1 week left to complete) Message-ID: <1702c2a80ef.12943337f436559.6195463107891828702@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-14 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * 1 week left to finish the work. * This week was very dramatic and full of failure as expected or more than that :). ** With that, I am expecting more issues to occur, dropping py2 from every repo asap is suggested so that we can stabilize the things well before m-3. * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. * Oslo dropped the py2, which caused multiple failures on 1. tempest testing on stable branches 2. projects still running py2 jobs on the master. 
** we did not opt to cap the oslo lib for References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Wed, Feb 5, 2020 at 5:36 AM Mark Goddard wrote: > > On Sun, 2 Feb 2020 at 21:06, Neal Gompa wrote: > > > > On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > > > > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > > > wrote: > > > > > > > > > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > > > >> > > > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > > > >> > > > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > > > >> > wrote: > > > >> > > > > > >> > > I know it was for masakari. > > > >> > > Gaëtan had to grab crmsh from opensuse: > > > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > > >> > > > > > >> > > -yoctozepto > > > >> > > > > >> > Thanks Wes for getting this discussion going. I've been looking at > > > >> > CentOS 8 today and trying to assess where we are. I created an > > > >> > Etherpad to track status: > > > >> > https://etherpad.openstack.org/p/kolla-centos8 > > > >> > > > > > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > > > > > I found them, thanks. > > > > > > > > > > >> > > > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > > > >> code when installing packages. It often happens on the rabbitmq and > > > >> grafana images. There is a prompt about importing GPG keys prior to > > > >> the error. > > > >> > > > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > > > >> > > > >> Related bug report? https://github.com/containers/libpod/issues/4431 > > > >> > > > >> Anyone familiar with it? > > > >> > > > > > > > > Didn't know about this issue. > > > > > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > > > > > It seems to be due to the use of a GPG check on the repo (as opposed > > > to packages). DNF doesn't use keys imported via rpm --import for this > > > (I'm not sure what it uses), and prompts to add the key. This breaks > > > without a terminal. More explanation here: > > > https://review.opendev.org/#/c/704782. > > > > > > > librepo has its own keyring for repo signature verification. > > Thanks Neal. Any pointers on how to add keys to it? > There's no direct way other than to make sure that your repo files include the GPG public key in the gpgkey= entry, and that repo_gpgcheck=1 is enabled. DNF will automatically tell librepo to do the right thing here. Ideally, in the future, all of it will use the rpm keyring, but it doesn't for now... -- 真実はいつも一つ!/ Always, there's only one truth! From Tushar.Patil at nttdata.com Mon Feb 10 08:40:39 2020 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Mon, 10 Feb 2020 08:40:39 +0000 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? In-Reply-To: <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> References: , <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> Message-ID: Hi Zane, Thank you very much for the detailed explanation. >> Obviously if you scale down the AutoScalingGroup and then scale it back >> up again, you'll end up with the different grandchild stack there. Understood. 
In such case, tacker handles the scaling up/down alarms so it can query parent stack to get nested child stack ids along with it's resources. >>The only other time it gets replaced is if you use the "mark unhealthy" >> command on the template resource in the child stack (i.e. the >> autoscaling group stack), or on the AutoScalingGroup resource itself in >> the parent stack. That's not the case as of now. So I'm not worried about it at the moment. Regards, tpatil ________________________________________ From: Zane Bitter Sent: Friday, February 7, 2020 5:40 AM To: openstack-discuss at lists.openstack.org Subject: Re: [heat][tacker] After Stack is Created, will it change nested stack Id? Hi Tushar, Great question. On 4/02/20 2:53 am, Patil, Tushar wrote: > Hi All, > > In tacker project, we are using heat API to create stack. > Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. OK, so the scaled unit is a stack containing two servers and two ports. > So internally heat will create two nested stacks and add following resources to it:- > > child stack1 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port > > child stack2 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port In fact, Heat will create 3 nested stacks - one child stack for the AutoScalingGroup that contains two Template resources, which each have a grandchild stack (the ones you list above). I'm sure you know this, but I mention it because it makes what I'm about to say below clearer. > Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. > > Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. That's entirely reasonable. > My question is after the stack is created for the first time, will it ever change the nested child stack id? Short answer: no. Long answer: yes ;) In general normal updates and such will never result in the (grand)child stack ID changing. Even if a resource inside the stack fails (so the stack gets left in UPDATE_FAILED state), the next update will just try to update it again in-place. Obviously if you scale down the AutoScalingGroup and then scale it back up again, you'll end up with the different grandchild stack there. The only other time it gets replaced is if you use the "mark unhealthy" command on the template resource in the child stack (i.e. the autoscaling group stack), or on the AutoScalingGroup resource itself in the parent stack. If you do this a whole new replacement (grand)child stack will get replaced. Marking only the resources within the grandchild stack (e.g. VDU1) will *not* cause the stack to be replaced, so you should be OK. In code: https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/stack_resource.py#L106-L135 Hope that helps. Feel free to ask if you need more clarification. cheers, Zane. Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. 
If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From ileixe at gmail.com Mon Feb 10 09:36:08 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Mon, 10 Feb 2020 18:36:08 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? Message-ID: Hi stacker, I'm trying to use ironic routed network from stein. I do not sure how the normal workflow works in nova/neutron/ironic side for routed network, so I sent this mail to get more information. >From what I understand for ironic routed network, what I did - Enable segment plugin in Neutron - Add ironic 'node' - Add ironic port with physical network - network-baremetal plugin reports neutron from all ironic nodes. - It sent 'physical_network' and 'ironic node uuid' to Neutron - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') - Add segment for the subnet. At last step, I encountered the strange. In detail, - Neutron subnet update callback call nova inventory registration - Neutron ask placement to create resource provider for segment - Neutron ask nova to create aggregate for segment - Neutron request placement to associate nova aggregate to resource provider. - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny to register the host to aggregate emitting the exception like below Returning 404 to user: Compute host 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ What's strange for me is why neutron ask 'ironic node uuid' for host when nova aggregate only look for host from HostMapping which came from 'host' in compute_nodes. I could not find the code related to how 'ironic node uuid' can be registered in nova aggregate. Please someone who knows what's going on shed light on me. Thanks. From thierry at openstack.org Mon Feb 10 10:05:17 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 10 Feb 2020 11:05:17 +0100 Subject: [largescale-sig] Next meeting: Feb 12, 9utc Message-ID: Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, Feb 12 at 9 UTC[1] in #openstack-meeting on IRC: [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200212T09 As always, the agenda for our meeting is available at: https://etherpad.openstack.org/p/large-scale-sig-meeting Feel free to add topics to it. We'll start with a show-and-tell by Stig (oneswig), followed by a status update on the various TODOs we had from last meeting, in particular: - Reviewing oslo.metrics draft at https://review.opendev.org/#/c/704733/ and comment so that we can iterate on it - Reading page on golden signals at https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals Talk to you all on Wednesday, -- Thierry Carrez From skaplons at redhat.com Mon Feb 10 11:15:19 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 10 Feb 2020 12:15:19 +0100 Subject: [neutron] 11.02.2020 team meeting cancelled Message-ID: Hi, I will not be able to chair our tomorrow’s team meeting. So lets cancel it and see You all on Monday 17.02.2020. 
— Slawek Kaplonski Senior software engineer Red Hat From katonalala at gmail.com Mon Feb 10 12:10:12 2020 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 10 Feb 2020 13:10:12 +0100 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Hi, Actually I know only the "ironicless" usecases of routed provider networks. As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a bug with perhaps Ironic usecases: https://review.opendev.org/696600 the bug: https://bugs.launchpad.net/neutron/+bug/1853840 The solution was to introduce a new config option called resource_provider_hypervisors ( https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) Without having experience with ironic based on your description your problem with routed provider nets is similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. Regards Lajos 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): > Hi stacker, > > I'm trying to use ironic routed network from stein. > > I do not sure how the normal workflow works in nova/neutron/ironic > side for routed network, so I sent this mail to get more information. > From what I understand for ironic routed network, what I did > > - Enable segment plugin in Neutron > - Add ironic 'node' > - Add ironic port with physical network > - network-baremetal plugin reports neutron from all ironic nodes. > - It sent 'physical_network' and 'ironic node uuid' to Neutron > - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') > - Add segment for the subnet. > > At last step, I encountered the strange. In detail, > - Neutron subnet update callback call nova inventory registration > - Neutron ask placement to create resource provider for segment > - Neutron ask nova to create aggregate for segment > - Neutron request placement to associate nova aggregate to resource > provider. > - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova > aggregate > > Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny > to register the host to aggregate emitting the exception like below > > Returning 404 to user: Compute host > 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ > > What's strange for me is why neutron ask 'ironic node uuid' for host > when nova aggregate only look for host from HostMapping which came > from 'host' in compute_nodes. > > I could not find the code related to how 'ironic node uuid' can be > registered in nova aggregate. > > > Please someone who knows what's going on shed light on me. > > Thanks. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmail.com Mon Feb 10 15:41:02 2020 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Mon, 10 Feb 2020 09:41:02 -0600 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <80d27a9d-3a38-99c8-6d51-ceeacba58b55@gmail.com> On 2/10/20 9:08 AM, zuul at openstack.org wrote: > Build failed. 
> > - tag-releases https://zuul.opendev.org/t/openstack/build/8796e6c046a84688b3dd9dc756f4a15d : FAILURE in 6m 13s > - publish-tox-docs-static https://zuul.opendev.org/t/openstack/build/None : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures This failure was actually on the removal of a deliverable file for something that is no longer active. This can be safely ignored. Sean From nate.johnston at redhat.com Mon Feb 10 16:23:53 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 10 Feb 2020 11:23:53 -0500 Subject: [neutron] Bug deputy report Feb 3-10 Message-ID: <20200210162353.fm22k6t5c2byztn5@firewall> Nate bug deputy notes - 2020-02-03 to 2020-02-10 ------------------------------------------------ It was a very quiet week - almost half of these were filed today, this report was looking very short as recently as yesterday. Thanks to amotoki for the next shift as bug deputy. Untriaged: - "Neutron try to register invalid host to nova aggregate for ironic routed network" * URL: https://bugs.launchpad.net/bugs/1862611 * Version: stable/stein * lajoskatona responded to the bug but I was not able to reproduce it; I don't have an ironic setup to verify the bug report. Critical: - "Neutron incorrectly selects subnet" * URL: https://bugs.launchpad.net/bugs/1862374 * Marked Incomplete by haleyb but there is still a significant chance there is a real issue here, it might just be in the SDK as opposed to neutron. Waiting for reported to respond. High: - "Fullstack tests failing due to "hang" neutron-server process" (slaweq) * URL: https://bugs.launchpad.net/bugs/1862178 * Unassigned - "Fullstack tests failing due to problem with connection to the fake placement service" * URL: https://bugs.launchpad.net/bugs/1862177 * Assigned: lajoskatona * Fix proposed: https://review.opendev.org/706500 - "Sometimes VMs can't get IP when spawned concurrently" * URL: https://bugs.launchpad.net/bugs/1862315 * Version: stable/stein * Assigned: obondarev - "[OVN] Reduce the number of tables watched by MetadataProxyHandler" * URL: https://bugs.launchpad.net/bugs/1862648 * Assigned: lucasgomes Medium: - "[OVN] functional test test_virtual_port_delete_parents is unstable" * URL: https://bugs.launchpad.net/bugs/1862618 * Assigned: sapana45 Invalid: - "placement in neutron_lib could not process keystone exceptions" * URL: https://bugs.launchpad.net/bugs/1862565 * Marked as duplicate of https://bugs.launchpad.net/neutron/+bug/1828543 (lajoskatona) RFE: - "Add RBAC for subnet pools" * URL: https://bugs.launchpad.net/bugs/1862032 * Cross-project with the nova team, nova has a patch up for their part. 
Last week's report by tidwellr: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012332.html From sshnaidm at redhat.com Mon Feb 10 16:47:45 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Mon, 10 Feb 2020 18:47:45 +0200 Subject: [ansible-sig][openstack-ansible][tripleo] A new meeting time and place for Openstack Ansible modules discussions Message-ID: Hi, all according to our poll about meeting time[1] the winner is: Tuesday 15.00 - 16.00 UTC (3.00 PM - 4.00 PM UTC) Please be aware that we meet in different IRC channel - #openstack-ansible-sig Thanks for voting and waiting for you tomorrow in #openstack-ansible-sig at 15.00 UTC [1] https://xoyondo.com/dp/ITMGRZSvaZaONcz -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 10 16:52:10 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Feb 2020 10:52:10 -0600 Subject: [oslo] PTG Attendance Message-ID: <5db3510a-eac4-9567-b80b-86a19634537e@nemebean.com> Hi, As discussed in the meeting this morning, we need to decide if we want to request a room at the Vancouver PTG. Since I'm not expecting to be there this time someone else would need to organize it. I realize travel plans are not finalized for most people, but if you expect to be there and would like to have formal Oslo discussions please let me know ASAP. Thanks. -Ben From Albert.Braden at synopsys.com Mon Feb 10 17:01:10 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 10 Feb 2020 17:01:10 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: References: Message-ID: Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see "Loaded 2 Fernet keys" every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Mon Feb 10 17:07:43 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 10 Feb 2020 18:07:43 +0100 Subject: [kuryr] macvlan driver looking for an owner In-Reply-To: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> References: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> Message-ID: <53fd84f1c90c5deac82340662f9b284257ba798f.camel@redhat.com> On Fri, 2020-02-07 at 16:25 +0100, Roman Dobosz wrote: > Hi, > > Recently we migrated most of the drivers and handler to OpenStackSDK, > instead of neutron-client[1], and have a plan to drop neutron-client > usage. > > One of the drivers - MACVLAN based interfaces for nested containers - > wasn't migrated due to lack of confidence backed up with sufficient > tempest tests. 
Therefore we are looking for a maintainer, who will > take care about both - migration to the openstacksdk (which I can > help with) and to provide appropriate environment and tests. > > In case there is no interest on continuing support for this driver we > will deprecate it and remove from the source tree possibly in this > release. I think I'll give a shot and will try converting it to openstacksdk. We depend on revisions [2] feature and seems like openstacksdk lacks support for it, but maybe I'll be able to solve this. If I'll fail, I guess the driver will be on it's way out. [2] https://docs.openstack.org/api-ref/network/v2/#revisions > [1] https://review.opendev.org/#/q/topic:bp/switch-to-openstacksdk > From mdulko at redhat.com Mon Feb 10 17:12:55 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 10 Feb 2020 18:12:55 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: References: Message-ID: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> A little bit overdue, but I now added Maysa to the group in Gerrit. Congratulations! On Mon, 2020-02-03 at 13:15 +0100, Luis Tomas Bolivar wrote: > Truly deserved! +2!! > > She has been doing an amazing work both implementing new features as well as chasing down bugs. > > On Mon, Feb 3, 2020 at 12:58 PM wrote: > > Hi, > > > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > > project. > > > > Maysa shown numerous examples of diligent and valuable work in terms of > > code contribution (e.g. in network policy support), project maintenance > > and reviews [1]. > > > > Please express support or objections by replying to this email. > > Assuming that there will be no pushback, I'll proceed with granting > > Maysa core powers by the end of this week. > > > > Thanks, > > Michał > > > > [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > > > > From mdemaced at redhat.com Mon Feb 10 17:22:51 2020 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 10 Feb 2020 18:22:51 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> References: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> Message-ID: Thank you all. I'll do my best!! Maysa. On Mon, Feb 10, 2020 at 6:17 PM wrote: > A little bit overdue, but I now added Maysa to the group in Gerrit. > Congratulations! > > On Mon, 2020-02-03 at 13:15 +0100, Luis Tomas Bolivar wrote: > > Truly deserved! +2!! > > > > She has been doing an amazing work both implementing new features as > well as chasing down bugs. > > > > On Mon, Feb 3, 2020 at 12:58 PM wrote: > > > Hi, > > > > > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > > > project. > > > > > > Maysa shown numerous examples of diligent and valuable work in terms of > > > code contribution (e.g. in network policy support), project maintenance > > > and reviews [1]. > > > > > > Please express support or objections by replying to this email. > > > Assuming that there will be no pushback, I'll proceed with granting > > > Maysa core powers by the end of this week. > > > > > > Thanks, > > > Michał > > > > > > [1] > https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eandersson at blizzard.com Mon Feb 10 17:54:22 2020 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 10 Feb 2020 17:54:22 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: References: , Message-ID: <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> Database Connections are kept open for one hour. Timeouts on haproxy needs to reflect that. Set timeout to at least 60 minutes. Sent from my iPhone On Feb 10, 2020, at 9:04 AM, Albert Braden wrote:  Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see “Loaded 2 Fernet keys” every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Feb 10 19:16:22 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 19:16:22 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance Message-ID: Hi all, networking-calico is the code that integrates Project Calico [1] with Neutron. It has been an OpenStack project for several years, but we, i.e. its developers [2], would like now to remove it from OpenStack governance and instead manage it like the other Project Calico projects under https://github.com/projectcalico/. In case anyone has any concerns about this, please let me know very soon! Please could infra folk confirm that https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project is the correct procedure for winding up the OpenStack side of this transition? Many thanks, Neil [1] https://www.projectcalico.org/ [2] That means mostly me, with some help from my colleagues here at Tigera. We do have some other contributors but I will take care that they are also on board with this change. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Feb 10 19:41:17 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Feb 2020 19:41:17 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200210194116.kmwlqww52lp6pi72@yuggoth.org> On 2020-02-10 19:16:22 +0000 (+0000), Neil Jerram wrote: [...] > Please could infra folk confirm that > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > is the correct procedure for winding up the OpenStack side of this > transition? [...] Yes, that's our recommended procedure for closing down development on a repository in the OpenDev infrastructure. Make sure your README.rst in step 3 includes a reference to the new location for the benefit of folks who arrive at the old copy. 
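For illustration only (this is not a mandated template -- adapt the wording and links to the project), the note left in README.rst could look something like the sketch below; the github.com/projectcalico URL is just an assumed example of the new home, not a confirmed location:

    This project is no longer maintained in OpenDev.

    Development of networking-calico has moved to the Project Calico
    organisation, e.g. https://github.com/projectcalico/networking-calico
    (illustrative URL -- point this at the real new location).

    For any further questions, please email
    openstack-discuss at lists.openstack.org.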
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Mon Feb 10 19:54:40 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Mon, 10 Feb 2020 14:54:40 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: what impact on users of this code do you foresee? What are the reasons to remove it from openstack governance? On Mon, Feb 10, 2020 at 2:24 PM Neil Jerram wrote: > Hi all, > > networking-calico is the code that integrates Project Calico [1] with > Neutron. It has been an OpenStack project for several years, but we, i.e. > its developers [2], would like now to remove it from OpenStack governance > and instead manage it like the other Project Calico projects under > https://github.com/projectcalico/. > > In case anyone has any concerns about this, please let me know very soon! > > Please could infra folk confirm that > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > is the correct procedure for winding up the OpenStack side of this > transition? > > Many thanks, > Neil > > [1] https://www.projectcalico.org/ > [2] That means mostly me, with some help from my colleagues here at > Tigera. We do have some other contributors but I will take care that they > are also on board with this change. > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Feb 10 20:00:41 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 20:00:41 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: On Mon, Feb 10, 2020 at 7:54 PM Chris Morgan wrote: > what impact on users of this code do you foresee? What are the reasons to > remove it from openstack governance? > Thanks for asking. I would say somewhat easier maintenance and feature velocity (when needed), as the development processes will be aligned with those of other Calico/Tigera components. But please be assured that networking-calico and Calico for OpenStack will continue to be maintained and developed. Just with a different set of (still open) processes. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Feb 10 20:10:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Feb 2020 20:10:41 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Actually, it looks like it's been officially removed from Neutron since Ocata, when https://review.openstack.org/399320 merged (so nearly 3.5 years already). As such it's just the code hosting which needs to be retired, and there doesn't seem to be any governance follow-up needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From neil at tigera.io Mon Feb 10 21:32:17 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 21:32:17 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> References: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Message-ID: On Mon, Feb 10, 2020 at 8:11 PM Jeremy Stanley wrote: > Actually, it looks like it's been officially removed from Neutron > since Ocata, when https://review.openstack.org/399320 merged (so > nearly 3.5 years already). As such it's just the code hosting which > needs to be retired, and there doesn't seem to be any governance > follow-up needed. > Thanks Jeremy, yes, that's correct, networking-calico has been a 'big tent' project but not 'Neutron stadium' for the last few years. I'll take care to leave a good pointer in the README.rst, as you advised. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 10 22:52:11 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Feb 2020 16:52:11 -0600 Subject: [oslo] Meeting Time Poll Message-ID: <7be71b4b-7c5f-3d99-dcff-65cb629b77a7@nemebean.com> Hello again, We have a few regular attendees of the Oslo meeting who have conflicts with the current meeting time. As a result, we would like to find a new time to hold the meeting. I've created a Doodle poll[0] for everyone to give their input on times. It's mostly limited to times that reasonably overlap the working day in the US and Europe since that's where most of our attendees are located (yes, I know, that's a self-fulfilling prophecy). If you attend the Oslo meeting, please fill out the poll so we can hopefully find a time that works better for everyone. Thanks! -Ben /me finally checks this one off the action items for next week :-) 0: https://doodle.com/poll/zmyhrhewtes6x9ty From amy at demarco.com Mon Feb 10 23:03:17 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 10 Feb 2020 17:03:17 -0600 Subject: UC Elections Reminder Message-ID: Hey All! Just a reminder that the nomination period for the upcoming User Committee elections is open. See the original email below! Thanks, Amy (spotz) The nomination period for the February User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the two sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations can be made by sending an email to the user-committee at lists.openstack.org mailing-list[0], with the subject: “UC candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. Criteria for AUC status can be found at https://superuser.openstack.org/articles/auc-community/. If you are still not sure of your status and would like to verify in advance please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) as we are serving as the Election Officials. Thanks, Amy Marrich (spotz) 0 - Please make sure you are subscribed to this list before sending in your nomination. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gfidente at redhat.com Tue Feb 11 00:22:18 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 11 Feb 2020 01:22:18 +0100 Subject: [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: Message-ID: On 1/24/20 7:20 PM, Wesley Hayutin wrote: > Greetings, > > I know the ceph repo is in progress. the ceph [1] package and storage8-centos-nautilus-* repo [2] are now done not installable yet because we can't build the centos-release-ceph-nautilus package for el8 yet, but we're trying to solve that in centos-devel [3] 1. https://cbs.centos.org/koji/buildinfo?buildID=28564 2. https://cbs.centos.org/koji/builds?tagID=1891 3. https://lists.centos.org/pipermail/centos-devel/2020-February/036544.html -- Giulio Fidente GPG KEY: 08D733BA From ileixe at gmail.com Tue Feb 11 02:13:12 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Tue, 11 Feb 2020 11:13:12 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: The spec you sent is similar to my problem in that agent sent the 'host' which does not compatible with nova 'host'. But one more things which confuse me is for routed network (ironicless / + ironic) where the resource providers "IPV4_ADDRESS" used in nova side? I could not find any code in nova/placement and from the past conversation (https://review.opendev.org/#/c/656885/), now I suspect it does not implemented. How then nova choose right compute node in a segment? Am i missing something or 'resource_provider_hypervisor' you mentioned are now used for general routed network? Thanks 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: > > Hi, > > Actually I know only the "ironicless" usecases of routed provider networks. > As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). > For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a > bug with perhaps Ironic usecases: > https://review.opendev.org/696600 > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 > > The solution was to introduce a new config option called resource_provider_hypervisors > (https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) > Without having experience with ironic based on your description your problem with routed provider nets is > similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. > > Regards > Lajos > > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): >> >> Hi stacker, >> >> I'm trying to use ironic routed network from stein. >> >> I do not sure how the normal workflow works in nova/neutron/ironic >> side for routed network, so I sent this mail to get more information. >> From what I understand for ironic routed network, what I did >> >> - Enable segment plugin in Neutron >> - Add ironic 'node' >> - Add ironic port with physical network >> - network-baremetal plugin reports neutron from all ironic nodes. >> - It sent 'physical_network' and 'ironic node uuid' to Neutron >> - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') >> - Add segment for the subnet. >> >> At last step, I encountered the strange. 
In detail, >> - Neutron subnet update callback call nova inventory registration >> - Neutron ask placement to create resource provider for segment >> - Neutron ask nova to create aggregate for segment >> - Neutron request placement to associate nova aggregate to resource provider. >> - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate >> >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny >> to register the host to aggregate emitting the exception like below >> >> Returning 404 to user: Compute host >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ >> >> What's strange for me is why neutron ask 'ironic node uuid' for host >> when nova aggregate only look for host from HostMapping which came >> from 'host' in compute_nodes. >> >> I could not find the code related to how 'ironic node uuid' can be >> registered in nova aggregate. >> >> >> Please someone who knows what's going on shed light on me. >> >> Thanks. >> From whayutin at redhat.com Tue Feb 11 05:24:25 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 Feb 2020 22:24:25 -0700 Subject: [tripleo] CI is RED py27 related Message-ID: Greetings, Most of the jobs went RED a few minutes ago. Again it's related to python27. Nothing is going to pass CI until this is fixed. See: https://etherpad.openstack.org/p/ruckroversprint21 We'll update the list when we have the required patches in. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Feb 10 21:24:11 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 Feb 2020 14:24:11 -0700 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> References: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Message-ID: Top post... OK.. so I'm going to propose we experiment with the following and if we don't like it, we can change it. I'm breaking the squads into two groups, active squads and moderately active squads. *My expectation is that the active squads are posting at the very least reviews that need attention.* #topic Active Squad status ci #link https://hackmd.io/IhMCTNMBSF6xtqiEd9Z0Kw?both validations #link https://etherpad.openstack.org/p/tripleo-validations-squad-status ceph-integration #link https://etherpad.openstack.org/p/tripleo-integration-squad-status transformation #link https://etherpad.openstack.org/p/tripleo-ansible-agenda mistral-to-ansible #link https://etherpad.openstack.org/p/tripleo-mistral-to-ansible I've added ironic integration to moderately active as it's a new request. I have no expectations for the moderately active bunch :) #topic Moderately Active Squads Ironic-integration https://etherpad.openstack.org/p/tripleo-ironic-integration-squad-status upgrade #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status edge #link https://etherpad.openstack.org/p/tripleo-edge-squad-status networking #link https://etherpad.openstack.org/p/tripleo-networking-squad-status Let's see how this plays out.. Thanks all!! On Wed, Feb 5, 2020 at 9:58 AM wrote: > Is there a need for Ironic integration one? > > > > *From:* Alan Bishop > *Sent:* Tuesday, February 4, 2020 2:20 PM > *To:* Francesco Pantano > *Cc:* John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks > *Subject:* Re: [tripleo] rework of triple squads and the tripleo mtg. 
> > > > [EXTERNAL EMAIL] > > > > On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: > > > > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > > > Gulio? Francesco? Alan? > > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > > > > "Ceph integration" makes the most sense to me, but I'm fine with just > naming it "Ceph" as we all know what that means. > > > > Alan > > > > > > > John > > > 6. others?? > > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! > > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > > -- > > Francesco Pantano > GPG KEY: F41BD75C > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Tue Feb 11 08:32:11 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 11 Feb 2020 09:32:11 +0100 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Hi, For routed prov. nets I am not sure how it was originally designed, and what worked at that time, but I have some insight to min guaranteed bandwidth feature. For that neutron (based on agent config) creates placement stuff like resource providers and inventories (there the available bandwidth is the thing saved to placement RP inventories). 
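(Purely as an illustration, not something the agents print themselves: assuming the osc-placement CLI plugin is installed, the providers and inventories reported this way can be inspected roughly like this, with <rp-uuid> being a placeholder for one of the agent-created resource providers:

    # list the resource providers that Neutron created in placement
    openstack resource provider list
    # show the inventories of one provider, e.g. the
    # NET_BW_EGR_KILOBIT_PER_SEC / NET_BW_IGR_KILOBIT_PER_SEC classes
    # used for minimum guaranteed bandwidth
    openstack resource provider inventory list <rp-uuid>
)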
When a port is created with QoS minimum bandwidth rule the port will have an extra field (see: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=show-port-details-detail#show-port-details resource_request field) When nova fetch the port for booting a VM read this field and when asking placement for hosts which are available, in the request this information will be included and placement will do the search with this extra resource need. So this is how it should work, but as I know now routed provider nets doesn't have this in place, so nova can't do the scheduling based on ipv4_address needs. The info is there in placement, but during boot the neutron-nova pair doesn't know how to use it. Sorry for the kitchen language :-) Regards Lajos 양유석 ezt írta (időpont: 2020. febr. 11., K, 3:13): > The spec you sent is similar to my problem in that agent sent the > 'host' which does not compatible with nova 'host'. > > But one more things which confuse me is for routed network (ironicless > / + ironic) where the resource providers "IPV4_ADDRESS" used in nova > side? > I could not find any code in nova/placement and from the past > conversation (https://review.opendev.org/#/c/656885/), now I suspect > it does not implemented. > > How then nova choose right compute node in a segment? Am i missing > something or 'resource_provider_hypervisor' you mentioned are now used > for general routed network? > > Thanks > > 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: > > > > Hi, > > > > Actually I know only the "ironicless" usecases of routed provider > networks. > > As I know neutron has the hostname from the agent (even from config or > if not defined from socket.gethostname()). > > For a similar feature (which uses placement guaranteed minimum > bandwidth) there was a change that come from a > > bug with perhaps Ironic usecases: > > https://review.opendev.org/696600 > > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 > > > > The solution was to introduce a new config option called > resource_provider_hypervisors > > ( > https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py > ) > > Without having experience with ironic based on your description your > problem with routed provider nets is > > similar and the config option should be used to make segments plugin use > that for resource provider / host aggregate creation. > > > > Regards > > Lajos > > > > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): > >> > >> Hi stacker, > >> > >> I'm trying to use ironic routed network from stein. > >> > >> I do not sure how the normal workflow works in nova/neutron/ironic > >> side for routed network, so I sent this mail to get more information. > >> From what I understand for ironic routed network, what I did > >> > >> - Enable segment plugin in Neutron > >> - Add ironic 'node' > >> - Add ironic port with physical network > >> - network-baremetal plugin reports neutron from all ironic nodes. > >> - It sent 'physical_network' and 'ironic node uuid' to Neutron > >> - It make 'segmenthostmapping' entry with ('node uuid', > 'segment_id') > >> - Add segment for the subnet. > >> > >> At last step, I encountered the strange. In detail, > >> - Neutron subnet update callback call nova inventory registration > >> - Neutron ask placement to create resource provider for segment > >> - Neutron ask nova to create aggregate for segment > >> - Neutron request placement to associate nova aggregate to resource > provider. > >> - (Bug?) 
Neutron add hosts came from 'segmenthostmapping' to nova > aggregate > >> > >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny > >> to register the host to aggregate emitting the exception like below > >> > >> Returning 404 to user: Compute host > >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ > >> > >> What's strange for me is why neutron ask 'ironic node uuid' for host > >> when nova aggregate only look for host from HostMapping which came > >> from 'host' in compute_nodes. > >> > >> I could not find the code related to how 'ironic node uuid' can be > >> registered in nova aggregate. > >> > >> > >> Please someone who knows what's going on shed light on me. > >> > >> Thanks. > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Tue Feb 11 08:41:44 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 11 Feb 2020 09:41:44 +0100 Subject: [ospurge] looking for project owners / considering adoption In-Reply-To: <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> References: <342983ed-1d22-8f3a-3335-f153512ec2b2@catalyst.net.nz> <576E74EB-ED80-497F-9706-482FE0433208@gmail.com> <2ca832bb-4b71-b775-160a-e1868dcb21d2@citynetwork.eu> <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> Message-ID: <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> Hi, I am thinking to submit (not found possibility so far) a forum-like session for that again in Vancouver, where I would present current status of implementation and we can plan further steps. Unfortunately I have still no confirmation from my employer, that I will be allowed to go. Any ideas/objections? Regards, Artem > On 3. Nov 2019, at 02:34, Tobias Rydberg wrote: > > Hi, > > Sounds really good Artem! Will you be at the session at the Summit? If not, I will bring the information from you to the session... > > Cheers, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > On 2019-11-02 16:26, Artem Goncharov wrote: >> Hi Tobby, >> >> As I mentioned, if Monty does not start work, I will start it in few weeks latest (mid November). I need this now in my project, therefore I will be definitely able to spend time on implementation in both SDK and OSC. >> >> P.S. mailing this to you, since I will not be on the Summit. >> >> Regards, >> Artem >> >>> On 2. Nov 2019, at 09:19, Tobias Rydberg > wrote: >>> >>> Hi, >>> >>> A Forum session is planned for this topic, Monday 11:40. Suites perfect to continue the discussions there as well. >>> >>> https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24407/project-resource-cleanup-followup >>> BR, >>> Tobias >>> >>> Tobias Rydberg >>> Senior Developer >>> Twitter & IRC: tobberydberg >>> >>> www.citynetwork.eu | www.citycloud.com >>> >>> INNOVATION THROUGH OPEN IT INFRASTRUCTURE >>> ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED >>> On 2019-10-30 15:43, Artem Goncharov wrote: >>>> Hi Adam, >>>> >>>> Since I need this now as well I will start working on implementation how it was agreed (in SDK and in OSC) during last summit by mid of November. There is no need for discussing this further, it just need to be implemented. Sad that we got no progress in half a year. >>>> >>>> Regards, >>>> Artem (gtema). >>>> >>>>> On 30. 
Oct 2019, at 14:26, Adam Harwell > wrote: >>>>> >>>>> That's too bad that you won't be at the summit, but I think there may still be some discussion planned about this topic. >>>>> >>>>> Yeah, I understand completely about priorities and such internally. Same for me... It just happens that this IS priority work for us right now. :) >>>>> >>>>> >>>>> On Tue, Oct 29, 2019, 07:48 Adrian Turjak > wrote: >>>>> My apologies I missed this email. >>>>> >>>>> Sadly I won't be at the summit this time around. There may be some public cloud focused discussions, and some of those often have this topic come up. Also if Monty from the SDK team is around, I'd suggest finding him and having a chat. >>>>> >>>>> I'll help if I can but we are swamped with internal work and I can't dedicate much time to do upstream work that isn't urgent. :( >>>>> >>>>> On 17/10/19 8:48 am, Adam Harwell wrote: >>>>>> That's interesting -- we have already started working to add features and improve ospurge, and it seems like a plenty useful tool for our needs, but I think I agree that it would be nice to have that functionality built into the sdk. I might be able to help with both, since one is immediately useful and we (like everyone) have deadlines to meet, and the other makes sense to me as a possible future direction that could be more widely supported. >>>>>> >>>>>> Will you or someone else be hosting and discussion about this at the Shanghai summit? I'll be there and would be happy to join and discuss. >>>>>> >>>>>> --Adam >>>>>> >>>>>> On Tue, Oct 15, 2019, 22:04 Adrian Turjak > wrote: >>>>>> I tried to get a community goal to do project deletion per project, but >>>>>> we ended up deciding that a community goal wasn't ideal unless we did >>>>>> build a bulk delete API in each service: >>>>>> https://review.opendev.org/#/c/639010/ >>>>>> https://etherpad.openstack.org/p/community-goal-project-deletion >>>>>> https://etherpad.openstack.org/p/DEN-Deletion-of-resources >>>>>> https://etherpad.openstack.org/p/DEN-Train-PublicCloudWG-brainstorming >>>>>> >>>>>> What we decided on, but didn't get a chance to work on, was building >>>>>> into the OpenstackSDK OS-purge like functionality, as well as reporting >>>>>> functionality (of all project resources to be deleted). That way we >>>>>> could have per project per resource deletion logic, and all of that >>>>>> defined in the SDK. >>>>>> >>>>>> I was up for doing some of the work, but ended up swamped with internal >>>>>> work and just didn't drive or push for the deletion work upstream. >>>>>> >>>>>> If you want to do something useful, don't pursue OS-Purge, help us add >>>>>> that official functionality to the SDK, and then we can push for bulk >>>>>> deletion APIs in each project to make resource deletion more pleasant. >>>>>> >>>>>> I'd be happy to help with the work, and Monty on the SDK team will most >>>>>> likely be happy to as well. :) >>>>>> >>>>>> Cheers, >>>>>> Adrian >>>>>> >>>>>> On 1/10/19 11:48 am, Adam Harwell wrote: >>>>>> > I haven't seen much activity on this project in a while, and it's been >>>>>> > moved to opendev/x since the opendev migration... Who is the current >>>>>> > owner of this project? Is there anyone who actually is maintaining it, >>>>>> > or would mind if others wanted to adopt the project to move it forward? >>>>>> > >>>>>> > Thanks, >>>>>> > --Adam Harwell >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neil at tigera.io Tue Feb 11 09:47:42 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 09:47:42 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Message-ID: On Mon, Feb 10, 2020 at 9:32 PM Neil Jerram wrote: > On Mon, Feb 10, 2020 at 8:11 PM Jeremy Stanley wrote: > >> Actually, it looks like it's been officially removed from Neutron >> since Ocata, when https://review.openstack.org/399320 merged (so >> nearly 3.5 years already). As such it's just the code hosting which >> needs to be retired, and there doesn't seem to be any governance >> follow-up needed. >> > > Thanks Jeremy, yes, that's correct, networking-calico has been a 'big > tent' project but not 'Neutron stadium' for the last few years. > > I'll take care to leave a good pointer in the README.rst, as you advised. > FYI the project-config change is up at https://review.opendev.org/#/c/707086/. (There isn't any mention of "networking-calico" in the requirements repo, so that step was a no-op.) Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Tue Feb 11 09:56:41 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Tue, 11 Feb 2020 10:56:41 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <97420918.593071.1581052477180@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> Message-ID: <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> Hi, So from this run you need the kuryr-controller logs. Apparently the pod never got annotated with an information about the VIF. Thanks, Michał On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks for your support. > > As you mention i removed readinessProbe and > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > Attached kubelet and kuryr-cni logs. > > > > Regards, > Veera. > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > Hm, nothing too troubling there too, besides Kubernetes not answering > on /healthz endpoint. Are those full logs, including the moment you > tried spawning a container there? It seems like you only pasted the > fragments with tracebacks regarding failures to read /healthz endpoint > of kube-apiserver. That is another problem you should investigate - > that causes Kuryr pods to restart. > > At first I'd disable the healthchecks (remove readinessProbe and > livenessProbe from Kuryr pod definitions) and try to get fresh set of > logs. > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > Hi mdulko, > > Please find kuryr-cni logs > > http://paste.openstack.org/show/789209/ > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > Hi, > > > > The logs you provided doesn't seem to indicate any issues. Please > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > Thanks, > > Michał > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > Hi, > > > I am trying to run kubelet in arm64 platform > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > 2. Generated kuryr-cni-arm64 container. > > > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > My master node in x86 installed successfully using devstack > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > Please help me to fix the issue > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > Veera. > > > > > > From dtantsur at redhat.com Tue Feb 11 10:13:59 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 11 Feb 2020 11:13:59 +0100 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> References: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Message-ID: Late to the party, but still: what's the proposed goal of the ironic integration squad? On Wed, Feb 5, 2020 at 6:00 PM wrote: > Is there a need for Ironic integration one? > > > > *From:* Alan Bishop > *Sent:* Tuesday, February 4, 2020 2:20 PM > *To:* Francesco Pantano > *Cc:* John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks > *Subject:* Re: [tripleo] rework of triple squads and the tripleo mtg. > > > > [EXTERNAL EMAIL] > > > > On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: > > > > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > > > Gulio? Francesco? Alan? > > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > > > > "Ceph integration" makes the most sense to me, but I'm fine with just > naming it "Ceph" as we all know what that means. > > > > Alan > > > > > > > John > > > 6. others?? > > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! 
> > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > > -- > > Francesco Pantano > GPG KEY: F41BD75C > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ileixe at gmail.com Tue Feb 11 12:05:23 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Tue, 11 Feb 2020 21:05:23 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Thanks, from my understanding about QoS feature it looks like using pre-created port which enable nova scheduling. Now, I do not avoid thinking routed network does not implemented yet since nova does not know the necessary information. Comment from a guy in nova irc channel make my think more stronger. It was so confused since neutron doc says it works well though.. 2020년 2월 11일 (화) 오후 5:32, Lajos Katona 님이 작성: > > Hi, > > For routed prov. nets I am not sure how it was originally designed, and what worked at that time, > but I have some insight to min guaranteed bandwidth feature. > > For that neutron (based on agent config) creates placement stuff like resource providers and inventories > (there the available bandwidth is the thing saved to placement RP inventories). > When a port is created with QoS minimum bandwidth rule the port will have an extra field > (see: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=show-port-details-detail#show-port-details resource_request field) > When nova fetch the port for booting a VM read this field and when asking placement for hosts which are > available, in the request this information will be included and placement will do the search with this extra resource need. > So this is how it should work, but as I know now routed provider nets doesn't have this in place, so nova can't do the scheduling > based on ipv4_address needs. > The info is there in placement, but during boot the neutron-nova pair doesn't know how to use it. > > Sorry for the kitchen language :-) > > Regards > Lajos > > 양유석 ezt írta (időpont: 2020. febr. 11., K, 3:13): >> >> The spec you sent is similar to my problem in that agent sent the >> 'host' which does not compatible with nova 'host'. >> >> But one more things which confuse me is for routed network (ironicless >> / + ironic) where the resource providers "IPV4_ADDRESS" used in nova >> side? >> I could not find any code in nova/placement and from the past >> conversation (https://review.opendev.org/#/c/656885/), now I suspect >> it does not implemented. >> >> How then nova choose right compute node in a segment? Am i missing >> something or 'resource_provider_hypervisor' you mentioned are now used >> for general routed network? >> >> Thanks >> >> 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: >> > >> > Hi, >> > >> > Actually I know only the "ironicless" usecases of routed provider networks. >> > As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). 
>> > For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a >> > bug with perhaps Ironic usecases: >> > https://review.opendev.org/696600 >> > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 >> > >> > The solution was to introduce a new config option called resource_provider_hypervisors >> > (https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) >> > Without having experience with ironic based on your description your problem with routed provider nets is >> > similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. >> > >> > Regards >> > Lajos >> > >> > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): >> >> >> >> Hi stacker, >> >> >> >> I'm trying to use ironic routed network from stein. >> >> >> >> I do not sure how the normal workflow works in nova/neutron/ironic >> >> side for routed network, so I sent this mail to get more information. >> >> From what I understand for ironic routed network, what I did >> >> >> >> - Enable segment plugin in Neutron >> >> - Add ironic 'node' >> >> - Add ironic port with physical network >> >> - network-baremetal plugin reports neutron from all ironic nodes. >> >> - It sent 'physical_network' and 'ironic node uuid' to Neutron >> >> - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') >> >> - Add segment for the subnet. >> >> >> >> At last step, I encountered the strange. In detail, >> >> - Neutron subnet update callback call nova inventory registration >> >> - Neutron ask placement to create resource provider for segment >> >> - Neutron ask nova to create aggregate for segment >> >> - Neutron request placement to associate nova aggregate to resource provider. >> >> - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate >> >> >> >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny >> >> to register the host to aggregate emitting the exception like below >> >> >> >> Returning 404 to user: Compute host >> >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ >> >> >> >> What's strange for me is why neutron ask 'ironic node uuid' for host >> >> when nova aggregate only look for host from HostMapping which came >> >> from 'host' in compute_nodes. >> >> >> >> I could not find the code related to how 'ironic node uuid' can be >> >> registered in nova aggregate. >> >> >> >> >> >> Please someone who knows what's going on shed light on me. >> >> >> >> Thanks. >> >> From trang.le at berkeley.edu Tue Feb 11 10:24:36 2020 From: trang.le at berkeley.edu (Trang Le) Date: Tue, 11 Feb 2020 02:24:36 -0800 Subject: [UX] Contributing to OpenStack's User Interface Message-ID: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. 
All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdopiera at redhat.com Tue Feb 11 14:51:10 2020 From: rdopiera at redhat.com (Radek Dopieralski) Date: Tue, 11 Feb 2020 15:51:10 +0100 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Hello Trang Le, I assume that by OpenStack User Interface you mean the Horizon Dashboard. In that case, this documentation should get you started: https://docs.openstack.org/horizon/latest/contributor/index.html Regards, Radomir Dopieralski On Tue, Feb 11, 2020 at 3:04 PM Trang Le wrote: > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Feb 11 16:29:19 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 11 Feb 2020 10:29:19 -0600 Subject: [TC] W release naming Message-ID: Hello TC (and other interested parties), This is a reminder that we have this week set aside for the 'W' release naming for any necessary discussion, campaigning, or other activities before the official polling starts. https://governance.openstack.org/tc/reference/release-naming.html#polls We have a set of names collected from the community. There may be some trademark concerns, but I think we can leave that for the Foundation review after the election polling completes unless anyone has a strong reason to exclude any now. If so, please state so here before I create the poll for next week. https://wiki.openstack.org/wiki/Release_Naming/W_Proposals As a reminder for everyone, this naming poll is the first that follows our new process of having the electorate being members of the Technical Committee. More details can be found in the governance documentation for release naming: https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process I will prepare the poll to send out next Monday. Since we have a limited electorate this time, if we collect all votes ahead of the published deadline I will check in with the TC if there is any need to wait, and if not close the poll early and get the results ready to publish. We can then move ahead with the legal review that is required before we can officially declare a winner. Thanks! 
Sean From Albert.Braden at synopsys.com Tue Feb 11 16:30:45 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 11 Feb 2020 16:30:45 +0000 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Hi Trang, This document has been useful for me as I work on becoming an Openstack contributor. https://docs.openstack.org/infra/manual/developers.html From: Trang Le Sent: Tuesday, February 11, 2020 2:25 AM To: openstack-discuss at lists.openstack.org Subject: [UX] Contributing to OpenStack's User Interface Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 11 16:39:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 11 Feb 2020 17:39:50 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: Sean McGinnis wrote: > [...] > We have a set of names collected from the community. There may be some > trademark concerns, but I think we can leave that for the Foundation > review after the election polling completes unless anyone has a strong > reason to exclude any now. If so, please state so here before I create > the poll for next week. > > https://wiki.openstack.org/wiki/Release_Naming/W_Proposals > [...] That's a lot of good names, and before voting I'd like to get wide feedback from the community... So if there is any name you strongly like or dislike, please follow up here ! The TC is also supposed to discuss potential cultural sensibility and try to avoid those names, so if you see anything that could be considered culturally offensive for some human groups, let me know or reply on this thread. Personally I could see how 'Wodewick' could be perceived as a joke on speech-impaired people, and 'Whiskey'/'Whisky' could be seen as promoting the alcohol-drinking culture in open source events. Also 'Wuhan' is likely to not be neutral -- either seen as a positive supportive move for our friends in China struggling with the virus, or as a bit of a weird choice, but I'm not sure which. In summary: please voice concerns and preferences here, before the vote starts! -- Thierry Carrez (ttx) From whayutin at redhat.com Tue Feb 11 17:07:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 11 Feb 2020 10:07:14 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: Message-ID: On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin wrote: > Greetings, > > Most of the jobs went RED a few minutes ago. > Again it's related to python27. Nothing is going to pass CI until this is > fixed. > > See: https://etherpad.openstack.org/p/ruckroversprint21 > > We'll update the list when we have the required patches in. > Thanks > Patches are up.. 
- https://review.opendev.org/707204 - https://review.opendev.org/707054 - https://review.opendev.org/#/c/707062/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Feb 11 17:16:08 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 11 Feb 2020 18:16:08 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: I agree with your opinion on those 3 names, Thierry. All other are fine. -yoctozepto wt., 11 lut 2020 o 17:50 Thierry Carrez napisał(a): > > Personally I could see how 'Wodewick' could be perceived as a joke on > speech-impaired people, and 'Whiskey'/'Whisky' could be seen as > promoting the alcohol-drinking culture in open source events. Also > 'Wuhan' is likely to not be neutral -- either seen as a positive > supportive move for our friends in China struggling with the virus, or > as a bit of a weird choice, but I'm not sure which. > > In summary: please voice concerns and preferences here, before the vote > starts! > > -- > Thierry Carrez (ttx) > From gmann at ghanshyammann.com Tue Feb 11 17:23:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 11 Feb 2020 11:23:21 -0600 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: Message-ID: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin wrote ---- > > > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin wrote: > > Patches are up..https://review.opendev.org/707204 > https://review.opendev.org/707054 > https://review.opendev.org/#/c/707062/ Do we still have py27 jobs on tripleo CI ? If so I think it is time to drop it completely as the deadline is 13th Feb. Because there might be more incompatible dependencies as py2 drop from them is speedup. NOTE: stable branch testing till rocky use the stable u-c now to avoid these issues. -gmann > > Greetings, > > Most of the jobs went RED a few minutes ago.Again it's related to python27. Nothing is going to pass CI until this is fixed. > See: https://etherpad.openstack.org/p/ruckroversprint21 > We'll update the list when we have the required patches in.Thanks From david.comay at gmail.com Tue Feb 11 18:36:27 2020 From: david.comay at gmail.com (David Comay) Date: Tue, 11 Feb 2020 13:36:27 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance Message-ID: Neil, > networking-calico is the code that integrates Project Calico [1] with > Neutron. It has been an OpenStack project for several years, but we, i.e. > its developers [2], would like now to remove it from OpenStack governance > and instead manage it like the other Project Calico projects under > https://github.com/projectcalico/. My primary concern which isn't really governance would be around making sure the components in `networking-calico` are kept in-sync with the parent classes it inherits from Neutron itself. Is there a plan to keep these in-sync together going forward? -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Tue Feb 11 18:42:10 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 11 Feb 2020 10:42:10 -0800 Subject: [keystone] YVR PTG planning Message-ID: Hi all, It's time to start planning the next PTG. Unlike Shanghai, I'm tentatively assuming that we will be able to hold this one in person. 
I've created an etherpad to start tracking it: https://etherpad.openstack.org/p/yvr-ptg-keystone Please add your name to the etherpad if you have a hunch you'll be attending and participating in keystone-related discussions, and please also add your topic ideas to the list. Colleen From colleen at gazlene.net Tue Feb 11 18:51:05 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 11 Feb 2020 10:51:05 -0800 Subject: [ptl][keystone] cmurphy afk week of February 17 Message-ID: <4f8d428d-249b-4376-9347-a2c2be711965@www.fastmail.com> I will be on vacation the week of February 17 and will not be computering. Kristi Nikolla (knikolla) has graciously agreed to act as stand-in PTL for keystone during that time and will chair next week's meeting. If necessary I will be reachable by email or IRC but with some delay (hours or days). Colleen From Albert.Braden at synopsys.com Tue Feb 11 18:54:34 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 11 Feb 2020 18:54:34 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> References: , <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> Message-ID: Erik, thank you! I did this, and made some other changes based on your other email, and my errors are gone. I’m still working on isolating the specific changes that fixed the problem, but it definitely was one of your ideas. Many thanks! From: Erik Olof Gunnar Andersson Sent: Monday, February 10, 2020 9:54 AM To: Albert Braden Cc: openstack-discuss at lists.openstack.org Subject: Re: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' Database Connections are kept open for one hour. Timeouts on haproxy needs to reflect that. Set timeout to at least 60 minutes. Sent from my iPhone On Feb 10, 2020, at 9:04 AM, Albert Braden > wrote:  Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see “Loaded 2 Fernet keys” every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Feb 11 19:07:57 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 11 Feb 2020 19:07:57 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> On Tue, 2020-02-11 at 13:36 -0500, David Comay wrote: > Neil, > > > networking-calico is the code that integrates Project Calico [1] with > > Neutron. It has been an OpenStack project for several years, but we, i.e. 
> > its developers [2], would like now to remove it from OpenStack governance > > and instead manage it like the other Project Calico projects under > > https://github.com/projectcalico/. > > My primary concern which isn't really governance would be around making > sure the components in `networking-calico` are kept in-sync with the parent > classes it inherits from Neutron itself. Is there a plan to keep these > in-sync together going forward? networking-calico should not be inheriting form neutron. netuon-lib is fine but the networking-* project should not import form neturon directly. From neil at tigera.io Tue Feb 11 19:31:28 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:31:28 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: Hi David, On Tue, Feb 11, 2020 at 6:43 PM David Comay wrote: > Neil, > > > networking-calico is the code that integrates Project Calico [1] with > > Neutron. It has been an OpenStack project for several years, but we, > i.e. > > its developers [2], would like now to remove it from OpenStack governance > > and instead manage it like the other Project Calico projects under > > https://github.com/projectcalico/. > > My primary concern which isn't really governance would be around making > sure the components in `networking-calico` are kept in-sync with the parent > classes it inherits from Neutron itself. Is there a plan to keep these > in-sync together going forward? > Thanks for this question. I think the answer is that it will be a planned effort, from now on, for us to support new OpenStack versions. From Kilo through to Rocky we have aimed (and managed, so far as I know) to maintain a unified networking-calico codebase that works with all of those versions. However our code does not support Python 3, and OpenStack master now requires Python 3, so we have to invest work in order to have even the possibility of working with Train and later. More generally, it has been frustrating, over the last 2 years or so, to track OpenStack master as the CI requires, because breaking changes (in other OpenStack code) are made frequently and we get hit by them when trying to fix or enhance something (typically unrelated) in networking-calico. With that in mind, my plan from now on is: - Continue to stay in touch with our users and customers, so we know what OpenStack versions they want us to support. - As we fix and enhance Calico-specific things, continue CI against the versions that we say we test with. (Currently that means Queens and Rocky - https://docs.projectcalico.org/getting-started/openstack/requirements) - As and when needed, work to support new versions. (Where the first package of work here will be Python 3 support.) WDYT? Does that sounds sensible? Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 11 19:36:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 11 Feb 2020 19:36:53 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> On 2020-02-11 19:31:28 +0000 (+0000), Neil Jerram wrote: > Thanks for this question. I think the answer is that it will be a planned > effort, from now on, for us to support new OpenStack versions. 
From Kilo > through to Rocky we have aimed (and managed, so far as I know) to maintain > a unified networking-calico codebase that works with all of those > versions. However our code does not support Python 3, and OpenStack master > now requires Python 3, so we have to invest work in order to have even the > possibility of working with Train and later. [...] It's probably known to all involved in the conversation here, but just for clarity, the two releases immediately following Rocky (that is, Stein and Train) are still supposed to support Python 2.7. Only as of the Ussuri release (due out in a few more months) will OpenStack be Python3-only. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From neil at tigera.io Tue Feb 11 19:37:53 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:37:53 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> References: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> Message-ID: On Tue, Feb 11, 2020 at 7:08 PM Sean Mooney wrote: > On Tue, 2020-02-11 at 13:36 -0500, David Comay wrote: > > Neil, > > > > > networking-calico is the code that integrates Project Calico [1] with > > > Neutron. It has been an OpenStack project for several years, but we, > i.e. > > > its developers [2], would like now to remove it from OpenStack > governance > > > and instead manage it like the other Project Calico projects under > > > https://github.com/projectcalico/. > > > > My primary concern which isn't really governance would be around making > > sure the components in `networking-calico` are kept in-sync with the > parent > > classes it inherits from Neutron itself. Is there a plan to keep these > > in-sync together going forward? > networking-calico should not be inheriting form neutron. > netuon-lib is fine but the networking-* project should not import form > neturon directly. Right, mostly. I think we still inherit from some DHCP agent code that hasn't been lib-ified yet, but otherwise I think it's neutron-lib as you say. (It's difficult to be sure because our code is also written to work with older versions when there was more neutron and less neutron-lib, but that's an orthogonal point.) Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Feb 11 19:43:38 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:43:38 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> References: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> Message-ID: On Tue, Feb 11, 2020 at 7:37 PM Jeremy Stanley wrote: > On 2020-02-11 19:31:28 +0000 (+0000), Neil Jerram wrote: > > Thanks for this question. I think the answer is that it will be a > planned > > effort, from now on, for us to support new OpenStack versions. From Kilo > > through to Rocky we have aimed (and managed, so far as I know) to > maintain > > a unified networking-calico codebase that works with all of those > > versions. However our code does not support Python 3, and OpenStack > master > > now requires Python 3, so we have to invest work in order to have even > the > > possibility of working with Train and later. > [...] 
> > It's probably known to all involved in the conversation here, but > just for clarity, the two releases immediately following Rocky (that > is, Stein and Train) are still supposed to support Python 2.7. Only > as of the Ussuri release (due out in a few more months) will > OpenStack be Python3-only. > Sorry; thanks for this correction. (So I should have said "... even the possibility of working with Ussuri and later.") Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Tue Feb 11 21:17:45 2020 From: flux.adam at gmail.com (Adam Harwell) Date: Tue, 11 Feb 2020 13:17:45 -0800 Subject: [ospurge] looking for project owners / considering adoption In-Reply-To: <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> References: <342983ed-1d22-8f3a-3335-f153512ec2b2@catalyst.net.nz> <576E74EB-ED80-497F-9706-482FE0433208@gmail.com> <2ca832bb-4b71-b775-160a-e1868dcb21d2@citynetwork.eu> <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> Message-ID: Sounds like we can keep the tradition going! But seriously, I'm disappointed that I haven't been able to help look at this at all, since I've had a major priority shift since the summit when I volunteered to help. Such is the way of sponsored Openstack development, I guess. I should be able to attend another meeting in Vancouver if you do schedule one. --Adam On Tue, Feb 11, 2020, 00:41 Artem Goncharov wrote: > Hi, > > I am thinking to submit (not found possibility so far) a forum-like > session for that again in Vancouver, where I would present current status > of implementation and we can plan further steps. > Unfortunately I have still no confirmation from my employer, that I will > be allowed to go. > > Any ideas/objections? > > Regards, > Artem > > > On 3. Nov 2019, at 02:34, Tobias Rydberg > wrote: > > Hi, > > Sounds really good Artem! Will you be at the session at the Summit? If > not, I will bring the information from you to the session... > > Cheers, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > On 2019-11-02 16:26, Artem Goncharov wrote: > > Hi Tobby, > > As I mentioned, if Monty does not start work, I will start it in few weeks > latest (mid November). I need this now in my project, therefore I will be > definitely able to spend time on implementation in both SDK and OSC. > > P.S. mailing this to you, since I will not be on the Summit. > > Regards, > Artem > > On 2. Nov 2019, at 09:19, Tobias Rydberg > wrote: > > Hi, > > A Forum session is planned for this topic, Monday 11:40. Suites perfect to > continue the discussions there as well. > > > https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24407/project-resource-cleanup-followup > > BR, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > On 2019-10-30 15:43, Artem Goncharov wrote: > > Hi Adam, > > Since I need this now as well I will start working on implementation how > it was agreed (in SDK and in OSC) during last summit by mid of November. > There is no need for discussing this further, it just need to be > implemented. Sad that we got no progress in half a year. > > Regards, > Artem (gtema). > > On 30. 
Oct 2019, at 14:26, Adam Harwell wrote: > > That's too bad that you won't be at the summit, but I think there may > still be some discussion planned about this topic. > > Yeah, I understand completely about priorities and such internally. Same > for me... It just happens that this IS priority work for us right now. :) > > > On Tue, Oct 29, 2019, 07:48 Adrian Turjak wrote: > >> My apologies I missed this email. >> >> Sadly I won't be at the summit this time around. There may be some public >> cloud focused discussions, and some of those often have this topic come up. >> Also if Monty from the SDK team is around, I'd suggest finding him and >> having a chat. >> >> I'll help if I can but we are swamped with internal work and I can't >> dedicate much time to do upstream work that isn't urgent. :( >> On 17/10/19 8:48 am, Adam Harwell wrote: >> >> That's interesting -- we have already started working to add features and >> improve ospurge, and it seems like a plenty useful tool for our needs, but >> I think I agree that it would be nice to have that functionality built into >> the sdk. I might be able to help with both, since one is immediately useful >> and we (like everyone) have deadlines to meet, and the other makes sense to >> me as a possible future direction that could be more widely supported. >> >> Will you or someone else be hosting and discussion about this at the >> Shanghai summit? I'll be there and would be happy to join and discuss. >> >> --Adam >> >> On Tue, Oct 15, 2019, 22:04 Adrian Turjak >> wrote: >> >>> I tried to get a community goal to do project deletion per project, but >>> we ended up deciding that a community goal wasn't ideal unless we did >>> build a bulk delete API in each service: >>> https://review.opendev.org/#/c/639010/ >>> https://etherpad.openstack.org/p/community-goal-project-deletion >>> https://etherpad.openstack.org/p/DEN-Deletion-of-resources >>> https://etherpad.openstack.org/p/DEN-Train-PublicCloudWG-brainstorming >>> >>> What we decided on, but didn't get a chance to work on, was building >>> into the OpenstackSDK OS-purge like functionality, as well as reporting >>> functionality (of all project resources to be deleted). That way we >>> could have per project per resource deletion logic, and all of that >>> defined in the SDK. >>> >>> I was up for doing some of the work, but ended up swamped with internal >>> work and just didn't drive or push for the deletion work upstream. >>> >>> If you want to do something useful, don't pursue OS-Purge, help us add >>> that official functionality to the SDK, and then we can push for bulk >>> deletion APIs in each project to make resource deletion more pleasant. >>> >>> I'd be happy to help with the work, and Monty on the SDK team will most >>> likely be happy to as well. :) >>> >>> Cheers, >>> Adrian >>> >>> On 1/10/19 11:48 am, Adam Harwell wrote: >>> > I haven't seen much activity on this project in a while, and it's been >>> > moved to opendev/x since the opendev migration... Who is the current >>> > owner of this project? Is there anyone who actually is maintaining it, >>> > or would mind if others wanted to adopt the project to move it forward? >>> > >>> > Thanks, >>> > --Adam Harwell >>> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satish.txt at gmail.com Tue Feb 11 23:07:22 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 11 Feb 2020 18:07:22 -0500 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> Thank you so much for taking lead on it, openstack is great piece of software but missing some really good UI interface. Sent from my iPhone > On Feb 11, 2020, at 9:00 AM, Trang Le wrote: > > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > -------------- next part -------------- An HTML attachment was scrubbed... URL: From licanwei_cn at 163.com Wed Feb 12 02:31:56 2020 From: licanwei_cn at 163.com (licanwei) Date: Wed, 12 Feb 2020 10:31:56 +0800 (GMT+08:00) Subject: [Watcher]IRC meeting at 8:00 UTC today Message-ID: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> licanwei_cn 邮箱:licanwei_cn at 163.com 签名由 网易邮箱大师 定制 | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 29936982-8cf0-4d64-9d3e-4df171e58e00.jpg Type: image/jpeg Size: 4126 bytes Desc: not available URL: From veeraready at yahoo.co.in Wed Feb 12 09:38:18 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Wed, 12 Feb 2020 09:38:18 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> Message-ID: <2135735478.1625599.1581500298473@mail.yahoo.com> Hi Mdulko,Below are log files: Controller Log:http://paste.openstack.org/show/789457/cni : http://paste.openstack.org/show/789456/kubelet : http://paste.openstack.org/show/789453/ Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) Please let me know the issue Regards, Veera. On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: Hi, So from this run you need the kuryr-controller logs. Apparently the pod never got annotated with an information about the VIF. Thanks, Michał On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks for your support. > > As you mention i removed readinessProbe and > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > Attached kubelet and kuryr-cni logs. > > > > Regards, > Veera. 
> > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > Hm, nothing too troubling there too, besides Kubernetes not answering > on /healthz endpoint. Are those full logs, including the moment you > tried spawning a container there? It seems like you only pasted the > fragments with tracebacks regarding failures to read /healthz endpoint > of kube-apiserver. That is another problem you should investigate - > that causes Kuryr pods to restart. > > At first I'd disable the healthchecks (remove readinessProbe and > livenessProbe from Kuryr pod definitions) and try to get fresh set of > logs. > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > Hi mdulko, > > Please find kuryr-cni logs > > http://paste.openstack.org/show/789209/ > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > Hi, > > > > The logs you provided doesn't seem to indicate any issues. Please > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > Thanks, > > Michał > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > Hi, > > > I am trying to run kubelet in arm64 platform > > > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > 2.    Generated kuryr-cni-arm64 container. > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > My master node in x86 installed successfully using devstack > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > Please help me to fix the issue > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > Veera. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at dantalion.nl Wed Feb 12 09:58:50 2020 From: info at dantalion.nl (info at dantalion.nl) Date: Wed, 12 Feb 2020 10:58:50 +0100 Subject: [Watcher]IRC meeting at 8:00 UTC today In-Reply-To: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> References: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> Message-ID: <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> Hello Lican, Sorry but I am unable to attend at such short notice as in my timezone I won't be awake unless it is for the Watcher meeting. To be able to adjust my transit schedule I will have to know a day in advance. Hope to be there next time. Kind regards, Corne Lukken On 2/12/20 3:31 AM, licanwei wrote: > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > From pradeepantil at gmail.com Wed Feb 12 10:36:33 2020 From: pradeepantil at gmail.com (Pradeep Antil) Date: Wed, 12 Feb 2020 16:06:33 +0530 Subject: How to Migrate OpenStack Instance from one Tenant to another Message-ID: Hi Folks, I am using Mitaka OpenStack and VLAN as network type driver for tenant VMs. Initially i have provisioned all the VMs (Including Customer VMs) inside the admin. I want to segregate my internal VMs and Customer VMs, for this to accomplish i have created different tenant names and now i want customer VMs from admin tenant to Customer tenant, On Google , i have a found a script which migrates the VMs but that script doesn't update network and security group details, Can anyone help me and suggest what steps needs to executed to update network , security group and attached Volumes ..? 
Below is the exact script: #!/bin/bash for i do if [ "$i" != "$1" ]; then echo "moving instance id " $i " to project id" $1; mysql -uroot -s -N < From stig.openstack at telfer.org Wed Feb 12 10:43:24 2020 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 12 Feb 2020 10:43:24 +0000 Subject: [scientific-sig] No Scientific SIG meeting today Message-ID: <64733F3E-5484-4E36-955E-20DB62DF484D@telfer.org> Hi All - Unfortunately there will not be a Scientific SIG IRC meeting today, owing to other commitments and availability. Apologies, Stig From rdopiera at redhat.com Wed Feb 12 11:05:40 2020 From: rdopiera at redhat.com (Radek Dopieralski) Date: Wed, 12 Feb 2020 12:05:40 +0100 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> Message-ID: I would like to note here that everybody is equally welcome to contribute, not just Trang Le. If you have ideas for improving the software, Satish, we would gladly welcome your patches. On Wed, Feb 12, 2020 at 12:14 AM Satish Patel wrote: > Thank you so much for taking lead on it, openstack is great piece of > software but missing some really good UI interface. > > Sent from my iPhone > > On Feb 11, 2020, at 9:00 AM, Trang Le wrote: > > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Feb 12 11:57:41 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 12 Feb 2020 12:57:41 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: > Personally I could see how 'Wodewick' could be perceived as a joke on > speech-impaired people I agree that at first sight this looks very bad. Which should be taken into consideration. However, an extra fact/trivia for Wodewick (for those who didn't know, like I was, as it's not mentioned on the wiki): Terry Jones (director of the movie) had himself rhotacism. So IMO, the presence of the (bad?) joke in the movie itself is a proof of Jones' openness. 
Regards, Jean-Philippe Evrard (evrardjp) From mdulko at redhat.com Wed Feb 12 12:01:17 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Wed, 12 Feb 2020 13:01:17 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <2135735478.1625599.1581500298473@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> Message-ID: The controller logs are from an hour later than the CNI one. The issues seems not to be present. Isn't your controller still restarting? If so try to use -p option on `kubectl logs` to get logs from previous run. On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > Hi Mdulko, > Below are log files: > > Controller Log:http://paste.openstack.org/show/789457/ > cni : http://paste.openstack.org/show/789456/ > kubelet : http://paste.openstack.org/show/789453/ > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > Please let me know the issue > > > > Regards, > Veera. > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > Hi, > > So from this run you need the kuryr-controller logs. Apparently the pod > never got annotated with an information about the VIF. > > Thanks, > Michał > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > Hi mdulko, > > Thanks for your support. > > > > As you mention i removed readinessProbe and > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > on /healthz endpoint. Are those full logs, including the moment you > > tried spawning a container there? It seems like you only pasted the > > fragments with tracebacks regarding failures to read /healthz endpoint > > of kube-apiserver. That is another problem you should investigate - > > that causes Kuryr pods to restart. > > > > At first I'd disable the healthchecks (remove readinessProbe and > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > logs. > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Please find kuryr-cni logs > > > http://paste.openstack.org/show/789209/ > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > Hi, > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > Thanks, > > > Michał > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > Hi, > > > > I am trying to run kubelet in arm64 platform > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > 2. Generated kuryr-cni-arm64 container. > > > > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > Please help me to fix the issue > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > From thierry at openstack.org Wed Feb 12 14:52:30 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 12 Feb 2020 15:52:30 +0100 Subject: [largescale-sig] Meeting summary and next actions Message-ID: Hi everyone, The Large Scale SIG held a meeting today. We had a small attendance, and blame holidays for that. But you can catch up with the summary and logs of the meeting at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-02-12-09.00.html We started with a presentation by oneswig of the integration of HAProxy telemetry via Monasca on a Kolla-Ansible-based deployment, and how that can help us derive latency information based on the API server called, the HTTP method and the HTTP code result across all HAProxy-fronted servers in a deployment. Slides are available at: http://www.stackhpc.com/resources/HAproxy-telemetry.pdf Regarding progress on our "Documenting large scale operations" goal, some doc links were proposed and all seem relevant to our collection. On the "Scaling within one cluster, and instrumentation of the bottlenecks" goal, the draft of the oslo.metrics spec got a first round of comments. masahito will refresh it based on those initial comments, but it would be good for SIG members to also weigh in at that early stage. In other news, we discussed the SIG's involvement in the "Large-scale Usage of Open Source Infrastructure Software" track at the OpenDev event in Vancouver in June. We plan to have SIG members directly involved and have discussions around our two goals for 2020. TODOs between now and next meeting: - masahito to update spec based on initial feedback - everyone to review and comment on https://review.opendev.org/#/c/704733/ The next meeting will happen on February 26, at 9:00 UTC on #openstack-meeting. Cheers, -- Thierry Carrez (ttx) From rosmaita.fossdev at gmail.com Wed Feb 12 15:38:44 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 12 Feb 2020 10:38:44 -0500 Subject: [cinder][ptg] Victoria (Vancouver 2020) PTG headcount Message-ID: I need to get a rough headcount to submit to the PTG planning committee. Please take a minute to put your name on this etherpad if you think/hope/plan to attend (no commitment--we just want to increase the likelihood that we'll have enough seats for the Cinder team): https://etherpad.openstack.org/p/cinder-victoria-ptg-planning While you're on that etherpad, if you have a topic idea already, feel free to add it. I need the headcount info by 19:00 UTC on 28 February. (But do it now before you forget.) 
thanks, brian From david.comay at gmail.com Wed Feb 12 16:52:40 2020 From: david.comay at gmail.com (David Comay) Date: Wed, 12 Feb 2020 11:52:40 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: > >> My primary concern which isn't really governance would be around making >> sure the components in `networking-calico` are kept in-sync with the parent >> classes it inherits from Neutron itself. Is there a plan to keep these >> in-sync together going forward? >> > > Thanks for this question. I think the answer is that it will be a planned > effort, from now on, for us to support new OpenStack versions. From Kilo > through to Rocky we have aimed (and managed, so far as I know) to maintain > a unified networking-calico codebase that works with all of those > versions. However our code does not support Python 3, and OpenStack master > now requires Python 3, so we have to invest work in order to have even the > possibility of working with Train and later. More generally, it has been > frustrating, over the last 2 years or so, to track OpenStack master as the > CI requires, because breaking changes (in other OpenStack code) are made > frequently and we get hit by them when trying to fix or enhance something > (typically unrelated) in networking-calico. > I don't know the history here around `calico-dhcp-agent` but has there been previous efforts to propose integrating the changes made to it into `neutron-dhcp-agent`? It seems the best solution would be to make the functionality provided by the former into the latter rather than relying on parent classes from the former. I suspect there are details here on why that might be difficult but it seems solving that would be helpful in the long-term. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Feb 12 17:07:26 2020 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 12 Feb 2020 12:07:26 -0500 Subject: [TC] W release naming In-Reply-To: References: Message-ID: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> On 12/02/20 6:57 am, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >> Personally I could see how 'Wodewick' could be perceived as a joke on >> speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. Also another fun fact: if you're explaining, you're losing ;) > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. TIL, thanks! Another reason to love the movie, while also not choosing this as the release name. From gmann at ghanshyammann.com Wed Feb 12 17:24:39 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 Feb 2020 11:24:39 -0600 Subject: [qa][ptg] QA PTG, Vancouver Planning Message-ID: <1703a6e6b61.108eda5ad575788.8856924692503612142@ghanshyammann.com> Hello Everyone, I have started the Vancouver PTG planning for QA[1]. The very first step is to reserve the space based on number of attendees. For that, please write your name in the etherpad. I know it might not be confirmed yet or the travel process has not started, still, you can write your probability of attending. 
It is not necessary to be 100% in QA but if you are planning to spend some time in QA PTG, still write your name with comment. [1] https://etherpad.openstack.org/p/qa-victoria-ptg -gmann From Albert.Braden at synopsys.com Wed Feb 12 17:52:03 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 12 Feb 2020 17:52:03 +0000 Subject: [TC] W release naming In-Reply-To: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> References: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> Message-ID: A few years ago I would have thought that this was nonsense. I still do, but nonsense is the order of the day. We have to be careful to not give the professionally offended an opportunity. -----Original Message----- From: Zane Bitter Sent: Wednesday, February 12, 2020 9:07 AM To: openstack-discuss at lists.openstack.org Subject: Re: [TC] W release naming On 12/02/20 6:57 am, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >> Personally I could see how 'Wodewick' could be perceived as a joke on >> speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. Also another fun fact: if you're explaining, you're losing ;) > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. TIL, thanks! Another reason to love the movie, while also not choosing this as the release name. From nate.johnston at redhat.com Wed Feb 12 17:56:43 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 12 Feb 2020 12:56:43 -0500 Subject: [TC] W release naming In-Reply-To: References: Message-ID: <20200212175643.fjdsn2lr2tq3lbcx@firewall> On Wed, Feb 12, 2020 at 12:57:41PM +0100, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: > > Personally I could see how 'Wodewick' could be perceived as a joke on > > speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. > > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. One thing that the Internet - social media particularly but not exclusively - does exceptionally well is to present ideas, concepts, or occurrences with all of the surrounding context stripped away. I think that the only way to be successful is to select names that stand solid without any context and judged from multiple cultural perspectives. I personally object to Wuhan for this reason. Even if we were to say that we choose the name to honor the victims of this tragedy, that context would be stripped away in the transmission and there would be plenty of people who would come to the conclusion that we were making light of what is happening, or worse that companies that build products based on OpenStack would be making profits from that name. These are terrible thoughts, for sure, but regrettably we have to look at how people could percieve it, not how we mean it to be. 
Nate From neil at tigera.io Wed Feb 12 18:03:37 2020 From: neil at tigera.io (Neil Jerram) Date: Wed, 12 Feb 2020 18:03:37 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: On Wed, Feb 12, 2020 at 4:52 PM David Comay wrote: > > >>> My primary concern which isn't really governance would be around making >>> sure the components in `networking-calico` are kept in-sync with the parent >>> classes it inherits from Neutron itself. Is there a plan to keep these >>> in-sync together going forward? >>> >> >> Thanks for this question. I think the answer is that it will be a >> planned effort, from now on, for us to support new OpenStack versions. >> From Kilo through to Rocky we have aimed (and managed, so far as I know) to >> maintain a unified networking-calico codebase that works with all of those >> versions. However our code does not support Python 3, and OpenStack master >> now requires Python 3, so we have to invest work in order to have even the >> possibility of working with Train and later. More generally, it has been >> frustrating, over the last 2 years or so, to track OpenStack master as the >> CI requires, because breaking changes (in other OpenStack code) are made >> frequently and we get hit by them when trying to fix or enhance something >> (typically unrelated) in networking-calico. >> > > I don't know the history here around `calico-dhcp-agent` but has there > been previous efforts to propose integrating the changes made to it into > `neutron-dhcp-agent`? It seems the best solution would be to make the > functionality provided by the former into the latter rather than relying on > parent classes from the former. I suspect there are details here on why > that might be difficult but it seems solving that would be helpful in the > long-term. > No efforts that I know of. The difference is that calico-dhcp-agent is driven by information in the Calico etcd datastore, where neutron-dhcp-agent is driven via a message queue from the Neutron server. I think it has improved since, but when we originated calico-dhcp-agent a few years ago, the message queue wasn't scaling very well to hundreds of nodes. We can certainly keep reintegrating in mind as a possibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 12 18:22:56 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 12 Feb 2020 10:22:56 -0800 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Also the contributor guide: https://docs.openstack.org/contributors/ And if you have any questions about getting started let me or anyone else in the First Contact SIG[1] know! -Kendall [1] https://wiki.openstack.org/wiki/First_Contact_SIG On Tue, Feb 11, 2020 at 8:31 AM Albert Braden wrote: > Hi Trang, > > > > This document has been useful for me as I work on becoming an Openstack > contributor. > > > > https://docs.openstack.org/infra/manual/developers.html > > > > *From:* Trang Le > *Sent:* Tuesday, February 11, 2020 2:25 AM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [UX] Contributing to OpenStack's User Interface > > > > Dear OpenStack Discussion Team, > > > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. 
I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > > > All the best, > > Trang > > > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Feb 12 18:34:31 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 12 Feb 2020 18:34:31 +0000 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Also, when I get stuck somewhere in the signup process, I’ve found help on IRC, on the Freenode network, in #openstack-mentoring From: Kendall Nelson Sent: Wednesday, February 12, 2020 10:23 AM To: Albert Braden Cc: Trang Le ; openstack-discuss at lists.openstack.org Subject: Re: [UX] Contributing to OpenStack's User Interface Also the contributor guide: https://docs.openstack.org/contributors/ And if you have any questions about getting started let me or anyone else in the First Contact SIG[1] know! -Kendall [1] https://wiki.openstack.org/wiki/First_Contact_SIG On Tue, Feb 11, 2020 at 8:31 AM Albert Braden > wrote: Hi Trang, This document has been useful for me as I work on becoming an Openstack contributor. https://docs.openstack.org/infra/manual/developers.html From: Trang Le > Sent: Tuesday, February 11, 2020 2:25 AM To: openstack-discuss at lists.openstack.org Subject: [UX] Contributing to OpenStack's User Interface Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Feb 12 18:50:46 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 12 Feb 2020 12:50:46 -0600 Subject: [TC] W release naming In-Reply-To: <20200212175643.fjdsn2lr2tq3lbcx@firewall> References: <20200212175643.fjdsn2lr2tq3lbcx@firewall> Message-ID: On 2/12/2020 11:56 AM, Nate Johnston wrote: > On Wed, Feb 12, 2020 at 12:57:41PM +0100, Jean-Philippe Evrard wrote: >> On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >>> Personally I could see how 'Wodewick' could be perceived as a joke on >>> speech-impaired people >> I agree that at first sight this looks very bad. Which should be taken >> into consideration. 
However, an extra fact/trivia for Wodewick (for >> those who didn't know, like I was, as it's not mentioned on the wiki): >> Terry Jones (director of the movie) had himself rhotacism. >> >> So IMO, the presence of the (bad?) joke in the movie itself is a proof >> of Jones' openness. > One thing that the Internet - social media particularly but not exclusively - > does exceptionally well is to present ideas, concepts, or occurrences with all > of the surrounding context stripped away. I think that the only way to be > successful is to select names that stand solid without any context and judged > from multiple cultural perspectives. > > I personally object to Wuhan for this reason. Even if we were to say that we > choose the name to honor the victims of this tragedy, that context would be > stripped away in the transmission and there would be plenty of people who would > come to the conclusion that we were making light of what is happening, or worse > that companies that build products based on OpenStack would be making profits > from that name. These are terrible thoughts, for sure, but regrettably we have > to look at how people could percieve it, not how we mean it to be. > > Nate > Nate, I am in agreement that the name needs to hold up without context and under scrutiny from multiple perspectives.  So, Wuhan is not a good choice.  It also is dangerous to use a name associated with a still evolving situation. All, I think we need to choose a name that doesn't require explanation.  I think that Wodewick fails that test. Jay From whayutin at redhat.com Wed Feb 12 19:39:54 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 12 Feb 2020 12:39:54 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann wrote: > > ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < > whayutin at redhat.com> wrote ---- > > > > > > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin > wrote: > > > > Patches are up..https://review.opendev.org/707204 > > https://review.opendev.org/707054 > > https://review.opendev.org/#/c/707062/ > > Do we still have py27 jobs on tripleo CI ? If so I think it is time to > drop it > completely as the deadline is 13th Feb. Because there might be more > incompatible > dependencies as py2 drop from them is speedup. > > NOTE: stable branch testing till rocky use the stable u-c now to avoid > these issues. > > -gmann > > > > > > Greetings, > > > > Most of the jobs went RED a few minutes ago.Again it's related to > python27. Nothing is going to pass CI until this is fixed. > > See: https://etherpad.openstack.org/p/ruckroversprint21 > > We'll update the list when we have the required patches in.Thanks > > OK.. folks, Thanks for your patience! The final patch to restore sanity [1] is in the gate. Most everything was working since yesterday unless you were patching tripleo-common. This is the final item we've identified. Be safe out there! CI == Green [1] https://review.opendev.org/#/c/707330/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Wed Feb 12 22:10:56 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 12 Feb 2020 17:10:56 -0500 Subject: [operators][cinder] RSD and Swordfish drivers being abandoned Message-ID: <0eedacda-ae16-b64c-57d7-32f169955c45@gmail.com> Greetings operators, The current maintainers of the Intel Rack Scale Design (RSD) driver announced at this week's Cinder meeting that the driver will be marked as UNSUPPORTED in Ussuri and that the driver will no longer be maintained. The Swordfish driver, which was under active development, but not yet merged to Cinder, is also being abandoned as well as the os-brick modifications to support NVMeoF. Finally, the third-party CI system for the RSD driver is being dismantled. We want you to be aware of these changes, but more optimistically, we are hoping that there would be some interest in the community in picking these up. If you are or know of such a person, please contact us right away so that we can get a technology transfer going before the current development group has been dispersed to other projects. For more information, please contact: Ivens Zambrano (ivens.zambrano at intel.com / IRC:IvensZambrano) David Shaughnessy (david.shaughnessy at intel.com / IRC: davidsha) thanks, brian From whayutin at redhat.com Thu Feb 13 03:51:15 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 12 Feb 2020 20:51:15 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Wed, Feb 12, 2020 at 12:39 PM Wesley Hayutin wrote: > > > On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann > wrote: > >> >> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < >> whayutin at redhat.com> wrote ---- >> > >> > >> > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin >> wrote: >> > >> > Patches are up..https://review.opendev.org/707204 >> > https://review.opendev.org/707054 >> > https://review.opendev.org/#/c/707062/ >> >> Do we still have py27 jobs on tripleo CI ? If so I think it is time to >> drop it >> completely as the deadline is 13th Feb. Because there might be more >> incompatible >> dependencies as py2 drop from them is speedup. >> >> NOTE: stable branch testing till rocky use the stable u-c now to avoid >> these issues. >> >> -gmann >> >> >> > >> > Greetings, >> > >> > Most of the jobs went RED a few minutes ago.Again it's related to >> python27. Nothing is going to pass CI until this is fixed. >> > See: https://etherpad.openstack.org/p/ruckroversprint21 >> > We'll update the list when we have the required patches in.Thanks >> >> > OK.. folks, > Thanks for your patience! > The final patch to restore sanity [1] is in the gate. Most everything was > working since yesterday unless you were patching tripleo-common. This is > the final item we've identified. > > Be safe out there! > CI == Green > > Well.. a new upstream centos-7 image broke the last patch before it merged. So.. now there are two patches that need to merge. https://review.opendev.org/#/c/707525 https://review.opendev.org/#/c/707330/ Thank you lord for bounty of work you have provided. We are blessed. > > [1] https://review.opendev.org/#/c/707330/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From veeraready at yahoo.co.in Thu Feb 13 06:52:53 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 13 Feb 2020 06:52:53 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> Message-ID: <1317590151.1993706.1581576773856@mail.yahoo.com> Thanks mdulko,Issue is in openvswitch, iam getting following error in switchd logs./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message Do we need to patch openvswitch to support above flow? My ovs version [root at node-2088 ~]# ovs-vsctl --versionovs-vsctl (Open vSwitch) 2.11.0DB Schema 7.16.1 Regards, Veera. On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: The controller logs are from an hour later than the CNI one. The issues seems not to be present. Isn't your controller still restarting? If so try to use -p option on `kubectl logs` to get logs from previous run. On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > Hi Mdulko, > Below are log files: > > Controller Log:http://paste.openstack.org/show/789457/ > cni : http://paste.openstack.org/show/789456/ > kubelet : http://paste.openstack.org/show/789453/ > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > Please let me know the issue > > > > Regards, > Veera. > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > Hi, > > So from this run you need the kuryr-controller logs. Apparently the pod > never got annotated with an information about the VIF. > > Thanks, > Michał > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > Hi mdulko, > > Thanks for your support. > > > > As you mention i removed readinessProbe and > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > on /healthz endpoint. Are those full logs, including the moment you > > tried spawning a container there? It seems like you only pasted the > > fragments with tracebacks regarding failures to read /healthz endpoint > > of kube-apiserver. That is another problem you should investigate - > > that causes Kuryr pods to restart. > > > > At first I'd disable the healthchecks (remove readinessProbe and > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > logs. > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Please find kuryr-cni logs > > > http://paste.openstack.org/show/789209/ > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > Hi, > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > Thanks, > > > Michał > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > Hi, > > > > I am trying to run kubelet in arm64 platform > > > > 1. 
   Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > 2.    Generated kuryr-cni-arm64 container. > > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > Please help me to fix the issue > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Thu Feb 13 07:12:42 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 13 Feb 2020 07:12:42 +0000 (UTC) Subject: [openstack-dev][kuryr] Kubelet error in ARM64 References: <1417617907.1999997.1581577962230.ref@mail.yahoo.com> Message-ID: <1417617907.1999997.1581577962230@mail.yahoo.com> Hi I am getting following error in kubelet in arm64 platform Feb 13 07:16:32 node-2088 kubelet[16776]: E0213 07:16:32.493458   16776 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache/index0/size: no such file or directoryFeb 13 07:16:32 node-2088 kubelet[16776]: I0213 07:16:32.493600   16776 gce.go:44] Error while reading product_name: open /sys/class/dmi/id/product_name: no such file or directory Help me in fixing above issue. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guoyongxhzhf at 163.com Thu Feb 13 08:20:28 2020 From: guoyongxhzhf at 163.com (guoyongxhzhf at 163.com) Date: Thu, 13 Feb 2020 16:20:28 +0800 Subject: [keystone][middleware] why can not middle-ware support redis? Message-ID: <289D1C6884E043999D843F93BED4332E@guoyongPC> In my environment, there is a redis. I want the keystone client in nova to cache token and use redis as a cache server. But after reading keystone middle-ware code, now keystone middle-ware just support swift and memcached server. Can I just modify keystone middleware code to use dogpile.cache.redis directly? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 13 09:01:48 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 13 Feb 2020 10:01:48 +0100 Subject: [openstack-dev][kuryr] Kubelet error in ARM64 In-Reply-To: <1417617907.1999997.1581577962230@mail.yahoo.com> References: <1417617907.1999997.1581577962230.ref@mail.yahoo.com> <1417617907.1999997.1581577962230@mail.yahoo.com> Message-ID: <7dcd512afa1e1c879fe66a74908f3a0c7adeeedb.camel@redhat.com> This is clearly a Kubernetes issue, you should try Kubernetes help resources [1]. [1] https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/#help-my-question-isn-t-covered-i-need-help-now On Thu, 2020-02-13 at 07:12 +0000, VeeraReddy wrote: > Hi > I am getting following error in kubelet in arm64 platform > > Feb 13 07:16:32 node-2088 kubelet[16776]: E0213 07:16:32.493458 16776 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache/index0/size: no such file or directory > Feb 13 07:16:32 node-2088 kubelet[16776]: I0213 07:16:32.493600 16776 gce.go:44] Error while reading product_name: open /sys/class/dmi/id/product_name: no such file or directory > > Help me in fixing above issue. 
> > > Regards, > Veera. From licanwei_cn at 163.com Thu Feb 13 09:01:52 2020 From: licanwei_cn at 163.com (licanwei) Date: Thu, 13 Feb 2020 17:01:52 +0800 (GMT+08:00) Subject: [Watcher]IRC meeting at 8:00 UTC today In-Reply-To: <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> References: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> Message-ID: <7190d46f.2b59.1703dc876c2.Coremail.licanwei_cn@163.com> Hi, next time i will send the mail one day in advance. hope to meet you next time! Thanks licanwei | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 02/12/2020 17:58, info at dantalion.nl wrote: Hello Lican, Sorry but I am unable to attend at such short notice as in my timezone I won't be awake unless it is for the Watcher meeting. To be able to adjust my transit schedule I will have to know a day in advance. Hope to be there next time. Kind regards, Corne Lukken On 2/12/20 3:31 AM, licanwei wrote: > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 13 09:07:06 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 13 Feb 2020 10:07:06 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1317590151.1993706.1581576773856@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> Message-ID: <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure if downgrading could help. This is a Neutron issue and I don't have much experience on such a low level. You can try asking on IRC, e.g. on #openstack-neutron. On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > Thanks mdulko, > Issue is in openvswitch, iam getting following error in switchd logs > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > Do we need to patch openvswitch to support above flow? > > My ovs version > > [root at node-2088 ~]# ovs-vsctl --version > ovs-vsctl (Open vSwitch) 2.11.0 > DB Schema 7.16.1 > > > > Regards, > Veera. > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > The controller logs are from an hour later than the CNI one. The issues > seems not to be present. > > Isn't your controller still restarting? If so try to use -p option on > `kubectl logs` to get logs from previous run. > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > Hi Mdulko, > > Below are log files: > > > > Controller Log:http://paste.openstack.org/show/789457/ > > cni : http://paste.openstack.org/show/789456/ > > kubelet : http://paste.openstack.org/show/789453/ > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > Please let me know the issue > > > > > > > > Regards, > > Veera. 
> > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > Hi, > > > > So from this run you need the kuryr-controller logs. Apparently the pod > > never got annotated with an information about the VIF. > > > > Thanks, > > Michał > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Thanks for your support. > > > > > > As you mention i removed readinessProbe and > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > on /healthz endpoint. Are those full logs, including the moment you > > > tried spawning a container there? It seems like you only pasted the > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > of kube-apiserver. That is another problem you should investigate - > > > that causes Kuryr pods to restart. > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > logs. > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Please find kuryr-cni logs > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > Hi, > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > Thanks, > > > > Michał > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > Hi, > > > > > I am trying to run kubelet in arm64 platform > > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > 2. Generated kuryr-cni-arm64 container. > > > > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > > > > > > From merlin.blom at bertelsmann.de Thu Feb 13 09:49:17 2020 From: merlin.blom at bertelsmann.de (merlin.blom at bertelsmann.de) Date: Thu, 13 Feb 2020 09:49:17 +0000 Subject: [neutron][metering] Dublicated Neutron Meter Rules in different projects kills metering Message-ID: I want to use Neutron Meter with gnocchi to report the egress bandwidth used for public traffic. 
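(For anyone who wants to reproduce the setup below: the labels and rules were created with the usual CLI calls, roughly along these lines - the names are illustrative and the exact flag spelling is per `openstack network meter rule create --help`:

  openstack network meter create public-egress
  openstack network meter rule create --egress --remote-ip-prefix 0.0.0.0/0 public-egress
  openstack network meter rule create --egress --exclude --remote-ip-prefix 10.0.0.0/8 public-egress
)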
So I created neutron meter labels and neutron meter rules to include all ipv4 traffic: +-------------------+------------------------------------------------------- ---------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------- ---------------------------------------------+ | direction | egress | | id | f2c9b9a8-0af3-40a5-a718-6e841bad111d | | is_excluded | False | | location | cloud='', project.domain_id='default', project.domain_name=, | | | project.id='80120067cd7949908e44dce45aeb7712', project.name='billing', region_name='xxx', | | | zone= | | metering_label_id | d0068fc8-4a3e-4108-aa11-e3c171d4d1e1 | | name | None | | project_id | None | | remote_ip_prefix | 0.0.0.0/0 | +-------------------+------------------------------------------------------- ---------------------------------------------+ And excluded all private nets: +-------------------+------------------------------------------------------- ---------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------- ---------------------------------------------+ | direction | egress | | id | 838c9631-665b-42b6-b1e9-539983a38573 | | is_excluded | True | | location | cloud='', project.domain_id='default', project.domain_name=, | | | project.id='80120067cd7949908e44dce45aeb7712', project.name='billing', region_name='xxx', | | | zone= | | metering_label_id | 435652e6-e985-4351-a31a-954bace9eea0 | | name | None | | project_id | None | | remote_ip_prefix | 10.0.0.0/8 | +-------------------+------------------------------------------------------- ---------------------------------------------+ It works fine for just one project but if I apply it to all projects it fails and no measures are recorded in gnocchi. The neutron-metering-agent.log shows the following warning: Feb 13 09:14:18 xxx_host neutron-metering-agent: 2020-02-13 09:14:09.648 4732 WARNING neutron.agent.linux.iptables_manager [req-4c38f1f5-2db4-4d4a-9c1f-9585b1b50427 65c6d4bdcbc7469a910f6361b7f70f27 80120067cd7949908e44dce45aeb7712 - - -] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A neutron-meter-r-28155d45-d16 -s 10.0.0.0/8 -o qg-c61bafef-ea -j RETURN I would expect that it is possible to have similar rules for different projects. What do you think? Is it part of the rule creation code? In the iptables_manager code the function is criticized: https://github.com/openstack/neutron/blob/86e4f141159072421a19080455caba1b0e fef776/neutron/agent/linux/iptables_manager.py # TODO(kevinbenton): remove this function and the next one. They are # just oversized brooms to sweep bugs under the rug!!! We generate the # rules and we shouldn't be generating duplicates. def _weed_out_duplicates(line): if line in seen_lines: thing = 'chain' if line.startswith(':') else 'rule' LOG.warning("Duplicate iptables %(thing)s detected. This " "may indicate a bug in the iptables " "%(thing)s generation code. Line: %(line)s", {'thing': thing, 'line': line}) return False seen_lines.add(line) # Leave it alone return True https://bugs.launchpad.net/neutron/+bug/1863068 Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From lyarwood at redhat.com Thu Feb 13 09:51:02 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 13 Feb 2020 09:51:02 +0000 Subject: [nova][cinder] What should the behaviour of extend_volume be with attached encrypted volumes? Message-ID: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> Hello all, The following bug was raised recently regarding a failure to extend attached encrypted volumes: Failing to extend an attached encrypted volume https://bugs.launchpad.net/nova/+bug/1861071 I've worked up a series below that resolves this for LUKSv1 volumes by taking the LUKSv1 header into account before calling Libvirt to resize the block device within the instance: https://review.opendev.org/#/q/topic:bug/1861071 This results in the instance visable block device being resized to a size just smaller than that requested through Cinder's API. My question to the list is if that behaviour is acceptable given the same call to extend an attached unencrypted volume *will* grow the instance visable block device to the requested size? Many thanks in advance, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From ignaziocassano at gmail.com Thu Feb 13 10:19:06 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 11:19:06 +0100 Subject: [queens]neutron][metadata] configuration Message-ID: Hello everyone, in my installation of Queees I am using many provider networks. I don't use openstack router but only dhcp. I would like my instances to reach the metadata agent without the 169.154.169.254 route, so I would like the provider networks to directly reach the metadata agent on the internal api vip. How can I get this configuration? Thank you Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Feb 13 10:38:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 13 Feb 2020 11:38:57 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: Hi, In case of such isolated networks, You can configure neutron to serve metadata in dhcp namespace and that it will set route to 169.254.169.254 via dhcp port’s IP address. Please check config options: [1] and [2] [1] https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.enable_isolated_metadata [2] https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.force_metadata > On 13 Feb 2020, at 11:19, Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many provider networks. I don't use openstack router but only dhcp. > I would like my instances to reach the metadata agent without the 169.154.169.254 route, so I would like the provider networks to directly reach the metadata agent on the internal api vip. > How can I get this configuration? > Thank you > Ignazio — Slawek Kaplonski Senior software engineer Red Hat From ignaziocassano at gmail.com Thu Feb 13 11:15:34 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 12:15:34 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: Hello Slawek, I do not want to use metadata in dhcp namespace. It forces option 121 and I receive all subnet routes on my instances. 
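(For reference, those two options live in the DHCP agent config; enabling them is roughly the following - a sketch only, paths as in a stock install and crudini is just one way to set them:

  crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
  crudini --set /etc/neutron/dhcp_agent.ini DEFAULT force_metadata True
  # service name as packaged on most distros
  systemctl restart neutron-dhcp-agent
)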
If I use more than 10 subnets on same vlan , it does not work because I do not receive the 169.254.169.254 routing tables due to the following error: dnsmasq-dhcp[52165]: cannot send DHCP/BOOTP option 121: no space left in packet Ignazio Il giorno gio 13 feb 2020 alle ore 11:39 Slawek Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > In case of such isolated networks, You can configure neutron to serve > metadata in dhcp namespace and that it will set route to 169.254.169.254 > via dhcp port’s IP address. > Please check config options: [1] and [2] > > [1] > https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.enable_isolated_metadata > [2] > https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.force_metadata > > > On 13 Feb 2020, at 11:19, Ignazio Cassano > wrote: > > > > Hello everyone, in my installation of Queees I am using many provider > networks. I don't use openstack router but only dhcp. > > I would like my instances to reach the metadata agent without the > 169.154.169.254 route, so I would like the provider networks to directly > reach the metadata agent on the internal api vip. > > How can I get this configuration? > > Thank you > > Ignazio > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Thu Feb 13 12:40:51 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Thu, 13 Feb 2020 13:40:51 +0100 Subject: [CINDER] Distributed storage alternatives Message-ID: Hi all. we 'd like to explore storage back end alternatives to CEPH for Openstack I am aware of GlusterFS but what would you recommend for distributed storage like Ceph and specifically for block device provisioning? Of course must be: 1. *Reliable* 2. *Fast* 3. *Capable of good performance over WAN given a good network back end* Both open source and commercial technologies and ideas are welcome. Cheers -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Feb 13 13:09:37 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 13 Feb 2020 13:09:37 +0000 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: <20200213130936.3ainb4cift5euslw@yuggoth.org> On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > Hello everyone, in my installation of Queees I am using many > provider networks. I don't use openstack router but only dhcp. I > would like my instances to reach the metadata agent without the > 169.154.169.254 route, so I would like the provider networks to > directly reach the metadata agent on the internal api vip. How can > I get this configuration? Have you tried using configdrive instead of the metadata service? It's generally more reliable. The main downside is that it doesn't change while the instance is running, so if you're wanting to use this to update routes for active instances between reboots then I suppose it wouldn't solve your problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Thu Feb 13 13:29:56 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 14:29:56 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: We are going to try it. Ignazio Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.154.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Thu Feb 13 13:52:53 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 13 Feb 2020 14:52:53 +0100 Subject: [TC] W release naming In-Reply-To: References: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> Message-ID: <4c614eaf7a21acc6d3453835d4e87518463d36be.camel@evrard.me> On Wed, 2020-02-12 at 17:52 +0000, Albert Braden wrote: > A few years ago I would have thought that this was nonsense. I still > do, but nonsense is the order of the day. We have to be careful to > not give the professionally offended an opportunity. Agreed. > > -----Original Message----- > From: Zane Bitter > Also another fun fact: if you're explaining, you're losing ;) Correct. I love the recursivity of this conversation. > TIL, thanks! Another reason to love the movie, while also not > choosing > this as the release name. Agreed. From ignaziocassano at gmail.com Thu Feb 13 15:31:40 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 16:31:40 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hello, config drive is the best solution for our situation. Thanks Ignazio Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.154.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. 
> -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Thu Feb 13 17:02:28 2020 From: james.page at canonical.com (James Page) Date: Thu, 13 Feb 2020 17:02:28 +0000 Subject: [charms] Peter Matulis -> charms-core Message-ID: Hi Team I'd like to proposed Peter Matulis for membership of the charms-core team. Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. I think he would make a valuable addition to the charms-core review team! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.ames at canonical.com Thu Feb 13 17:10:45 2020 From: david.ames at canonical.com (David Ames) Date: Thu, 13 Feb 2020 09:10:45 -0800 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: +1 Peter has made a significant impact on the quality of our documentation. Having Peter in the charms-core team will allow him to be more efficient in doing his work and continuing the positive impact on our documentation. -- David Ames On Thu, Feb 13, 2020 at 9:08 AM James Page wrote: > > Hi Team > > I'd like to proposed Peter Matulis for membership of the charms-core team. > > Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. > > I think he would make a valuable addition to the charms-core review team! > > Cheers > > James > From ignaziocassano at gmail.com Thu Feb 13 17:20:29 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 18:20:29 +0100 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Hello Alfredo, I think best opensource solution is ceph. As far as commercial solutions are concerned we are working with network appliance (netapp) and emc unity. Regards Ignazio Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha scritto: > Hi all. > we 'd like to explore storage back end alternatives to CEPH for Openstack > > I am aware of GlusterFS but what would you recommend for distributed > storage like Ceph and specifically for block device provisioning? > Of course must be: > > 1. *Reliable* > 2. *Fast* > 3. *Capable of good performance over WAN given a good network back end* > > Both open source and commercial technologies and ideas are welcome. > > Cheers > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Feb 13 17:38:30 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Feb 2020 11:38:30 -0600 Subject: [nova] API updates week 20-7 Message-ID: <1703fa17316.cd28ef0113738.2723432885367961729@ghanshyammann.com> Hello Everyone, Please find the Nova API updates of this week. Please add if I missed any BPs/API related work. API Related BP : ============ COMPLETED: 1. Add image-precache-support spec: - https://blueprints.launchpad.net/nova/+spec/image-precache-support Code Ready for Review: ------------------------------ 1. 
Nova API policy improvement - Topic: https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+(status:open+OR+status:merged) - Weekly Progress: API till D is covered and ready for review. Fixed 5 bugs in policy while working on new defaults. - Review guide over ML - http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008504.html 2. Non-Admin user can filter their instance by availability zone: - Topic: https://review.opendev.org/#/q/topic:bp/non-admin-filter-instance-by-az+(status:open+OR+status:merged) - Weekly Progress: Code under review. efried needs to remove his -2 now as spec is merge 3. Boot from volume instance rescue - Topic: https://review.opendev.org/#/q/topic:bp/virt-bfv-instance-rescue+(status:open+OR+status:merged) - Weekly Progress: Code is in progress. Lee Yarwood has removed the WIP from patches so ready for review. 4. Add action event fault details -Topic: https://review.opendev.org/#/q/topic:bp/action-event-fault-details+(status:open+OR+status:merged) - Weekly Progress: Spec is merged and code is ready for review. Specs are merged and code in-progress: ------------------------------ ------------------ - None Spec Ready for Review or Action from Author: --------------------------------------------------------- 1. Support specifying vnic_type to boot a server -Spec: https://review.opendev.org/#/c/672400/ - Weekly Progress: Stephen summarized the comment about not in scope of nova. I will remove it from this list from next report. 2. Allow specify user to reset password -Spec: https://review.opendev.org/#/c/682302/5 - Weekly Progress: One +2 on this but other are disagree on this idea. More discussion on review. 3. Support re-configure deleted_on_termination in server -Spec: https://review.openstack.org/#/c/580336/ - Weekly Progress: This is old spec and there is no consensus on this till now. Others: 1. None Bugs: ==== I started fixing policy bugs while working on policy-defaults-refresh BP. 5 bugs have been identified till now and fix up for review. - https://bugs.launchpad.net/nova/+bugs?field.tag=policy-defaults-refresh NOTE- There might be some bug which is not tagged as 'api' or 'api-ref', those are not in the above list. Tag such bugs so that we can keep our eyes. -gmann From sean.mcginnis at gmx.com Thu Feb 13 19:22:55 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 13 Feb 2020 13:22:55 -0600 Subject: [release] Release countdown for week R-13, February 10-14 In-Reply-To: <20200207135017.GA1488525@sm-workstation> References: <20200207135017.GA1488525@sm-workstation> Message-ID: <11e76513-ff3a-ab3a-4ca3-808550a17d6a@gmx.com> > General Information > ------------------- > Libraries need to be released at least once per milestone period. Next week, > the release team will propose releases for any library that has not been > otherwise released since milestone 1. PTL's and release liaisons, please watch > for these and give a +1 to acknowledge them. If there is some reason to hold > off on a release, let us know that as well. A +1 would be appreciated, but if > we do not hear anything at all by the end of the week, we will assume things > are OK to proceed. Thank you to all who have given the explicit go ahead for these releases so far. We have been able to process the majority of the milestone-2 release requests. There are still quite a few out there that we will approve tomorrow if there is no response. Please do let us know if your team is ready for these to go ahead. 
If you need a little more time (as in, by very early next week), please comment on the proposed patch to let us know to hold off on processing until we've gotten an update from you. Thanks! Sean From rosmaita.fossdev at gmail.com Thu Feb 13 20:07:52 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 13 Feb 2020 15:07:52 -0500 Subject: [cinder][nova] volume-local-cache meeting results Message-ID: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> Thanks to everyone who attended this morning, we had a productive meeting. If you missed it and want to know what happened: etherpad: https://etherpad.openstack.org/p/volume-local-cache recording: https://youtu.be/P9bouCCoqVo Liang Fang will be updating the specs to reflect what was discussed: cinder spec: https://review.opendev.org/#/c/684556/ nova spec: https://review.opendev.org/#/c/689070/ cheers, brian From whayutin at redhat.com Thu Feb 13 21:07:36 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 13 Feb 2020 14:07:36 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Wed, Feb 12, 2020 at 8:51 PM Wesley Hayutin wrote: > > > On Wed, Feb 12, 2020 at 12:39 PM Wesley Hayutin > wrote: > >> >> >> On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann >> wrote: >> >>> >>> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < >>> whayutin at redhat.com> wrote ---- >>> > >>> > >>> > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin >>> wrote: >>> > >>> > Patches are up..https://review.opendev.org/707204 >>> > https://review.opendev.org/707054 >>> > https://review.opendev.org/#/c/707062/ >>> >>> Do we still have py27 jobs on tripleo CI ? If so I think it is time to >>> drop it >>> completely as the deadline is 13th Feb. Because there might be more >>> incompatible >>> dependencies as py2 drop from them is speedup. >>> >>> NOTE: stable branch testing till rocky use the stable u-c now to avoid >>> these issues. >>> >>> -gmann >>> >>> >>> > >>> > Greetings, >>> > >>> > Most of the jobs went RED a few minutes ago.Again it's related to >>> python27. Nothing is going to pass CI until this is fixed. >>> > See: https://etherpad.openstack.org/p/ruckroversprint21 >>> > We'll update the list when we have the required patches in.Thanks >>> >>> >> OK.. folks, >> Thanks for your patience! >> The final patch to restore sanity [1] is in the gate. Most everything >> was working since yesterday unless you were patching tripleo-common. This >> is the final item we've identified. >> >> Be safe out there! >> CI == Green >> >> > Well.. a new upstream centos-7 image broke the last patch before it merged. > So.. now there are two patches that need to merge. > > https://review.opendev.org/#/c/707525 > https://review.opendev.org/#/c/707330/ > > Thank you lord for bounty of work you have provided. We are blessed. > Thanks for your patience.. All the patches that needed to merge have merged. Careful w/ rechecks at peak times in the next few days as there have been NO promotions and it will take longer and longer for each node and each container to yum update. Take care > > >> >> [1] https://review.opendev.org/#/c/707330/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Burak.Hoban at iag.com.au Thu Feb 13 21:19:16 2020 From: Burak.Hoban at iag.com.au (Burak Hoban) Date: Thu, 13 Feb 2020 21:19:16 +0000 Subject: [CINDER] Distributed storage alternatives Message-ID: Hi guys, We use Dell EMC VxFlex OS, which in its current version allows for free use and commercial (in version 3.5 a licence is needed, but its perpetual). It's similar to Ceph but more geared towards scale and performance etc (it use to be called ScaleIO). Other than that, I know of a couple sites using SAN storage, but a lot of people just seem to use Ceph. Cheers, Burak ------------------------------ Message: 2 Date: Thu, 13 Feb 2020 18:20:29 +0100 From: Ignazio Cassano To: Alfredo De Luca Cc: openstack-discuss Subject: Re: [CINDER] Distributed storage alternatives Message-ID: Content-Type: text/plain; charset="utf-8" Hello Alfredo, I think best opensource solution is ceph. As far as commercial solutions are concerned we are working with network appliance (netapp) and emc unity. Regards Ignazio Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha scritto: > Hi all. > we 'd like to explore storage back end alternatives to CEPH for > Openstack > > I am aware of GlusterFS but what would you recommend for distributed > storage like Ceph and specifically for block device provisioning? > Of course must be: > > 1. *Reliable* > 2. *Fast* > 3. *Capable of good performance over WAN given a good network back > end* > > Both open source and commercial technologies and ideas are welcome. > > Cheers > > -- > *Alfredo* > > _____________________________________________________________________ The information transmitted in this message and its attachments (if any) is intended only for the person or entity to which it is addressed. The message may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information, by persons or entities other than the intended recipient is prohibited. If you have received this in error, please contact the sender and delete this e-mail and associated material from any computer. The intended recipient of this e-mail may only use, reproduce, disclose or distribute the information contained in this e-mail and any attached files, with the permission of the sender. This message has been scanned for viruses. _____________________________________________________________________ From doka.ua at gmx.com Thu Feb 13 23:02:51 2020 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 14 Feb 2020 01:02:51 +0200 Subject: RHOSP-like installation Message-ID: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Dear colleagues, while having a good experience with Openstack on Ubuntu, we're facing a plenty of questions re RHOSP installation. The primary requirement for our team re RHOSP is to get a knowledge on RHOSP - how to install it and maintain. As far as I understand, RDO is the closest way to reach this target but which kind of installation it's better to use? - * plain RDO installation as described in generic Openstack guide at https://docs.openstack.org/install-guide/index.html (specifics in RHEL/CentOS sections) * or TripleO installation as described in http://tripleo.org/install/ * or, may be, it is possible to use RHOSP in kind of trial mode to get enough knowledge on this platform? 
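For the TripleO option, what I have found so far suggests the lab bootstrap would be roughly the following - only a sketch on my side, the release name and the virthost are placeholders, so please correct me if this is the wrong path:

  git clone https://opendev.org/openstack/tripleo-quickstart
  cd tripleo-quickstart
  bash quickstart.sh --install-deps
  # intended to bring up an undercloud plus a small overcloud in VMs on a single virt host
  bash quickstart.sh --release train --tags all $VIRTHOST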
Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're going to use in "ultraconverged" mode - as both controller and agent (compute/network/storage) nodes (controllers, though, can be in virsh-controlled VMs). In case of TripleO scenario, 4th server can be used for undercloud role. This installation is intended not for production use, but rather for learning purposes, so no special requirements for productivity. The only special requirement - to be functionally as much as close to canonical RHOSP platform. I will highly appreciate your suggestions on this issue. Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From amy at demarco.com Thu Feb 13 23:17:17 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 13 Feb 2020 17:17:17 -0600 Subject: RHOSP-like installation In-Reply-To: References: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Message-ID: Adding the discuss list back in On Thu, Feb 13, 2020 at 5:16 PM Amy Marrich wrote: > Volodymyr, > > I'm sure someone from the TripleO team will pipe in, but TripleO is closer > to RHOSP then RDO. When I was playing with it in the past I found Keith > Tenzer's blogs helpful. There might be more recent ones then this but > here's a link to one: > > > https://keithtenzer.com/2015/10/14/howto-openstack-deployment-using-tripleo-and-the-red-hat-openstack-director/ > > Thanks, > > Amy Marrich (spotz) > > On Thu, Feb 13, 2020 at 5:05 PM Volodymyr Litovka wrote: > >> Dear colleagues, >> >> while having a good experience with Openstack on Ubuntu, we're facing a >> plenty of questions re RHOSP installation. >> >> The primary requirement for our team re RHOSP is to get a knowledge on >> RHOSP - how to install it and maintain. As far as I understand, RDO is >> the closest way to reach this target but which kind of installation it's >> better to use? - >> * plain RDO installation as described in generic Openstack guide at >> https://docs.openstack.org/install-guide/index.html (specifics in >> RHEL/CentOS sections) >> * or TripleO installation as described in http://tripleo.org/install/ >> * or, may be, it is possible to use RHOSP in kind of trial mode to get >> enough knowledge on this platform? >> >> Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're >> going to use in "ultraconverged" mode - as both controller and agent >> (compute/network/storage) nodes (controllers, though, can be in >> virsh-controlled VMs). In case of TripleO scenario, 4th server can be >> used for undercloud role. This installation is intended not for >> production use, but rather for learning purposes, so no special >> requirements for productivity. The only special requirement - to be >> functionally as much as close to canonical RHOSP platform. >> >> I will highly appreciate your suggestions on this issue. >> >> Thank you. >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." -- Thomas Edison >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 13 23:31:39 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 13 Feb 2020 15:31:39 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #2 Message-ID: Hello! First off I want to say thank you to the 4 projects that have already gotten started/completed the goal! 
Good work Glance, Neutron, Searchlight, and Watcher :) For the rest of the projects, thankfully we have more time to work on this goal than for the dropping py2 goal given that its documentation we aren't restricted to it being completed by feature freeze. That said, the sooner the better :) One thing I did want to draw attention to was the discussion around adding an include for CONTRIBUTING.rst to doc/source/contributor/contributing.rst[4] and the patch I have to add it to the template[5]. I also pushed a patch to clarify doc/source/contributor/contributing.rst vs CONTRIBUTING.rst -Kendall (diablo_rojo) [1] Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [2] Docs Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 [4] TC Discussion: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2020-02-06.log.html#t2020-02-06T16:37:23 [5] Patch to add include: https://review.opendev.org/#/c/707735/ [6] contributing.rst vs CONTRIBUTING.rst clarification patch: https://review.opendev.org/#/c/707736/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Thu Feb 13 23:54:43 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 13 Feb 2020 18:54:43 -0500 Subject: RHOSP-like installation In-Reply-To: References: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Message-ID: We use RHOSP at $day_job and my TripleO lab is extremely close to our Production environment. It features the Undercloud and Overcloud topology along the same-ish templates for the deployment. You can use Ironic for metal management or config download. On Thu, Feb 13, 2020, 6:24 PM Amy Marrich wrote: > Adding the discuss list back in > > On Thu, Feb 13, 2020 at 5:16 PM Amy Marrich wrote: > >> Volodymyr, >> >> I'm sure someone from the TripleO team will pipe in, but TripleO is >> closer to RHOSP then RDO. When I was playing with it in the past I found >> Keith Tenzer's blogs helpful. There might be more recent ones then this but >> here's a link to one: >> >> >> https://keithtenzer.com/2015/10/14/howto-openstack-deployment-using-tripleo-and-the-red-hat-openstack-director/ >> >> Thanks, >> >> Amy Marrich (spotz) >> >> On Thu, Feb 13, 2020 at 5:05 PM Volodymyr Litovka >> wrote: >> >>> Dear colleagues, >>> >>> while having a good experience with Openstack on Ubuntu, we're facing a >>> plenty of questions re RHOSP installation. >>> >>> The primary requirement for our team re RHOSP is to get a knowledge on >>> RHOSP - how to install it and maintain. As far as I understand, RDO is >>> the closest way to reach this target but which kind of installation it's >>> better to use? - >>> * plain RDO installation as described in generic Openstack guide at >>> https://docs.openstack.org/install-guide/index.html (specifics in >>> RHEL/CentOS sections) >>> * or TripleO installation as described in http://tripleo.org/install/ >>> * or, may be, it is possible to use RHOSP in kind of trial mode to get >>> enough knowledge on this platform? >>> >>> Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're >>> going to use in "ultraconverged" mode - as both controller and agent >>> (compute/network/storage) nodes (controllers, though, can be in >>> virsh-controlled VMs). In case of TripleO scenario, 4th server can be >>> used for undercloud role. 
This installation is intended not for >>> production use, but rather for learning purposes, so no special >>> requirements for productivity. The only special requirement - to be >>> functionally as much as close to canonical RHOSP platform. >>> >>> I will highly appreciate your suggestions on this issue. >>> >>> Thank you. >>> >>> -- >>> Volodymyr Litovka >>> "Vision without Execution is Hallucination." -- Thomas Edison >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From agarwalvishakha18 at gmail.com Fri Feb 14 06:57:45 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Fri, 14 Feb 2020 12:27:45 +0530 Subject: [keystone] Keystone Team Update - Week of 10 February 2020 Message-ID: # Keystone Team Update - Week of 10 February 2020 ## News ### YVR PTG Keystone People can write down the topics in etherpad [1] to have a brief discussion with the team. [1] https://etherpad.openstack.org/p/yvr-ptg-keystone ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [2] [2] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 10 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 21 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli ## Bugs This week we opened 3 new bugs and closed 2. Bugs opened (3) Bug #1862802 (keystone:Wishlist): Avoid the default domain usage when the Domain is not specified in the project creation - Opened by Raildo Mascena de Sousa Filho https://bugs.launchpad.net/keystone/+bug/1862802 Bug #1862606 (keystone:Undecided): LDAP support broken if UTF8 characters in DN (python2) - Opened by Rafal Ramocki https://bugs.launchpad.net/keystone/+bug/1862606 Bug #1863098 (keystone:Undecided): Install and configure in keystone - Opened by Assassins! https://bugs.launchpad.net/keystone/+bug/1863098 Bugs closed (2) Bug #1808305 (python-keystoneclient:Won't Fix) https://bugs.launchpad.net/python-keystoneclient/+bug/1808305 Bug #1823258 (keystone:Undecided): RFE: Immutable Resources - Fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1823258 ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html This week marks the milestone 2 spec freeze. Feature proposal freeze is in week of March 9. This means code implementing approved specs needs to be in a reviewable, non-WIP state. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From eblock at nde.ag Fri Feb 14 07:37:52 2020 From: eblock at nde.ag (Eugen Block) Date: Fri, 14 Feb 2020 07:37:52 +0000 Subject: VPNaaS with multiple endpoint groups Message-ID: <20200214073752.Horde.fEamBT4aCkEnszSfxNwdzso@webmail.nde.ag> Hi all, is anyone here able to help with a vpn issue? It's not really my strong suit but I'll try to explain. In a Rocky environment (using openvswitch) a customer has setup a VPN service successfully, but that only seems to work if there's only one local and one peer endpoint group. According to the docs it should work with multiple endpoint groups, as far as I could tell the setup looks fine and matches the docs (don't create the subnet when creating the vpn service but use said endpoint groups). What we're seeing is that as soon as the vpn site connection is created with multiple endpoints only one of the destination IPs is reachable. And it seems as if it's always the first in the list of EPs (see below). This seems to be reflected in the iptables where we also only see one of the required IP ranges. Also neutron reports duplicate rules if we try to use both EPs: 2020-02-12 14:14:27.638 16275 WARNING neutron.agent.linux.iptables_manager [req-92ff6f06-3a92-4daa-aeea-9c02dc9a31c3 ba9bf239530d461baea2f6f60bd301e6 850dad648ce94dbaa5c0ea2fb450bbda - - -] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A neutron-l3-agent-POSTROUTING -s X.X.252.0/24 -d Y.Y.0.0/16 -m policy --dir out --pol ipsec -j ACCEPT These are the configured endpoints: root at control:~ # openstack vpn endpoint group list +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ | ID | Name | Type | Endpoints | +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ | 0f853567-e4bf-4019-9290-4cd9f94a9793 | peer-ep-group-1 | cidr | [u'X.X.253.0/24'] | | 152a0f9e-ce49-4769-94f1-bc0bebedd3ec | peer-ep-group-2 | cidr | [u'X.X.253.0/24', u'Y.Y.0.0/16'] | | 791ab8ef-e150-4ba0-ac2c-c044659f509e | local-ep-group1 | subnet | [u'38efad5e-0f1e-4e36-8995-74a611bfef41'] | | 810b0bf2-d258-459b-9b57-ae5b491ea612 | local-ep-group2 | subnet | [u'38efad5e-0f1e-4e36-8995-74a611bfef41', u'9e35d80f-029e-4cc1-a30b-1753f7683e16'] | | b5c79e08-41e4-441c-9ed3-9b02c2654173 | peer-ep-group-3 | cidr | [u'Y.Y.0.0/16'] | +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ Has anyone experience with this and could help me out? Another follow-up question: how can we gather some information regarding the ipsec status? Althoug there is a active tunnel we don't see anything with 'ipsec statusall', I've checked all namespaces on the control node. Any help is highly appreciated! 
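For reference, the checks so far have been along these lines, and the kernel view is probably what we will look at next (the router ID is shortened/illustrative):

  ip netns list
  ip netns exec qrouter-<router-uuid> ipsec statusall
  # kernel view of the installed IPsec SAs and policies
  ip netns exec qrouter-<router-uuid> ip xfrm state
  ip netns exec qrouter-<router-uuid> ip xfrm policy
  # connection status as reported by the Neutron API
  openstack vpn ipsec site connection list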
Best regards, Eugen From samueldmq at gmail.com Fri Feb 14 09:43:55 2020 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Fri, 14 Feb 2020 06:43:55 -0300 Subject: [Outreachy] Call for mentors, projects by Feb 25 Message-ID: Hi all, TL;DR OpenStack is participating of Outreachy! Mentors, please submit project proposals by Feb 25. More info: https://www.outreachy.org/communities/cfp/openstack/ Outreachy's goal is to support people from groups underrepresented in the technology industry. The upcoming round runs from May to August 2020. Now that we are confirmed as a participating community, we welcome experienced community contributors to help out as mentors, who need to submit project proposals. Therefore, if you have an interesting project that meets the criteria, please submit it as soon as possible. You can find more information about the program, project criteria, mentor requirements, and submit your project at the Call for mentors page [1]. I have participated twice of Outreachy as a mentor and I can tell you it was a great opportunity to get someone onboard the community and, of course, to learn from them. Besides getting work done (which is great), we happen to make our community more diverse, while helping with its continuity. Please let me know if you have any questions. Thanks, Samuel [1] https://www.outreachy.org/communities/cfp/openstack/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Fri Feb 14 13:38:35 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 14 Feb 2020 11:38:35 -0200 Subject: [tripleo] TripleO CI Summary: Sprint 42 Message-ID: Greetings, The TripleO CI team has just completed Sprint 42 / Unified Sprint 21 (Jan 23 thru Feb 12). The following is a summary of completed work during this sprint cycle: - Started refactoring Promoter code into a modular implementation w/ testing oriented design and accommodating the changes for the new promotion pipeline. - Completed building CentOS8 containers with required repositories. This is still an unofficial build as some of the repositories (like Ceph) are not from RDO/TripleO. - Refined the component pipeline design [3] w/ the new aggregated hash containing all promoted components. Continued to implement the downstream version of the component pipeline. - Translated get-hash into a separated role in ci-config repo, de-attaching from promote-hash role in config. Added support for the new component and integration jobs. - Made improvements to the collect-logs plugin as part of the shared goals for the combined CI team. - Built CI workflow to follow successful manual standalone deployment using an IPA server. The TLS CI job is not running yet and still needs to be activated. Ruck/Rover Notes: - There were at least four upstream gate outages during this sprint. All have been resolved at this time. Notes are here [4]. The planned work for the next sprint [1] extends the work started in the previous sprint and focuses on the following: - Build the CentOS8 pipeline starting with the base jobs to build containers and promote hashes. - Replicate the component jobs to all available components and build the new promotion pipeline in CentOS8. - Continue the promoter code refactoring and converting the legacy code in a modular implementation that can be tested more efficiently. - Continue the collect-logs effort to make it a shared tool used by multiple teams. - Collaborate with the RDO team in migrating 3rd party CI to vexxhost. 
The Ruck and Rover for this sprint are Wesley Hayutin (weshay) and Marios Andreou (marios). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-22 [2] https://etherpad.openstack.org/p/ruckroversprint22 [3] https://hackmd.io/5uYRmLaOTI2raTbHWsaiSQ [4] https://etherpad.openstack.org/p/ruckroversprint21 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 14 14:41:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 14 Feb 2020 08:41:04 -0600 Subject: [release] Release countdown for week R-12, February 17-21 Message-ID: Development Focus ----------------- We are now past the Ussuri-2 milestone, and entering the last development phase of the cycle. Teams should be focused on implementing planned work for the cycle. Now is a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to the end of the release cycle, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (April 2). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (April 9) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, April 9. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (April 23) * Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (May 7) Upcoming Deadlines & Dates -------------------------- Non-client library freeze: April 2 (R-6 week) Client library freeze: April 9 (R-5 week) Ussuri-3 milestone: April 9 (R-5 week) OpenDev+PTG Vancouver: June 8-11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Fri Feb 14 15:13:42 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 14 Feb 2020 09:13:42 -0600 Subject: [security] Vancouver PTG Planning - Security SIG Message-ID: Hello, It's about that time again, to start planning for the next PTG. To help gauge interest and track any topics that anyone is interested in, I've created an etherpad: https://etherpad.openstack.org/p/yvr-ptg-security-sig Please add your name if you plan to attend and any topic ideas that you are interested in. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpawlik at redhat.com Fri Feb 14 15:47:12 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Fri, 14 Feb 2020 16:47:12 +0100 Subject: [tripleo] RDO image server migration Message-ID: Hello, We are going to migrate images.rdoproject.org to our new cloud provider, starting on February 17th at 10 AM UTC. Migration should be transparent to the end user (scripts and job definition are using new image server). 
However, please keep in mind that unforeseen events may occur. Write access to the old server will be disabled, and until DNS propagation is complete you may have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Fri Feb 14 16:00:23 2020 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Fri, 14 Feb 2020 16:00:23 +0000 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: I'm +1 too. Having Peter in the charms-core team will allow him to merge documentation for the team. On Thu, Feb 13, 2020 at 5:11 PM James Page wrote: > Hi Team > > I'd like to propose Peter Matulis for membership of the charms-core team. > > Although he has not been focussed on developing the codebase since he started > contributing to the OpenStack Charms, he's made a number of significant > contributions to our documentation as well as regularly providing feedback > on updates to README's and the charm deployment guide. > > I think he would make a valuable addition to the charms-core review team! > > Cheers > > James > > -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Feb 14 16:51:01 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 17:51:01 +0100 Subject: [queens][config_drive] not working for ubuntu bionic Message-ID: Hello, I configured config drive and centos instances works fine with it. Ubuntu bionic tries to get metadata from network and cloud-init does not set hostname and does not insert keys for ssh. Any help, please Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Feb 14 17:36:35 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 14 Feb 2020 10:36:35 -0700 Subject: [tripleo] RDO image server migration In-Reply-To: References: Message-ID: FYI... Possible interruption on 3rd party ovb jobs. ---------- Forwarded message --------- From: Daniel Pawlik Date: Fri, Feb 14, 2020 at 8:47 AM Subject: [rdo-dev] [rdo-users] [infra] RDO image server migration To: Hello, We are going to migrate images.rdoproject.org to our new cloud provider, starting on February 17th at 10 AM UTC. Migration should be transparent to the end user (scripts and job definition are using new image server). However, please keep in mind that unforeseen events may occur. Write access to the old server will be disabled, and until DNS propagation is complete you may have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel _______________________________________________ dev mailing list dev at lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscribe at lists.rdoproject.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ignaziocassano at gmail.com Fri Feb 14 18:08:03 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 19:08:03 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hello Jeremy, I disabled isolated metadata in the dhcp agent and configured config drive. It works fine with centos 7 but it does not with ubuntu 18. On ubuntu 18 cloud-init tries to contact metadata on 169.254.169.254 and does not insert ssh keys :-( Ignazio Il Gio 13 Feb 2020, 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queens I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.254.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 14 18:10:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 Feb 2020 18:10:32 +0000 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: Message-ID: <20200214181032.p462ick2esyex3tv@yuggoth.org> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > Hello, I configured config drive and centos instances works fine > with it. Ubuntu bionic tries to get metadata from network and > cloud-init does not set hostname and does not insert keys for ssh. The cloud-init changelog indicates that initial support for configdrive first appeared in 0.6.3, so the versions in Ubuntu Bionic (and even Xenial) should be new enough to make use of it. In fact, the official Ubuntu bionic-updates package suite includes cloud-init 19.4 (the most recent release), so missing features/support seem unlikely. Detailed log entries from cloud-init running at boot might help in diagnosing the problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Feb 14 19:14:55 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 20:14:55 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: <20200214181032.p462ick2esyex3tv@yuggoth.org> References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hello, attached here are the cloud-init logs from the ubuntu 18 instance. Note I can mount /dev/cdrom and see metadata: mount -o ro /dev/cdrom /mnt ls -laR /mnt total 10 dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack /mnt/ec2: total 8 dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ..
dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest /mnt/ec2/2009-04-04: total 5 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json /mnt/ec2/latest: total 5 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json /mnt/openstack: total 22 dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest /mnt/openstack/2012-08-10: total 6 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json /mnt/openstack/2013-04-04: total 6 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json /mnt/openstack/2013-10-17: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2015-10-15: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2016-06-30: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2016-10-06: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/2017-02-22: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/2018-08-27: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/latest: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
-r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json Thanks Ignazio Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: > On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > > Hello, I configured config drive and centos instances works fine > > with it. Ubuntu bionic tries to get metadata from network and > > cloud-init does not set hostname and does not insert keys for ssh. > > The cloud-init changelog indicates that initial support for > configdrive first appeared in 0.6.3, so the versions in Ubuntu > Bionic (and even Xenial) should be new enough to make use of it. In > fact, the official Ubuntu bionic-updates package suite includes > cloud-init 19.4 (the most recent release), so missing > features/support seem unlikely. Detailed log entries from cloud-init > running at boot might help in diagnosing the problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cloud-init.log Type: text/x-log Size: 106195 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cloud-init-output.log Type: text/x-log Size: 6403 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Feb 14 19:26:27 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 20:26:27 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hello, at the following link you can find the cloud init logs file: https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva PS I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. Ignazio Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. > Note I can mount /dev/cdrom and see metadata: > mount -o ro /dev/cdrom /mnt > ls -laR /mnt > total 10 > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > > /mnt/ec2: > total 8 > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > /mnt/ec2/2009-04-04: > total 5 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > /mnt/ec2/latest: > total 5 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > /mnt/openstack: > total 22 > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > /mnt/openstack/2012-08-10: > total 6 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > > /mnt/openstack/2013-04-04: > total 6 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > /mnt/openstack/2013-10-17: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2015-10-15: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2016-06-30: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2016-10-06: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/2017-02-22: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/2018-08-27: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/latest: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > Thanks > Ignazio > > Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley > ha scritto: > >> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: >> > Hello, I configured config drive and centos instances works fine >> > with it. 
Ubuntu bionic tries to get metadata from network and >> > cloud-init does not set hostname and does not insert keys for ssh. >> >> The cloud-init changelog indicates that initial support for >> configdrive first appeared in 0.6.3, so the versions in Ubuntu >> Bionic (and even Xenial) should be new enough to make use of it. In >> fact, the official Ubuntu bionic-updates package suite includes >> cloud-init 19.4 (the most recent release), so missing >> features/support seem unlikely. Detailed log entries from cloud-init >> running at boot might help in diagnosing the problem. >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Feb 14 21:56:49 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 14 Feb 2020 13:56:49 -0800 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: I haven't had time to look at your logs, but I can tell you that configdrive has worked in the Ubuntu images since at least Xenial (I am pretty sure trusty, but I can't remember for sure). The Octavia project uses it exclusively for the amphora instances. I'm not sure where you got your image, but there is a setting for cloud-init that defines which data sources it will use. For Octavia we explicitly set this in our images to only poll configdrive to speed the boot process. We build our images using diskimage-builder, and this element (script) is the component we use to set the cloud-init datasource: https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources Maybe check the settings for cloud-init inside your image by grep for "datasource_list" in /etc/cloud/cloud.cfg.d/* ? Michael On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano wrote: > > Hello, at the following link you can find the cloud init logs file: > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > PS > I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. > Ignazio > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano ha scritto: >> >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. >> Note I can mount /dev/cdrom and see metadata: >> mount -o ro /dev/cdrom /mnt >> ls -laR /mnt >> total 10 >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack >> >> /mnt/ec2: >> total 8 >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest >> >> /mnt/ec2/2009-04-04: >> total 5 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json >> >> /mnt/ec2/latest: >> total 5 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json >> >> /mnt/openstack: >> total 22 >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
>> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest >> >> /mnt/openstack/2012-08-10: >> total 6 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json >> >> /mnt/openstack/2013-04-04: >> total 6 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json >> >> /mnt/openstack/2013-10-17: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2015-10-15: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2016-06-30: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2016-10-06: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/2017-02-22: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/2018-08-27: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/latest: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> Thanks >> Ignazio >> >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: >>> >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: >>> > Hello, I configured config drive and centos instances works fine >>> > with it. 
Ubuntu bionic tries to get metadata from network and >>> > cloud-init does not set hostname and does not insert keys for ssh. >>> >>> The cloud-init changelog indicates that initial support for >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu >>> Bionic (and even Xenial) should be new enough to make use of it. In >>> fact, the official Ubuntu bionic-updates package suite includes >>> cloud-init 19.4 (the most recent release), so missing >>> features/support seem unlikely. Detailed log entries from cloud-init >>> running at boot might help in diagnosing the problem. >>> -- >>> Jeremy Stanley From donny at fortnebula.com Fri Feb 14 22:07:57 2020 From: donny at fortnebula.com (Donny Davis) Date: Fri, 14 Feb 2020 17:07:57 -0500 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: On Fri, Feb 14, 2020 at 5:02 PM Michael Johnson wrote: > > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. > >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
> >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. > >>> -- > >>> Jeremy Stanley > Lately I have been using glean for my own images and I do believe the openstack CI uses it as well. Works great for me. https://docs.openstack.org/infra/glean/ -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From amy at demarco.com Fri Feb 14 23:58:54 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 14 Feb 2020 17:58:54 -0600 Subject: UC Nominations now open In-Reply-To: References: Message-ID: Just a reminder that UC nominations end aat February 16, 23:59 UTC Thanks, Amy (spotz) On Mon, Feb 3, 2020 at 9:50 AM Amy Marrich wrote: > The nomination period for the February User Committee elections is now > open. > > Any individual member of the Foundation who is an Active User Contributor > (AUC) can propose their candidacy (except the two sitting UC members > elected in the previous election). > > Self-nomination is common; no third party nomination is required. > Nominations can be made by sending an email to the > user-committee at lists.openstack.org mailing-list[0], with the subject: “UC > candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. > The email can include a description of the candidate platform. The > candidacy is then confirmed by one of the election officials, after > verification of the electorate status of the candidate. > > Criteria for AUC status can be found at > https://superuser.openstack.org/articles/auc-community/. If you are > still not sure of your status and would like to verify in advance > please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) > as we are serving as the Election Officials. > > Thanks, > > Amy Marrich (spotz) > > 0 - Please make sure you are subscribed to this list before sending in > your nomination. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 08:08:25 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 09:08:25 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Thanks Michael. I have just tried to use dpkg to reconfigure cloud init source forcing ConfigDrive it did not solve. I am going to force it with with diskimage builder. I hope this will work. 
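As a minimal sketch of what that diskimage-builder step might look like (the cloud-init-datasources element is the one Michael linked above, and DIB_CLOUD_INIT_DATASOURCES is the variable that element reads; the release and image name below are only examples):

# illustrative only: build an Ubuntu bionic image whose cloud-init reads the config drive datasource
export DIB_RELEASE=bionic
export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
disk-image-create ubuntu vm cloud-init-datasources -o ubuntu-bionic-configdrive

# inside the resulting image, the setting should show up as the datasource_list Michael mentioned
grep -r datasource_list /etc/cloud/cloud.cfg.d/

The instance still has to be booted with a config drive attached on the Nova side (e.g. force_config_drive = True in nova.conf), which already seems to be the case here since the device is mountable inside the guest.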
Ignazio Il Ven 14 Feb 2020, 22:57 Michael Johnson ha scritto: > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. > >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. 
> >>> -- > >>> Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 08:10:21 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 09:10:21 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Donny, thank you.I will try it. Ignazio Il Ven 14 Feb 2020, 23:08 Donny Davis ha scritto: > On Fri, Feb 14, 2020 at 5:02 PM Michael Johnson > wrote: > > > > I haven't had time to look at your logs, but I can tell you that > > configdrive has worked in the Ubuntu images since at least Xenial (I > > am pretty sure trusty, but I can't remember for sure). > > The Octavia project uses it exclusively for the amphora instances. > > > > I'm not sure where you got your image, but there is a setting for > > cloud-init that defines which data sources it will use. For Octavia we > > explicitly set this in our images to only poll configdrive to speed > > the boot process. > > > > We build our images using diskimage-builder, and this element (script) > > is the component we use to set the cloud-init datasource: > > > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > > > Maybe check the settings for cloud-init inside your image by grep for > > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > > > Michael > > > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > > wrote: > > > > > > Hello, at the following link you can find the cloud init logs file: > > > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > > > PS > > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > > Ignazio > > > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > >> > > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. > > >> Note I can mount /dev/cdrom and see metadata: > > >> mount -o ro /dev/cdrom /mnt > > >> ls -laR /mnt > > >> total 10 > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > > >> > > >> /mnt/ec2: > > >> total 8 > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > >> > > >> /mnt/ec2/2009-04-04: > > >> total 5 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > >> > > >> /mnt/ec2/latest: > > >> total 5 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > >> > > >> /mnt/openstack: > > >> total 22 > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
> > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > >> > > >> /mnt/openstack/2012-08-10: > > >> total 6 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > > >> > > >> /mnt/openstack/2013-04-04: > > >> total 6 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > >> > > >> /mnt/openstack/2013-10-17: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2015-10-15: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2016-06-30: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2016-10-06: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/2017-02-22: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/2018-08-27: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/latest: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> Thanks > > >> Ignazio > > >> > > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > > >>> > > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > > >>> > Hello, I configured config drive and centos instances works fine > > >>> > with it. Ubuntu bionic tries to get metadata from network and > > >>> > cloud-init does not set hostname and does not insert keys for ssh. > > >>> > > >>> The cloud-init changelog indicates that initial support for > > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > > >>> Bionic (and even Xenial) should be new enough to make use of it. In > > >>> fact, the official Ubuntu bionic-updates package suite includes > > >>> cloud-init 19.4 (the most recent release), so missing > > >>> features/support seem unlikely. Detailed log entries from cloud-init > > >>> running at boot might help in diagnosing the problem. > > >>> -- > > >>> Jeremy Stanley > > > > Lately I have been using glean for my own images and I do believe the > openstack CI uses it as well. Works great for me. > > https://docs.openstack.org/infra/glean/ > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 12:10:49 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 13:10:49 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hi Michael, solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder Regards Ignazio Il Ven 14 Feb 2020, 22:57 Michael Johnson ha scritto: > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. 
> >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. > >>> -- > >>> Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 12:14:43 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 13:14:43 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hi Jeremy, on ubuntu works if in disk rimane builder we use the variabile DIB_CLOUD_INIT_DATASOURCES with value ConfigDrive Ignazio Il Gio 13 Feb 2020, 16:31 Ignazio Cassano ha scritto: > Hello, config drive is the best solution for our situation. > Thanks > Ignazio > > Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley > ha scritto: > >> On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: >> > Hello everyone, in my installation of Queees I am using many >> > provider networks. I don't use openstack router but only dhcp. I >> > would like my instances to reach the metadata agent without the >> > 169.154.169.254 route, so I would like the provider networks to >> > directly reach the metadata agent on the internal api vip. How can >> > I get this configuration? >> >> Have you tried using configdrive instead of the metadata service? >> It's generally more reliable. The main downside is that it doesn't >> change while the instance is running, so if you're wanting to use >> this to update routes for active instances between reboots then I >> suppose it wouldn't solve your problem. >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Sat Feb 15 12:51:03 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 15 Feb 2020 12:51:03 +0000 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> On 2020-02-15 13:10:49 +0100 (+0100), Ignazio Cassano wrote: > solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder [...] I see, I didn't realize from your earlier posts that you're building your own images, but it certainly makes sense that you'd need to configure cloud-init appropriately when doing that. Thanks for following up! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Sat Feb 15 13:47:58 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 14:47:58 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> References: <20200214181032.p462ick2esyex3tv@yuggoth.org> <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> Message-ID: I realize my image for inserting heat tools inside them. Some heat features are not included in standard images. Ignazio Il Sab 15 Feb 2020, 13:57 Jeremy Stanley ha scritto: > On 2020-02-15 13:10:49 +0100 (+0100), Ignazio Cassano wrote: > > solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder > [...] > > I see, I didn't realize from your earlier posts that you're building > your own images, but it certainly makes sense that you'd need to > configure cloud-init appropriately when doing that. Thanks for > following up! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ziaul.ict2018 at gmail.com Sun Feb 16 06:21:36 2020 From: ziaul.ict2018 at gmail.com (Md. Ziaul Haque) Date: Sun, 16 Feb 2020 12:21:36 +0600 Subject: Automatically remove /var/log directory from the openstack instance Message-ID: Hello, During few days we have faced that from the instance /var/log directory log have been removed automatically after the unexpected reboot. We have faced this issue with centos and ubuntu images. Openstack version is rocky and qemu-kvm version 2.12.0 . If anybody has faced the same issue, kindly help us for solving this issue Thanks & Regards Ziaul -------------- next part -------------- An HTML attachment was scrubbed... URL: From liang.a.fang at intel.com Sun Feb 16 15:43:46 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Sun, 16 Feb 2020 15:43:46 +0000 Subject: [cinder][nova] volume-local-cache meeting results In-Reply-To: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> References: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> Message-ID: Thanks Brian organized the meeting, so Nova and Cinder experts can discuss directly, it is very efficient. Now I think most of you are very clear of the spec. I have updated the spec this weekend. Hope it meets your expectation now. Time fly, this spec has been worked/discussed on 4 months from Shanghai PTG. I continues put effort on this because I have confidence that it can greatly improve storage performance. Although not perfect, I still hope the first edition can land in U release, and let's improve it in V release. 
Regards LiangFang -----Original Message----- From: Brian Rosmaita Sent: Friday, February 14, 2020 4:08 AM To: openstack-discuss at lists.openstack.org Subject: [cinder][nova] volume-local-cache meeting results Thanks to everyone who attended this morning, we had a productive meeting. If you missed it and want to know what happened: etherpad: https://etherpad.openstack.org/p/volume-local-cache recording: https://youtu.be/P9bouCCoqVo Liang Fang will be updating the specs to reflect what was discussed: cinder spec: https://review.opendev.org/#/c/684556/ nova spec: https://review.opendev.org/#/c/689070/ cheers, brian From gmann at ghanshyammann.com Mon Feb 17 03:59:49 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 16 Feb 2020 21:59:49 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) Message-ID: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * Deadline was M-2 (R-13 week). * Things are breaking and being fixed daily. * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working on those next week. * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on stable py3.5 env. We have reverted the dropping py3.5 and working now. ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] *** Fixed and gate is green. * * Tempest tox 'all-plugin' usage issue[3] *** Fixed and gate is green. ** neutron-vpass in-tree plugin issue *** Fixed and gate is green. NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. Project wise status and need reviews: ============================ Phase-1 status: The OpenStack services have not merged the py2 drop patches: NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). * Adjutant ** https://review.opendev.org/#/c/706723/ * Masakari ** https://review.opendev.org/#/c/694551/ * Qinling ** https://review.opendev.org/#/c/694687/ Phase-2 status: By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. But we have few repos to merge the patches on priority. * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open How you can help: ============== - Review the patches. Push the patches if I missed any repo. 
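- For jobs still derived from the old tempest-full template, the switch is usually a small change in the project's .zuul.yaml, roughly like the sketch below (illustrative only; exact job and template names vary per project):

    - project:
        check:
          jobs:
            - tempest-full-py3

  or, where the derived job has to stay, explicitly enabling python3 in that job's devstack variables (e.g. USE_PYTHON3: true).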
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc [3] https://bugs.launchpad.net/tempest/+bug/1862240 [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) -gmann From amotoki at gmail.com Mon Feb 17 08:19:53 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 17 Feb 2020 17:19:53 +0900 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) In-Reply-To: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> References: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Message-ID: On Mon, Feb 17, 2020 at 1:02 PM Ghanshyam Mann wrote: > > Hello Everyone, > > Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > Highlights: > ======== > * Deadline was M-2 (R-13 week). > > * Things are breaking and being fixed daily. > > * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. > > * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. > > * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed > to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few > master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working > on those next week. > > * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. There is another bug fixed. The release notes job in stable branches was broken as the job is run against the master branch but it was run with python 2.7. The project-template in openstack-zuul-jobs was updated [1] and build-openstack-releasenotes job now runs with python3 in all stable branches. If there are jobs in stable branches which runs using the master branch, the similar change might be needed. [1] https://review.opendev.org/#/c/706825/ > > ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on > stable py3.5 env. We have reverted the dropping py3.5 and working now. > > ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] > *** Fixed and gate is green. > > * * Tempest tox 'all-plugin' usage issue[3] > *** Fixed and gate is green. > > ** neutron-vpass in-tree plugin issue > *** Fixed and gate is green. > > NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. > > > Project wise status and need reviews: > ============================ > Phase-1 status: > The OpenStack services have not merged the py2 drop patches: > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > * Adjutant > ** https://review.opendev.org/#/c/706723/ > * Masakari > ** https://review.opendev.org/#/c/694551/ > * Qinling > ** https://review.opendev.org/#/c/694687/ > > Phase-2 status: > By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. > But we have few repos to merge the patches on priority. 
> > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open > > > How you can help: > ============== > - Review the patches. Push the patches if I missed any repo. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html > [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc > [3] https://bugs.launchpad.net/tempest/+bug/1862240 > [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) > > -gmann > > From isanjayk5 at gmail.com Mon Feb 17 08:51:16 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Mon, 17 Feb 2020 14:21:16 +0530 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster Message-ID: Hello openstack-helm team, I am trying to deploy stein cinder service in my k8s cluster using persistent volume and persistent volume claim for local storage or NFS storage. However after deploying cinder in my cluster, the cinder pods remains in Init state even though the PV and PVC are created. Please look at my below post on openstack forum and guide me how to resolve this issue. https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ thank you for your help and support on this. best regards, Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinlam at gmail.com Mon Feb 17 09:04:26 2020 From: tinlam at gmail.com (Tin Lam) Date: Mon, 17 Feb 2020 03:04:26 -0600 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster In-Reply-To: References: Message-ID: Hello, Sanjay - IIRC, cinder service in OSH never supported the NFS provisioner. Can you try using the Ceph storage charts instead? Regards, Tin On Mon, Feb 17, 2020 at 2:54 AM Sanjay K wrote: > Hello openstack-helm team, > > I am trying to deploy stein cinder service in my k8s cluster using > persistent volume and persistent volume claim for local storage or NFS > storage. However after deploying cinder in my cluster, the cinder pods > remains in Init state even though the PV and PVC are created. > > Please look at my below post on openstack forum and guide me how to > resolve this issue. > > > https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ > > thank you for your help and support on this. > > best regards, > Sanjay > -- Regards, Tin Lam -------------- next part -------------- An HTML attachment was scrubbed... URL: From isanjayk5 at gmail.com Mon Feb 17 09:24:06 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Mon, 17 Feb 2020 14:54:06 +0530 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster In-Reply-To: References: Message-ID: Hi Tin, Is there any support for local storage for cinder deployment? If yes, how can I try that? Since Ceph is not part of our production deployment, I can't include Ceph in our deployment. thanks and regards, Sanjay On Mon, Feb 17, 2020 at 2:34 PM Tin Lam wrote: > Hello, Sanjay - > > IIRC, cinder service in OSH never supported the NFS provisioner. Can you > try using the Ceph storage charts instead? > > Regards, > Tin > > On Mon, Feb 17, 2020 at 2:54 AM Sanjay K wrote: > >> Hello openstack-helm team, >> >> I am trying to deploy stein cinder service in my k8s cluster using >> persistent volume and persistent volume claim for local storage or NFS >> storage. 
However after deploying cinder in my cluster, the cinder pods >> remains in Init state even though the PV and PVC are created. >> >> Please look at my below post on openstack forum and guide me how to >> resolve this issue. >> >> >> https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ >> >> thank you for your help and support on this. >> >> best regards, >> Sanjay >> > > > -- > Regards, > > Tin Lam > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Mon Feb 17 11:19:39 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Mon, 17 Feb 2020 11:19:39 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> Message-ID: <1415279454.3463025.1581938379342@mail.yahoo.com> Hi mdulko,Thanks you very much,I am able to launch Container and VM side by side in same network on arm64 platform. Issue was openvswitch kernel  Module(Enabled "geneve module" in openvswitch). Now i am able to ping from container to VM but unable to ping Vice Versa (VM to container). And also, I am able ping VM to VM (Both spawned from OpenStack dashboard). Is there any configuration to enable traffic from VM to Container? Regards, Veera. On Thursday, 13 February, 2020, 02:42:21 pm IST, wrote: We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure if downgrading could help. This is a Neutron issue and I don't have much experience on such a low level. You can try asking on IRC, e.g. on #openstack-neutron. On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > Thanks mdulko, > Issue is in openvswitch, iam getting following error in switchd logs > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > Do we need to patch openvswitch to support above flow? > > My ovs version > > [root at node-2088 ~]# ovs-vsctl --version > ovs-vsctl (Open vSwitch) 2.11.0 > DB Schema 7.16.1 > > > > Regards, > Veera. > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > The controller logs are from an hour later than the CNI one. The issues > seems not to be present. > > Isn't your controller still restarting? If so try to use -p option on > `kubectl logs` to get logs from previous run. > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > Hi Mdulko, > > Below are log files: > > > > Controller Log:http://paste.openstack.org/show/789457/ > > cni : http://paste.openstack.org/show/789456/ > > kubelet : http://paste.openstack.org/show/789453/ > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > Please let me know the issue > > > > > > > > Regards, > > Veera. > > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > Hi, > > > > So from this run you need the kuryr-controller logs. 
Apparently the pod > > never got annotated with an information about the VIF. > > > > Thanks, > > Michał > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Thanks for your support. > > > > > > As you mention i removed readinessProbe and > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > on /healthz endpoint. Are those full logs, including the moment you > > > tried spawning a container there? It seems like you only pasted the > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > of kube-apiserver. That is another problem you should investigate - > > > that causes Kuryr pods to restart. > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > logs. > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Please find kuryr-cni logs > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > Hi, > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > Thanks, > > > > Michał > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > Hi, > > > > > I am trying to run kubelet in arm64 platform > > > > > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > 2.    Generated kuryr-cni-arm64 container. > > > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 17 14:06:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 17 Feb 2020 08:06:05 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) In-Reply-To: References: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Message-ID: <1705378699e.f6be3b5d110323.5998502757333700076@ghanshyammann.com> ---- On Mon, 17 Feb 2020 02:19:53 -0600 Akihiro Motoki wrote ---- > On Mon, Feb 17, 2020 at 1:02 PM Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. > > > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > > > > Highlights: > > ======== > > * Deadline was M-2 (R-13 week). > > > > * Things are breaking and being fixed daily. 
> > > > * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. > > > > * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. > > > > * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed > > to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few > > master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working > > on those next week. > > > > * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. > > There is another bug fixed. The release notes job in stable branches > was broken as the job is run against the master branch but it was run > with python 2.7. > The project-template in openstack-zuul-jobs was updated [1] and > build-openstack-releasenotes job now runs with python3 in all stable > branches. > If there are jobs in stable branches which runs using the master > branch, the similar change might be needed. > [1] https://review.opendev.org/#/c/706825/ Thanks amotoki for updates and fix. This is very helpful. Yeah, we need to switch the master related jobs on py3 irrespective where they run. -gmann > > > > > ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on > > stable py3.5 env. We have reverted the dropping py3.5 and working now. > > > > ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] > > *** Fixed and gate is green. > > > > * * Tempest tox 'all-plugin' usage issue[3] > > *** Fixed and gate is green. > > > > ** neutron-vpass in-tree plugin issue > > *** Fixed and gate is green. > > > > NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. > > > > > > Project wise status and need reviews: > > ============================ > > Phase-1 status: > > The OpenStack services have not merged the py2 drop patches: > > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > > > * Adjutant > > ** https://review.opendev.org/#/c/706723/ > > * Masakari > > ** https://review.opendev.org/#/c/694551/ > > * Qinling > > ** https://review.opendev.org/#/c/694687/ > > > > Phase-2 status: > > By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. > > But we have few repos to merge the patches on priority. > > > > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open > > > > > > How you can help: > > ============== > > - Review the patches. Push the patches if I missed any repo. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html > > [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc > > [3] https://bugs.launchpad.net/tempest/+bug/1862240 > > [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) > > > > -gmann > > > > > From dpawlik at redhat.com Mon Feb 17 15:39:26 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Mon, 17 Feb 2020 16:39:26 +0100 Subject: [tripleo] RDO image server migration Message-ID: Hello, Todays migration of images server to the new cloud provider was not finished. 
We are planning to continue tomorrow (18th Feb), on 10 AM UTC. What was done today: - moved rhel-8 build base image to new image server What will be done tomorrow: - change DNS record - disable upload images to old host - sync old images (if some are available) Migration should be transparent to the end user. However, you have to keep in mind the unforeseen events that may occur. Write access to the old server will be disabled and until DNS propagation is not done, you could have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Mon Feb 17 17:19:36 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Mon, 17 Feb 2020 17:19:36 +0000 Subject: [blazar] Spec for flexible reservation policy enforcement Message-ID: Hi all, Last week I introduced a final version of the flexible reservation usage spec[1]. This builds on the original spec proposed by Pierre Riteau some months ago. The purpose of the functionality is to allow operators to define various limits or more sophisticated policies around advanced reservations, in order to prevent e.g., a single user from reserving all resources for an indefinite amount of time. Instead of a quota-based approach, the decisions are delegated to an external service; a future improvement could be providing a default implementation (perhaps using quotas and some default time limits) that can be deployed alongside Blazar. I would appreciate reviews from the core team and feedback from others as to the design. This work is planned for Ussuri pending spec approval. Thanks, /Jason [1]: https://review.opendev.org/#/c/707042/ -- Jason Anderson Chameleon DevOps Lead Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 17 18:45:40 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 17 Feb 2020 12:45:40 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 Message-ID: Hi, We seem to have created a bit of a problem with the latest oslo.limit release. In keeping with our general policy of bumping the major version when we release libraries without py2 support, oslo.limit got bumped to 1.0. Unfortunately, being a pre-1.0 library it should have only had the minor version bumped. This puts us in an awkward situation since the library is still under heavy development and we expect the API will change, possibly multiple times, before we're ready to commit to a stable API. We also need the ability to continue doing releases during development so we can test the library with consuming projects. I can think of a few options, although I won't guarantee all of these are even possible: * Unpublish 1.0.0 and do further pre-1.0 development on a feature branch cut from before we released 1.0.0. Once we're ready for "1.0", we merge the feature branch to master and release it as 2.0.0. * Stick a big disclaimer in the 1.0.0 docs that it is still under development and proceed to treat 1.0 the same as we would have treated a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. * Make our breaking changes as needed and just continue bumping the major version every release. 
This unfortunately makes it hard to communicate via the version when the library is ready for use. :-/ * [some better idea that you suggest :-)] Any input on the best way to handle this is greatly appreciated. Thanks. -Ben From sean.mcginnis at gmx.com Mon Feb 17 19:05:46 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 17 Feb 2020 13:05:46 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: Message-ID: On 2/17/20 12:45 PM, Ben Nemec wrote: > Hi, > > We seem to have created a bit of a problem with the latest oslo.limit > release. In keeping with our general policy of bumping the major > version when we release libraries without py2 support, oslo.limit got > bumped to 1.0. Unfortunately, being a pre-1.0 library it should have > only had the minor version bumped. > > This puts us in an awkward situation since the library is still under > heavy development and we expect the API will change, possibly multiple > times, before we're ready to commit to a stable API. We also need the > ability to continue doing releases during development so we can test > the library with consuming projects. > > I can think of a few options, although I won't guarantee all of these > are even possible: > > * Unpublish 1.0.0 and do further pre-1.0 development on a feature > branch cut from before we released 1.0.0. Once we're ready for "1.0", > we merge the feature branch to master and release it as 2.0.0. > In general, the idea of unpublishing something is a Very Bad Thing. That said, in this situation I think it's worth considering. Publishing something as 1.0 conveys something that will be used by consumers to make assumptions about the state of the library, which in this case would be very misleading. It's not easy to unpublish (nor should it be) but if we can get some infra help, we should be able to take down that library from PyPi and push up a patch to the openstack/releases repo removing the 1.0.0 release. We can then do another release patch to do a 0.2.0 (or whatever the team thinks is appropriate) to re-release the package under a version number that more accurately conveys the status of the library. > * Stick a big disclaimer in the 1.0.0 docs that it is still under > development and proceed to treat 1.0 the same as we would have treated > a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. > This is certainly an options too. And of course, everyone always reads the docs, so we should be totally safe. ;) > * Make our breaking changes as needed and just continue bumping the > major version every release. This unfortunately makes it hard to > communicate via the version when the library is ready for use. :-/ > Also an option. This does work, and there's no reason we can't have multiple major version bumps over a short period of time. But like you say, communication is an issue here, and with the current release we are communicating something that we probably shouldn't be. > * [some better idea that you suggest :-)] > > Any input on the best way to handle this is greatly appreciated. Thanks. > > -Ben > From peter.matulis at canonical.com Mon Feb 17 19:56:03 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 17 Feb 2020 14:56:03 -0500 Subject: [charms] OpenStack Charms 20.02 release is now available Message-ID: The 20.02 release of the OpenStack Charms is now available. 
This release brings several new features to the existing OpenStack Charms deployments for Queens, Rocky, Stein, Train, and many other stable combinations of Ubuntu + OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2002.html == Highlights == * New charm: manila-ganesha There is a new charm to support Ganesha for use with Manila and CephFS: manila-ganesha. This charm, as well as the requisite manila and ceph-fs charms, have been promoted to supported status. * Swift global cluster With OpenStack Newton or later, support for a global cluster with Swift is available as a tech preview. * OVN With OpenStack Train or later, support for integration with Open Virtual Network (OVN) is available as a tech preview. * MySQL8 With Ubuntu 19.10 or later, support for MySQL 8 is available as a tech preview. * New charms: watcher and watcher-dashboard There are two new charms to support Watcher, the resource optimization service for multi-tenant OpenStack-based clouds: watcher and watcher-dashboard. This is the first release of these charms and they are available as a tech preview. * Policy overrides The policy overrides feature provides operators with a mechanism to override policy defaults on a per-service basis. The last release (19.10) introduced the feature for a number of charms. This release adds support for openstack-dashboard and octavia charms. * Disabling snapshots as a boot source for the OpenStack dashboard Snapshots can be disabled as valid boot sources for launching instances in the dashboard. This is done via the new 'disable-instance-snapshot' configuration option in the openstack-dashboard charm. == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on Freenode. == Thank you == Lots of thanks to the below 49 charm contributors who squashed 53 bugs, enabled support for a new release of OpenStack, improved documentation, and added exciting new functionality! Liam Young Corey Bryant Peter Matulis Sahid Orentino Ferdjaoui Frode Nordahl Alex Kavanagh David Ames Chris MacNaughton Stamatis Katsaounis inspurericzhang Tiago Pasqualini Andrew McLeod ShangXiao Ryan Beisner James Page Felipe Reyes kangyufei Edward Hope-Morley Adam Dyess Xuan Yandong Arif Ali Chris Johnston wangfaxin Aurelien Lourot Alexandros Soumplis Jose Guedez Qitao Tytus Kurek Seyeong Kim Dongdong Tao Haw Loeung Jorge Niedbalski Qiu Fossen Yanos Angelopoulos Syed Mohammad Adnan Karim JiaSiRui Xiyue Wang Adam Dyess Jose Delarosa Alexander Balderson Andreas Jaeger Dmitrii Shcherbakov Jacek Nykis Thobias Trevisan Hemanth Nakkina Shuo Liu Drew Freiberger zhangboye Aggelos Kolaitis -- OpenStack Charms Team From doug at doughellmann.com Mon Feb 17 20:02:14 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Feb 2020 15:02:14 -0500 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: Message-ID: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> > On Feb 17, 2020, at 2:05 PM, Sean McGinnis wrote: > > On 2/17/20 12:45 PM, Ben Nemec wrote: >> Hi, >> >> We seem to have created a bit of a problem with the latest oslo.limit >> release. In keeping with our general policy of bumping the major >> version when we release libraries without py2 support, oslo.limit got >> bumped to 1.0. Unfortunately, being a pre-1.0 library it should have >> only had the minor version bumped. 
>> >> This puts us in an awkward situation since the library is still under >> heavy development and we expect the API will change, possibly multiple >> times, before we're ready to commit to a stable API. We also need the >> ability to continue doing releases during development so we can test >> the library with consuming projects. >> >> I can think of a few options, although I won't guarantee all of these >> are even possible: >> >> * Unpublish 1.0.0 and do further pre-1.0 development on a feature >> branch cut from before we released 1.0.0. Once we're ready for "1.0", >> we merge the feature branch to master and release it as 2.0.0. >> > In general, the idea of unpublishing something is a Very Bad Thing. > > That said, in this situation I think it's worth considering. Publishing > something as 1.0 conveys something that will be used by consumers to > make assumptions about the state of the library, which in this case > would be very misleading. > > It's not easy to unpublish (nor should it be) but if we can get some > infra help, we should be able to take down that library from PyPi and > push up a patch to the openstack/releases repo removing the 1.0.0 > release. We can then do another release patch to do a 0.2.0 (or whatever > the team thinks is appropriate) to re-release the package under a > version number that more accurately conveys the status of the library. I’m not 100% sure, but I think if you remove a release from PyPI you can’t release again using that version number. So a future stable release would have to be 1.1.0, or something like that. > >> * Stick a big disclaimer in the 1.0.0 docs that it is still under >> development and proceed to treat 1.0 the same as we would have treated >> a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. >> > This is certainly an options too. And of course, everyone always reads > the docs, so we should be totally safe. ;) > >> * Make our breaking changes as needed and just continue bumping the >> major version every release. This unfortunately makes it hard to >> communicate via the version when the library is ready for use. :-/ >> > Also an option. This does work, and there's no reason we can't have > multiple major version bumps over a short period of time. But like you > say, communication is an issue here, and with the current release we are > communicating something that we probably shouldn't be. When is the next breaking release likely to be? >> * [some better idea that you suggest :-)] >> >> Any input on the best way to handle this is greatly appreciated. Thanks. >> >> -Ben From amotoki at gmail.com Mon Feb 17 20:08:49 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 18 Feb 2020 05:08:49 +0900 Subject: [neutron] bug deputy report (Feb 10-17) Message-ID: Hi, Here is a neutron bug deputy report last week. While 15 new bugs were reported, several bugs needs further investigation and are not triaged yet. I will look into them but it would be great if they need attentions. --- Undecided - https://bugs.launchpad.net/neutron/+bug/1862611 Neutron try to register invalid host to nova aggregate for ironic routed network Undecided It was covered by the last week report, but it needs more eyes familiar with routed network. - https://bugs.launchpad.net/neutron/+bug/1862851 update_floatingip_statuses: StaleDataError: UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 were matched. Undecided Needs to be checked by folks familiar with DVR. It looks like a race condition. 
- https://bugs.launchpad.net/neutron/+bug/1862932 [neutron-bgp-dragent] passive agents send wrong number of routes Undecided Needs attention by folks familiar with dynamic-routing stuff. - https://bugs.launchpad.net/neutron/+bug/1863068 Dublicated Neutron Meter Rules in different projects kills metering Undecided - https://bugs.launchpad.net/neutron/+bug/1863091 IPVS setup fails with openvswitch firewall driver, works with iptables_hybrid Undecided - https://bugs.launchpad.net/neutron/+bug/1863110 2/3 snat namespace transitions to master Undecided - https://bugs.launchpad.net/neutron/+bug/1863201 stein regression listing security group rules Undecided - https://bugs.launchpad.net/neutron/+bug/1863213 Spawning of DHCP processes fail: invalid netcat options Undecided ralonsoh assigned himself. Any update on this? Incomplete - https://bugs.launchpad.net/neutron/+bug/1863206 Port is reported with 'port_security_enabled=True' without port-security extension Incomplete I think the bug author misunderstood the default of port-security extension behavior. Double check would be appreciated. Confirmed - https://bugs.launchpad.net/neutron/+bug/1863577 [ovn] tempest.scenario.test_network_v6.TestGettingAddress tests failing 100% times Confirmed, High In Progress - https://bugs.launchpad.net/neutron/+bug/1862618 [OVN] functional test test_virtual_port_delete_parents is unstable In Progress, Medium - https://bugs.launchpad.net/neutron/+bug/1862703 Neutron remote security group does not work In Progress, High - https://bugs.launchpad.net/neutron/+bug/1862648 [OVN] Reduce the number of tables watched by MetadataProxyHandler High, Fix Released - https://bugs.launchpad.net/neutron/+bug/1862893 [OVN]Updating a QoS policy for a port will cause a KeyEerror In Progress, Low Fix Released - https://bugs.launchpad.net/neutron/+bug/1862927 "ncat" rootwrap filter is missing Fix Released The root cause turned out that we need to specify rootwrap filter config by abspath. From fungi at yuggoth.org Mon Feb 17 20:42:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 Feb 2020 20:42:34 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> Message-ID: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: [...] > I’m not 100% sure, but I think if you remove a release from PyPI > you can’t release again using that version number. So a future > stable release would have to be 1.1.0, or something like that. [...] More accurately, you can't republish the same filename to PyPI even if it's been previously deleted. You could however publish a oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz though that seems a bit of a messy workaround. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Mon Feb 17 22:15:27 2020 From: openstack at fried.cc (Eric Fried) Date: Mon, 17 Feb 2020 16:15:27 -0600 Subject: [nova] Ussuri feature scrub Message-ID: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Nova maintainers and contributors- { Please refer to this ML thread [1][2] for background. } Now that spec freeze has passed, I would like to assess the Design:Approved blueprints and understand how many we could reasonably expect to land in Ussuri. 
We completed 25 blueprints in Train. However, mriedem is gone, and it is likely that I will drop off the radar after Q1. Obviously all blueprints/releases/reviews/etc. are not created equal, but using stackalytics review numbers as a rough heuristic, I expect about 20 blueprints to get completed in Ussuri. If we figure that 5-ish of the incompletes will be due to factors beyond our control, that would mean we should Direction:Approve about 25. As of this writing: - 30 blueprints are targeted for ussuri [3]. Of these, - 7 are already implemented. Of the remaining 23, - 2 are not yet Design:Approved. These will need an exception if they are to proceed. And - 19 (including the unapproved ones) have code in various stages. I would like to see us cut 5-ish of the 30. I have made an etherpad [4] with the unimplemented blueprints listed with owners and code links. I made notes on some of the ones I would like to see prioritized, and a couple on which I'm more meh. If you have a stake in Nova/Ussuri, I encourage you to weigh in. How will we ultimately decide? Will we actually cut anything? I don't have the answers yet. Let's go through this exercise and see if anything obvious falls out, and then we can figure out the next steps. Thanks, efried [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009832.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/thread.html#9835 [3] https://blueprints.launchpad.net/nova/ussuri [4] https://etherpad.openstack.org/p/nova-ussuri-planning From er.gauravgoyal at gmail.com Mon Feb 17 21:57:51 2020 From: er.gauravgoyal at gmail.com (Gaurav Goyal) Date: Mon, 17 Feb 2020 16:57:51 -0500 Subject: [Ask OpenStack] 7 updates about "galera", "rabbiitmq", "auth_token", "keysone", "swift-proxy" and more In-Reply-To: <20200213032627.13937.95474@ask01.openstack.org> References: <20200213032627.13937.95474@ask01.openstack.org> Message-ID: Dear Openstack Experts, We are using a big openstack setup in our organization and different operation teams adds new computes nodes to it. We as system admin of this Openstack environment wants to audit the configuration parameters on all the nodes. Can you please help to suggest a better way (Tools) to audit this environment? Awaiting your kind reply. Regards Gaurav Goyal On Wed, Feb 12, 2020 at 10:26 PM wrote: > Dear Gaurav Goyal, > > Ask OpenStack has these updates, please have a look: > > - Kubernetes cluster created with Magnum doesn't work > (new > question) > - how do i list the top objects consuming more space in the swift > container > (new > question) > - action requests is empty > (new > question) > - How to Migrate OpenStack Instance from one Tenant to another > (new > question) > - Architecture of keystone, swift and memcached together > (3 > rev) > - why is octavia not using keystone public endpoint to validate tokens? > (new > question) > - Error while launching instance on openstack > (2 > rev, 1 ans, 3 ans rev) > > To change frequency and content of these alerts, please visit your user > profile > . > To unsubscribe, visit this page > > If you believe that this message was sent in an error, please email about > it the forum administrator at communitymngr at openstack.org. > ------------------------------ > > Sincerely, > Ask OpenStack Administrator > > To unsubscribe: there's a big "Stop Email" button in the link to your user > profile above. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Mon Feb 17 23:18:42 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 17 Feb 2020 17:18:42 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: On 2/17/20 2:42 PM, Jeremy Stanley wrote: > On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > [...] >> I’m not 100% sure, but I think if you remove a release from PyPI >> you can’t release again using that version number. So a future >> stable release would have to be 1.1.0, or something like that. > [...] > > More accurately, you can't republish the same filename to PyPI even > if it's been previously deleted. You could however publish a > oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > though that seems a bit of a messy workaround. > This seems sensible - it would be kind of like rewriting history in a git repo to re-release 1.0 with different content. I'm also completely fine with having to use a different release number for our eventual 1.0 release. It may make our release version checks unhappy, but since this is (hopefully) not a thing we'll be doing regularly I imagine we can find a way around that. If we can pull the 1.0.0 release that would be ideal since as Sean mentioned people aren't good about reading docs and a 1.0 implies some things that aren't true here. From liang.a.fang at intel.com Tue Feb 18 01:18:32 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Tue, 18 Feb 2020 01:18:32 +0000 Subject: [nova][sfe] Support volume local cache Message-ID: Hi We would like to have a spec freeze exception for the spec: Support volume local cache [1]. This is part of cross project contribution, with another spec in cinder [2]. I will attend the Nova meeting on February 20 2020 1400 UTC. [1] https://review.opendev.org/#/c/689070/ [2] https://review.opendev.org/#/c/684556/ Regards Liang -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Tue Feb 18 02:57:28 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 18 Feb 2020 02:57:28 +0000 Subject: [nova][sfe] Support re-configure deleted_on_termination in server Message-ID: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> Hi, nova: We would like to have a spec freeze exception for the spec: Support re-configure deleted_on_termination in server [1], and it’s PoC code in [2] I will attend the nova meeting on February 20 2020 1400 UTC as much as possible. [1] https://review.opendev.org/#/c/580336/ [2] https://review.opendev.org/#/c/693828/ brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Feb 18 04:30:23 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 17 Feb 2020 23:30:23 -0500 Subject: [kolla][zun] Zun image source Message-ID: Hi Kolla team, I was looking into the CentOS Zun image downloaded from DockerHub. It looks the source code is from stable/train branch: $ docker run kolla/centos-source-zun-compute:master cat zun/zun.egg-info/pbr.json {"git_version": "25e56636", "is_release": false} The git_version points to the stable/train branch, but I think it should point to the master branch. 
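For anyone who wants to reproduce the check, one way (illustrative only, assuming a local clone of the zun repository) is to ask git which branches contain that commit:

$ git clone https://opendev.org/openstack/zun && cd zun
$ git branch -r --contains 25e56636

which for the CentOS image above would be expected to list origin/stable/train rather than origin/master.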
FWIW, I also checked the Ubuntu image and the git version in there looks correct: $ docker run kolla/ubuntu-source-zun-compute:master cat zun/zun.egg-info/pbr.json {"git_version": "6fbf52ae", "is_release": false} Any suggestion? Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Feb 18 06:40:37 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 Feb 2020 07:40:37 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: At first glance the documentation don't seem to quote the version, can you confirm that point. If we decide to drop the pypi version we also need to be sure to keep the documentation version aligned with the latest version available of the doc. If the doc version is only represented by "latest" I don't think that's an issue here. Always in a situation where we decide to drop the pypi version, what's about this doc? (It will become false) => https://releases.openstack.org/ussuri/index.html#ussuri-oslo-limit Could be worth to initiate a doc to track things that should be updated too to don't missing something. Le mar. 18 févr. 2020 à 00:22, Ben Nemec a écrit : > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > > On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > > [...] > >> I’m not 100% sure, but I think if you remove a release from PyPI > >> you can’t release again using that version number. So a future > >> stable release would have to be 1.1.0, or something like that. > > [...] > > > > More accurately, you can't republish the same filename to PyPI even > > if it's been previously deleted. You could however publish a > > oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > > though that seems a bit of a messy workaround. > > > > This seems sensible - it would be kind of like rewriting history in a > git repo to re-release 1.0 with different content. I'm also completely > fine with having to use a different release number for our eventual 1.0 > release. It may make our release version checks unhappy, but since this > is (hopefully) not a thing we'll be doing regularly I imagine we can > find a way around that. > > If we can pull the 1.0.0 release that would be ideal since as Sean > mentioned people aren't good about reading docs and a 1.0 implies some > things that aren't true here. > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ccamacho at redhat.com Tue Feb 18 07:01:57 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Tue, 18 Feb 2020 08:01:57 +0100 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Hi, Here you have the video published in case you want to see it[1], sorry for the delay but I had some issues related to the audio codecs when processing the video. [1]: https://youtu.be/D18RaSBGyQU Cheers, Carlos. On Fri, Feb 7, 2020 at 4:23 PM Carlos Camacho Gonzalez wrote: > Thanks Emilien for the session, > > I wasn't able to be present but I'll proceed to edit/publish it in the > TripleO youtube channel. > > Thanks! > > On Fri, Feb 7, 2020 at 4:21 PM Emilien Macchi wrote: > >> Thanks for joining, and the great questions, I hope you learned >> something, and that we can do it again soon. >> >> Here is the recording: >> https://bluejeans.com/s/vTSAY >> Slides: >> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit >> >> >> >> On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: >> >>> Of course it'll be recorded and the link will be available for everyone. >>> >>> On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, >>> wrote: >>> >>>> Hi folks, >>>> >>>> On Friday I'll do a deep-dive on where we are with container tools. >>>> It's basically an update on the removal of Paunch, what will change etc. >>>> >>>> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >>>> questions or give feedback. >>>> >>>> https://bluejeans.com/6007759543 >>>> Link of the slides: >>>> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >>>> -- >>>> Emilien Macchi >>>> >>> >> >> -- >> Emilien Macchi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Feb 18 08:56:55 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 18 Feb 2020 09:56:55 +0100 Subject: [kolla][zun] Zun image source In-Reply-To: References: Message-ID: Hi Hongbin, it's confusing but Ussuri branches of many projects stopped working on CentOS 7, hence we pinned it to Train for the time being. The CentOS 8 images properly use the Ussuri (master) branches, as do other distros. CentOS 8 images will soon replace CentOS 7 ones on master and the confusion will finally be gone. -yoctozepto wt., 18 lut 2020 o 05:39 Hongbin Lu napisał(a): > > Hi Kolla team, > > I was looking into the CentOS Zun image downloaded from DockerHub. It looks the source code is from stable/train branch: > > $ docker run kolla/centos-source-zun-compute:master cat zun/zun.egg-info/pbr.json > {"git_version": "25e56636", "is_release": false} > > The git_version points to the stable/train branch, but I think it should point to the master branch. FWIW, I also checked the Ubuntu image and the git version in there looks correct: > > $ docker run kolla/ubuntu-source-zun-compute:master cat zun/zun.egg-info/pbr.json > {"git_version": "6fbf52ae", "is_release": false} > > Any suggestion? 
> > Best regards, > Hongbin From thierry at openstack.org Tue Feb 18 10:23:18 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 11:23:18 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Ben Nemec wrote: > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: >> [...] >>> I’m not 100% sure, but I think if you remove a release from PyPI >>> you can’t release again using that version number. So a future >>> stable release would have to be 1.1.0, or something like that. >> [...] >> >> More accurately, you can't republish the same filename to PyPI even >> if it's been previously deleted. You could however publish a >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz >> though that seems a bit of a messy workaround. >> > > This seems sensible - it would be kind of like rewriting history in a > git repo to re-release 1.0 with different content. I'm also completely > fine with having to use a different release number for our eventual 1.0 > release. It may make our release version checks unhappy, but since this > is (hopefully) not a thing we'll be doing regularly I imagine we can > find a way around that. > > If we can pull the 1.0.0 release that would be ideal since as Sean > mentioned people aren't good about reading docs and a 1.0 implies some > things that aren't true here. As others suggested, the simplest is probably to remove 1.0.0 from PyPI and releases.o.o, and then wait until the API is stable to push a 2.0.0 tag. That way we don't break anything (the tag stays, we still increment releases, we do not rewrite history, we do not use weird post1 bits) but just limit the diffusion of the confusing 1.0.0 artifact. I'm not sure a feature branch is really needed ? -- Thierry Carrez (ttx) From moguimar at redhat.com Tue Feb 18 10:34:37 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Tue, 18 Feb 2020 11:34:37 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: If removing 1.0.0 is the way we choose to go, people who already have 1.0.0 won't be able to get "newer" 0.x.y versions. We will need an announcement to blacklist 1.0.0. Then, when the time comes to finally make it stable, we can choose to either go 2.0.0 or 1.0.1. We should specifically put in the installation page instructions to blacklist 1.0.0 in requirements files. On Tue, Feb 18, 2020 at 11:24 AM Thierry Carrez wrote: > Ben Nemec wrote: > > > > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > >> [...] > >>> I’m not 100% sure, but I think if you remove a release from PyPI > >>> you can’t release again using that version number. So a future > >>> stable release would have to be 1.1.0, or something like that. > >> [...] > >> > >> More accurately, you can't republish the same filename to PyPI even > >> if it's been previously deleted. You could however publish a > >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > >> though that seems a bit of a messy workaround. 
> >> > > > > This seems sensible - it would be kind of like rewriting history in a > > git repo to re-release 1.0 with different content. I'm also completely > > fine with having to use a different release number for our eventual 1.0 > > release. It may make our release version checks unhappy, but since this > > is (hopefully) not a thing we'll be doing regularly I imagine we can > > find a way around that. > > > > If we can pull the 1.0.0 release that would be ideal since as Sean > > mentioned people aren't good about reading docs and a 1.0 implies some > > things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a 2.0.0 > tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > > -- > Thierry Carrez (ttx) > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 18 10:40:28 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 11:40:28 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Moises Guimaraes de Medeiros wrote: > If removing 1.0.0 is the way we choose to go, people who already have > 1.0.0 won't be able to get "newer" 0.x.y versions. Indeed. Do we need to release 0.x.y versions before the API stabilizes, though ? Do we expect anything to use it ? If we really do, then the least confusing might be to keep 1.0.0 and bump major every time you release a breaking change. -- Thierry Carrez (ttx) From lyarwood at redhat.com Tue Feb 18 11:06:58 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 18 Feb 2020 11:06:58 +0000 Subject: [nova][cinder] What should the behaviour of extend_volume be with attached encrypted volumes? In-Reply-To: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> References: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> Message-ID: <20200218110643.pyxnrb67pbpcqajn@lyarwood.usersys.redhat.com> On 13-02-20 09:51:02, Lee Yarwood wrote: > Hello all, > > The following bug was raised recently regarding a failure to extend > attached encrypted volumes: > > Failing to extend an attached encrypted volume > https://bugs.launchpad.net/nova/+bug/1861071 > > I've worked up a series below that resolves this for LUKSv1 volumes by > taking the LUKSv1 header into account before calling Libvirt to resize > the block device within the instance: > > https://review.opendev.org/#/q/topic:bug/1861071 > > This results in the instance visable block device being resized to a > size just smaller than that requested through Cinder's API. > > My question to the list is if that behaviour is acceptable given the > same call to extend an attached unencrypted volume *will* grow the > instance visable block device to the requested size? Bumping the thread as I'm still looking for input. 
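(For context on the size difference being discussed: the LUKSv1 header consumes the start of the volume, so the guest-visible device after an extend is the requested size minus the payload offset. A rough illustration, assuming the common 4096-sector offset; the real value should be read from the header, and /dev/sdX is only a placeholder:)

  $ sudo cryptsetup luksDump /dev/sdX | grep -i 'payload offset'
  Payload offset: 4096
  $ echo $(( 10 * 1024**3 - 4096 * 512 ))   # a 10GiB extend leaves roughly 10GiB - 2MiB visible
  10735321088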
The above topic is ready for review now so if I don't hear any objections I'll move forward with the current approach of making the user visible block device smaller within the instance. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Tue Feb 18 11:09:35 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 Feb 2020 12:09:35 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: Le mar. 18 févr. 2020 à 11:42, Thierry Carrez a écrit : > Moises Guimaraes de Medeiros wrote: > > If removing 1.0.0 is the way we choose to go, people who already have > > 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. > Could be a more proper solution. > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Tue Feb 18 11:09:52 2020 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 18 Feb 2020 11:09:52 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On Tue, 2020-02-18 at 11:40 +0100, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: > > If removing 1.0.0 is the way we choose to go, people who already have > > 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. Agreed. 
I don't imageine anyone is using this yet, and so long as we use a future major version to indicate breaking changes, what would it matter even if they were? I think we should just keep 1.0.0 and remember to release 2.0.0 when it's done, personally. Certainly a lot less work. Stephen From zhengyupann at 163.com Tue Feb 18 11:12:39 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Tue, 18 Feb 2020 19:12:39 +0800 (CST) Subject: [neutron] Can br-ex and br-tun use the same interface? Message-ID: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> hi, I have only two physical interfaces. In my deploying, network node and compute node are the same. Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the other interface? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Tue Feb 18 11:36:50 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 18 Feb 2020 12:36:50 +0100 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Thanks Burak and Ignazio. Appreciate it On Thu, Feb 13, 2020 at 10:19 PM Burak Hoban wrote: > Hi guys, > > We use Dell EMC VxFlex OS, which in its current version allows for free > use and commercial (in version 3.5 a licence is needed, but its perpetual). > It's similar to Ceph but more geared towards scale and performance etc (it > use to be called ScaleIO). > > Other than that, I know of a couple sites using SAN storage, but a lot of > people just seem to use Ceph. > > Cheers, > > Burak > > ------------------------------ > > Message: 2 > Date: Thu, 13 Feb 2020 18:20:29 +0100 > From: Ignazio Cassano > To: Alfredo De Luca > Cc: openstack-discuss > Subject: Re: [CINDER] Distributed storage alternatives > Message-ID: > < > CAB7j8cXLQWh5fx-E9AveUEa6OncDwCL6BOGc-Pm2TX4FKwnUKg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hello Alfredo, I think best opensource solution is ceph. > As far as commercial solutions are concerned we are working with network > appliance (netapp) and emc unity. > Regards > Ignazio > > Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha > scritto: > > > Hi all. > > we 'd like to explore storage back end alternatives to CEPH for > > Openstack > > > > I am aware of GlusterFS but what would you recommend for distributed > > storage like Ceph and specifically for block device provisioning? > > Of course must be: > > > > 1. *Reliable* > > 2. *Fast* > > 3. *Capable of good performance over WAN given a good network back > > end* > > > > Both open source and commercial technologies and ideas are welcome. > > > > Cheers > > > > -- > > *Alfredo* > > > > > > _____________________________________________________________________ > > The information transmitted in this message and its attachments (if any) > is intended > only for the person or entity to which it is addressed. > The message may contain confidential and/or privileged material. Any > review, > retransmission, dissemination or other use of, or taking of any action in > reliance > upon this information, by persons or entities other than the intended > recipient is > prohibited. > > If you have received this in error, please contact the sender and delete > this e-mail > and associated material from any computer. > > The intended recipient of this e-mail may only use, reproduce, disclose or > distribute > the information contained in this e-mail and any attached files, with the > permission > of the sender. 
> > This message has been scanned for viruses. > _____________________________________________________________________ > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Feb 18 11:39:25 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 11:39:25 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> Message-ID: <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > hi, > I have only two physical interfaces. In my deploying, network node and compute node are the same. > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the other > interface? yes they can. the way this works is that when ovs encapsulates the packet, the vxlan tunnel endpoint ip is used to look up which interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need to assign the tunnel endpoint ip to br-ex.
ovs has a special operation at the dataplane level call out_port which is >different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, >in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via patch >ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > >so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. >that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont know >if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > >not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all packets >that use vxlan will be sent via the kernel which will significantly reduce performance. > >im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 18 12:29:35 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 13:29:35 +0100 Subject: [Release-job-failures] Release of openstack/puppet-keystone for ref refs/tags/16.1.0 failed In-Reply-To: References: Message-ID: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> We had a series of failures uploading Puppet artifacts to the forge: Forge API failed to upload tarball with code: 500 errors: An error was encountered while processing your request, please try again later. This affected: openstack/puppet-ceilometer (16.1.0) https://zuul.opendev.org/t/openstack/build/5b0b890d777545c098ea88b70eb89b52 openstack/puppet-panko (16.1.0) https://zuul.opendev.org/t/openstack/build/073cbb3b72ac4241aa1802ccbaf4ada5 openstack/puppet-murano (16.1.0) https://zuul.opendev.org/t/openstack/build/b07a78bd59c84176bb7d9d2f1e0f635c openstack/puppet-keystone (16.1.0) https://zuul.opendev.org/t/openstack/build/be7510a717d3413ab008ab3d34db3672 As a result the tarballs were not uploaded to tarballs.openstack.org either. I think we should reenqueue the tag reference to retrigger the release-openstack-puppet job for those ? -- Thierry Carrez (ttx) From smooney at redhat.com Tue Feb 18 12:30:17 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 12:30:17 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> Message-ID: On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: > Hi, > Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds tunnel > ip in br-ex? no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. if you do not have a patch port between br-ex and br-int then yes you shoudl create one. you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. they should all connect to br-int but not to each other. 
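(A minimal sketch of the layout described above, with example names only: eth1 as the shared physical NIC and 192.168.100.42 as the VXLAN local_ip. The br-int<->br-ex and br-int<->br-tun patch ports are created by neutron-openvswitch-agent itself when bridge_mappings is set; nothing should patch br-ex and br-tun to each other.)

  $ ovs-vsctl --may-exist add-br br-ex
  $ ovs-vsctl --may-exist add-port br-ex eth1
  $ ip addr add 192.168.100.42/24 dev br-ex
  $ ip link set br-ex up

  # openvswitch_agent.ini (example values)
  [ovs]
  bridge_mappings = physnet1:br-ex
  local_ip = 192.168.100.42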
regarding the ip i alwasy just configruied it on the br-ex local bridge port so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. you can obviously do that with network manager or systemd network script too. just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and your tunnel traffic will use that interface as long as the routing table identifs it as the correct interface. if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead of other options. normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have other interfaces in the same range i just mention that above incase you have a non standard deployment. > > > -- > > Thanks. > Zhengyu > > > > At 2020-02-18 18:39:25, "Sean Mooney" wrote: > > On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > > > hi, > > > I have only two physical interfaces. In my deploying, network node and compute node are the same. > > > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the > > > other > > > interface? > > > > yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup > > what > > interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need > > to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port which is > > different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, > > in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via > > patch > > ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > > > > so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. > > that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont > > know > > if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > > > > not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all > > packets > > that use vxlan will be sent via the kernel which will significantly reduce performance. > > > > im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. From skaplons at redhat.com Tue Feb 18 12:55:29 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 18 Feb 2020 13:55:29 +0100 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> Message-ID: <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> Hi, > On 18 Feb 2020, at 13:30, Sean Mooney wrote: > > On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: >> Hi, >> Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds tunnel >> ip in br-ex? 
> no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int > via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. > if you do not have a patch port between br-ex and br-int then yes you shoudl create one. Patch ports between br-int and all external bridges defined in bridge_mappings are created automatically by neutron-ovs-agent: https://github.com/openstack/neutron/blob/8ba44d672059e2dbea6a0516e5832cec40800a77/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1420 > > you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. > they should all connect to br-int but not to each other. > > regarding the ip i alwasy just configruied it on the br-ex local bridge port > so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. > you can obviously do that with network manager or systemd network script too. > > just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and > your tunnel traffic will use that interface as long as the routing table identifs it as the correct > interface. > > if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed > you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead of > other options. > > normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have other > interfaces in the same range i just mention that above incase you have a non standard deployment. >> >> >> -- >> >> Thanks. >> Zhengyu >> >> >> >> At 2020-02-18 18:39:25, "Sean Mooney" wrote: >>> On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: >>>> hi, >>>> I have only two physical interfaces. In my deploying, network node and compute node are the same. >>>> Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the >>>> other >>>> interface? >>> >>> yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup >>> what >>> interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need >>> to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port which is >>> different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, >>> in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via >>> patch >>> ports i it will use the out_port action to skip sending the packet via the kernel networking stack. >>> >>> so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. >>> that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont >>> know >>> if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. >>> >>> not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all >>> packets >>> that use vxlan will be sent via the kernel which will significantly reduce performance. >>> >>> im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. 
> > — Slawek Kaplonski Senior software engineer Red Hat From smooney at redhat.com Tue Feb 18 13:01:51 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 13:01:51 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> Message-ID: <81d065c5b3cc83b4a6b12da720ce5424e329dd34.camel@redhat.com> On Tue, 2020-02-18 at 13:55 +0100, Slawek Kaplonski wrote: > Hi, > > > On 18 Feb 2020, at 13:30, Sean Mooney wrote: > > > > On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: > > > Hi, > > > Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds > > > tunnel > > > ip in br-ex? > > > > no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int > > via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. > > if you do not have a patch port between br-ex and br-int then yes you shoudl create one. > > Patch ports between br-int and all external bridges defined in bridge_mappings are created automatically by neutron- > ovs-agent: > https://github.com/openstack/neutron/blob/8ba44d672059e2dbea6a0516e5832cec40800a77/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1420 yep they shoudl be. however if you have not configured the bridge mapping becasue you are not usign provider networks you might not have one. that said i have always configure them via the bridge mappings. > > > > > you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. > > they should all connect to br-int but not to each other. > > > > regarding the ip i alwasy just configruied it on the br-ex local bridge port > > so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. > > you can obviously do that with network manager or systemd network script too. > > > > just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and > > your tunnel traffic will use that interface as long as the routing table identifs it as the correct > > interface. > > > > if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed > > you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead > > of > > other options. > > > > normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have > > other > > interfaces in the same range i just mention that above incase you have a non standard deployment. > > > > > > > > > -- > > > > > > Thanks. > > > Zhengyu > > > > > > > > > > > > At 2020-02-18 18:39:25, "Sean Mooney" wrote: > > > > On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > > > > > hi, > > > > > I have only two physical interfaces. In my deploying, network node and compute node are the same. > > > > > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the > > > > > other > > > > > interface? > > > > > > > > yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup > > > > what > > > > interface to transmit the packet on. 
so to use the same interface for both tunnels and provider networks you > > > > need > > > > to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port > > > > which is > > > > different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a > > > > bridge, > > > > in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via > > > > patch > > > > ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > > > > > > > > so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. > > > > that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i > > > > dont > > > > know > > > > if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > > > > > > > > not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all > > > > packets > > > > that use vxlan will be sent via the kernel which will significantly reduce performance. > > > > > > > > im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > From fungi at yuggoth.org Tue Feb 18 15:16:26 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Feb 2020 15:16:26 +0000 Subject: [Release-job-failures] Release of openstack/puppet-keystone for ref refs/tags/16.1.0 failed In-Reply-To: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> References: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> Message-ID: <20200218151626.wmjnmtujzr46tril@yuggoth.org> On 2020-02-18 13:29:35 +0100 (+0100), Thierry Carrez wrote: > We had a series of failures uploading Puppet artifacts to the > forge: [...] > I think we should reenqueue the tag reference to retrigger the > release-openstack-puppet job for those ? This sounds reasonable to me. Unless there are objections to this plan, I'll try to get to them later today between meetings. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Feb 18 15:46:16 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:46:16 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: On 2/18/20 4:34 AM, Moises Guimaraes de Medeiros wrote: > If removing 1.0.0 is the way we choose to go, people who already have > 1.0.0 won't be able to get "newer" 0.x.y versions. > > We will need an announcement to blacklist 1.0.0. Then, when the time > comes to finally make it stable, we can choose to either go 2.0.0 or 1.0.1. > > We should specifically put in the installation page instructions to > blacklist 1.0.0 in requirements files. If we pull it from pypi, do we really need to blacklist it? A regular pip install would only find the 0.x versions after that, right? In general, I'm not that concerned about someone having already installed it at this point. 
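(For reference, the requirements-file blacklist Moises mentions would be a plain version exclusion, something like the line below; illustrative only:)

  oslo.limit!=1.0.0   # skip the accidental 1.0.0 release, any other version is still allowed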
It was just released and the only people who are likely aware of the library are the ones working on it. My main concern is that we've released the library with a version number that implies a certain level of completeness that doesn't actually exist yet. Given the length of time it has taken to get it to this point, the possibility exists that this bad state could persist for six months or more. I'd prefer to nip it in the bud now rather than have somebody find it down the road and waste a bunch of time trying to make an incomplete thing work. > > On Tue, Feb 18, 2020 at 11:24 AM Thierry Carrez > wrote: > > Ben Nemec wrote: > > > > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > >> [...] > >>> I’m not 100% sure, but I think if you remove a release from PyPI > >>> you can’t release again using that version number. So a future > >>> stable release would have to be 1.1.0, or something like that. > >> [...] > >> > >> More accurately, you can't republish the same filename to PyPI even > >> if it's been previously deleted. You could however publish a > >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > >> though that seems a bit of a messy workaround. > >> > > > > This seems sensible - it would be kind of like rewriting history > in a > > git repo to re-release 1.0 with different content. I'm also > completely > > fine with having to use a different release number for our > eventual 1.0 > > release. It may make our release version checks unhappy, but > since this > > is (hopefully) not a thing we'll be doing regularly I imagine we can > > find a way around that. > > > > If we can pull the 1.0.0 release that would be ideal since as Sean > > mentioned people aren't good about reading docs and a 1.0 implies > some > > things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a > 2.0.0 tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) > but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > > -- > Thierry Carrez (ttx) > > > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > From mihalis68 at gmail.com Tue Feb 18 15:47:05 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 18 Feb 2020 10:47:05 -0500 Subject: ops meetups team meeting 2020-2-18 Message-ID: The OpenStack Ops Meetups team meeting was held today on IRC, minutes linked below. Key links vancouver opendev+ptg : https://www.openstack.org/events/opendev-ptg-2020/ South Korea Ops Meetup proposal : https://etherpad.openstack.org/p/ops-meetup-2nd-korea-2020 OpenInfra summit 2020 announced for Berlin, Oct 19-23 : https://superuser.openstack.org/articles/inside-open-infrastructure-the-latest-from-the-openstack-foundation-4/ The meetups team will be proposing content for Vancouver and will shortly solicit feedback on preferred dates for the Ops Meetups even in South Korea. Please follow https://twitter.com/osopsmeetup for more announcements. Minutes: <•openstack> Meeting ended Tue Feb 18 15:37:42 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . 
(v 0.1.4) 10:37 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.html 10:37 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.txt 10:37 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.log.html Chris Morgan - on behalf of the OpenStack Ops Meetups team ( https://wiki.openstack.org/wiki/Ops_Meetups_Team) -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 18 15:47:36 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:47:36 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: On 2/18/20 4:23 AM, Thierry Carrez wrote: > Ben Nemec wrote: >> >> >> On 2/17/20 2:42 PM, Jeremy Stanley wrote: >>> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: >>> [...] >>>> I’m not 100% sure, but I think if you remove a release from PyPI >>>> you can’t release again using that version number. So a future >>>> stable release would have to be 1.1.0, or something like that. >>> [...] >>> >>> More accurately, you can't republish the same filename to PyPI even >>> if it's been previously deleted. You could however publish a >>> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz >>> though that seems a bit of a messy workaround. >>> >> >> This seems sensible - it would be kind of like rewriting history in a >> git repo to re-release 1.0 with different content. I'm also completely >> fine with having to use a different release number for our eventual >> 1.0 release. It may make our release version checks unhappy, but since >> this is (hopefully) not a thing we'll be doing regularly I imagine we >> can find a way around that. >> >> If we can pull the 1.0.0 release that would be ideal since as Sean >> mentioned people aren't good about reading docs and a 1.0 implies some >> things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a 2.0.0 > tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > If we could continue to tag master with 0.x releases then no. I think my feature branch option was in case we couldn't have a 0.1.0 tag that was later than 1.0.0 on the same branch. 
From openstack at nemebean.com Tue Feb 18 15:50:52 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:50:52 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On 2/18/20 4:40 AM, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: >> If removing 1.0.0 is the way we choose to go, people who already have >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? Yes, the Nova team has a PoC change up that uses these early releases. That's how we're iterating on the API. I can't guarantee that we'll need more releases, but I'm also not aware of anyone having said, "yep, it's ready" so I do expect more releases. > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. > From Albert.Braden at synopsys.com Tue Feb 18 16:46:38 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 18 Feb 2020 16:46:38 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: Hi Sean, Do you have time to look at the mem_stats_period_seconds / virtio memory balloon issue this week? -----Original Message----- From: Albert Braden Sent: Friday, February 7, 2020 2:26 PM To: Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Virtio memory balloon driver I opened a bug: https://bugs.launchpad.net/nova/+bug/1862425 -----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. looking at https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5842-2DL5852&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=_kEGfZqTkPscjy0GJB2N_WBXRJPEt2400ADV12hhxR8&e= it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. 
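(For reference, the device being discussed shows up in the libvirt domain XML roughly as the first fragment below, and the second fragment is what suppressing it entirely would look like; these are illustrative fragments, not current nova output:)

  <memballoon model='virtio'>
    <stats period='10'/>
  </memballoon>

  <memballoon model='none'/>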
if that is the case we could track this as a bug and potentially backport it. > > Console log: https://urldefense.proofpoint.com/v2/url?u=https-3A__f.perl.bot_p_njvgbm&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=5J3hH_mxdtOyNFqbW6j9yGiSyMXmhy3bXrmXRHkJ9I0&e= > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). 
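(An illustrative form of that check:)

  $ dmesg -T | egrep -i 'out of memory|oom-killer|killed process'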
If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From rosmaita.fossdev at gmail.com Tue Feb 18 22:32:38 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 18 Feb 2020 17:32:38 -0500 Subject: [operators][cinder][nova][glance] possible data loss situation (bug 1852106) Message-ID: tl;dr: If you are running the OpenStack Train release, add the following to the [DEFAULT]non_inheritable_image_properties configuration option [0] in your nova.conf: cinder_encryption_key_id,cinder_encryption_key_deletion_policy About Your nova.conf ==================== - if you already have a value set for non_inheritable_image_properties, add the above to what you currently have - if non_inheritable_image_properties is *not* set in your current nova.conf, you must set its value to include BOTH the above properties AND the default values (which you can see when you generate a sample nova configuration file [1]) NOTE: At a minimum, the non_inheritable_image_properties list should contain: - the properties used for image-signature-validation (these are not transferrable from image to image): * img_signature_hash_method * img_signature * img_signature_key_type * img_signature_certificate_uuid - the properties used to manage the keys for images of cinder encrypted volumes: * cinder_encryption_key_id * cinder_encryption_key_deletion_policy - review the documentation to determine whether you want to include the other default properties: * cache_in_nova * bittorrent Details ======= This issue is being tracked as Launchpad bug 1852106 [2]. This is probably a low-occurrence situation, because in order for the issue to occur, all of the following must happen: (0) using the OpenStack Train release (or code from master (Ussuri development)) (1) cinder_encryption_key_id and cinder_encryption_key_deletion_policy are NOT included in the non_inheritable_image_properties setting in nova.conf (which they aren't, by default) (2) a user has created a volume of an encrypted volume-type in the Block Storage service (cinder). Call this Volume-1 (3) using the Block Storage service, the user has uploaded the encrypted volume as an image to the Image service (glance). Call this Image-1 (4) using the Compute service (nova), the user has attempted to directly boot a server from the image. (Note: this is an unsupported action, the supported workflow is to use the image to boot-from-volume.) (5) although an unsupported action, if a user does (4), it currently results in a server in status ACTIVE but which is unusable because the operating system can't be found (6) using the Compute service, the user requests the createImage action on the unusable (yet ACTIVE) server, resulting in Image-2 (7) using the Image service, the user deletes Image-2 (which has inherited the cinder_encryption_key_* properties from Image-1) upon which the encryption key is deleted, thereby rendering Image-1 non-decryptable so that it can no longer be used in the normal boot-from-volume workflow NOTE 1: the cinder_encryption_key_deletion_policy image property was introduced in Train. In pre-Train releases, deleting the useless Image-2 in step (7) does NOT result in encryption key deletion. NOTE 2: Volume-1 created in step (2) has a *different* encryption key ID than the one associated with Image-1. Thus, even in the scenario where Image-1 becomes non-decryptable, Volume-1 is not affected. 
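(For convenience, an example of the resulting [DEFAULT] stanza, combining the Train defaults listed in the NOTE above with the two cinder encryption properties; illustrative, adjust to any local additions you already carry:)

  [DEFAULT]
  non_inheritable_image_properties = cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid,cinder_encryption_key_id,cinder_encryption_key_deletion_policy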
Workaround ========== When cinder_encryption_key_id,cinder_encryption_key_deletion_policy are added to the non_inheritable_image_properties setting in nova.conf, the useless Image-2 created in step (6) above will not have the image properties on it that enable Glance to delete the encryption key still in use by Image-1. This does not, however, protect images for which steps (4)-(6) have been performed before the deployment of this workaround. The safest way to deal with images created before the workaround is deployed is to remove the cinder_encryption_key_deletion_policy image property from any image that has it (or to change its value to 'do_not_delete'). While it is possible to use other image properties to identify images created by Nova as opposed to images created by Cinder, this is not guaranteed to be reliable because image properties may have been modified or removed by the image owner. Proposed Longer-term Fixes ========================== - In the Ussuri release, the unsupported action in step (4) above will result in a 400 rather than an active yet unusable server. Hence it will no longer be possible to create the image of the unusable server that causes the issue. [3] - Additionally, given that the image properties associated with cinder encrypted volumes and image signature validation are specific to a single image and should not be inherited by server snapshots under any circumstances, in Ussuri these "absolutely non-inheritable image properties" will no longer be required to appear in the non_inheritable_image_properties configuration setting in order to prevent them from being propagated to server snapshots. [4] References ========== [0] https://docs.openstack.org/nova/train/configuration/config.html#DEFAULT.non_inheritable_image_properties [1] https://docs.openstack.org/nova/train/configuration/sample-config.html [2] https://bugs.launchpad.net/nova/+bug/1852106 [3] https://review.opendev.org/#/c/707738/ [4] https://review.opendev.org/#/c/708126/ From john at johngarbutt.com Tue Feb 18 22:47:53 2020 From: john at johngarbutt.com (John Garbutt) Date: Tue, 18 Feb 2020 22:47:53 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On Tue, 18 Feb 2020 at 15:55, Ben Nemec wrote: > On 2/18/20 4:40 AM, Thierry Carrez wrote: > > Moises Guimaraes de Medeiros wrote: > >> If removing 1.0.0 is the way we choose to go, people who already have > >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > > though ? Do we expect anything to use it ? > > Yes, the Nova team has a PoC change up that uses these early releases. > That's how we're iterating on the API. I can't guarantee that we'll need > more releases, but I'm also not aware of anyone having said, "yep, it's > ready" so I do expect more releases. I certainly don't feel like its ready yet. I am pushing on Nova support for unified limits here (but its very WIP right now): https://review.opendev.org/#/c/615180 I was hoping we would better understand two level limits before cutting v1.0.0: https://review.opendev.org/#/c/695527 > > If we really do, then the least confusing might be to keep 1.0.0 and > > bump major every time you release a breaking change. I am +1 this approach. 
What we have might work well enough. Thanks, johnthetubaguy From kpuusild at gmail.com Wed Feb 19 06:29:43 2020 From: kpuusild at gmail.com (Kevin Puusild) Date: Wed, 19 Feb 2020 08:29:43 +0200 Subject: OpenStack with vCenter Message-ID: Hello I'm currently trying to setup DevStack with vCenter (Not ESXi), i found a perfect documentation for this task: https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide But the problem here is that when i start *stack.sh *with localrc file shown in documentation the installing process fails. Is this documentation out-dated? Is there some up to date documentation? -- Best Regards. Kevin Puusild -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Wed Feb 19 08:09:17 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Wed, 19 Feb 2020 09:09:17 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1415279454.3463025.1581938379342@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> <1415279454.3463025.1581938379342@mail.yahoo.com> Message-ID: <16d8d2567f79374114e69a78d563f11addccc567.camel@redhat.com> On Mon, 2020-02-17 at 11:19 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks you very much, > I am able to launch Container and VM side by side in same network on arm64 platform. Issue was openvswitch kernel Module(Enabled "geneve module" in openvswitch). > > > Now i am able to ping from container to VM but unable to ping Vice Versa (VM to container). > And also, I am able ping VM to VM (Both spawned from OpenStack dashboard). This should depend entirely on your network topology. If Pod->VM traffic works, then subnets seems to be routed correctly, so I'd expect it's the security groups that are blocking traffic. Please check that. Please note that we test VM->Pod and Pod->VM on the gate [1]. [1] https://github.com/openstack/kuryr-tempest-plugin/blob/master/kuryr_tempest_plugin/tests/scenario/test_cross_ping.py#L34-L73 > Is there any configuration to enable traffic from VM to Container? > > Regards, > Veera. > > > On Thursday, 13 February, 2020, 02:42:21 pm IST, wrote: > > > We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure > if downgrading could help. This is a Neutron issue and I don't have > much experience on such a low level. You can try asking on IRC, e.g. on > #openstack-neutron. > > On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > > Thanks mdulko, > > Issue is in openvswitch, iam getting following error in switchd logs > > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > > > Do we need to patch openvswitch to support above flow? > > > > My ovs version > > > > [root at node-2088 ~]# ovs-vsctl --version > > ovs-vsctl (Open vSwitch) 2.11.0 > > DB Schema 7.16.1 > > > > > > > > Regards, > > Veera. > > > > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > > > > The controller logs are from an hour later than the CNI one. The issues > > seems not to be present. 
> > > > Isn't your controller still restarting? If so try to use -p option on > > `kubectl logs` to get logs from previous run. > > > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > > Hi Mdulko, > > > Below are log files: > > > > > > Controller Log:http://paste.openstack.org/show/789457/ > > > cni : http://paste.openstack.org/show/789456/ > > > kubelet : http://paste.openstack.org/show/789453/ > > > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > > > Please let me know the issue > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > > > > Hi, > > > > > > So from this run you need the kuryr-controller logs. Apparently the pod > > > never got annotated with an information about the VIF. > > > > > > Thanks, > > > Michał > > > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Thanks for your support. > > > > > > > > As you mention i removed readinessProbe and > > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > > on /healthz endpoint. Are those full logs, including the moment you > > > > tried spawning a container there? It seems like you only pasted the > > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > > of kube-apiserver. That is another problem you should investigate - > > > > that causes Kuryr pods to restart. > > > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > > logs. > > > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > > Hi mdulko, > > > > > Please find kuryr-cni logs > > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > > > > Hi, > > > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > > > Thanks, > > > > > Michał > > > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > > Hi, > > > > > > I am trying to run kubelet in arm64 platform > > > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > > 2. Generated kuryr-cni-arm64 container. > > > > > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > > Veera. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From moguimar at redhat.com Wed Feb 19 08:41:03 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 09:41:03 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: I vote for removing 1.0.0 first (ASAP) and then deciding which will be the next version. The longer the time 1.0.0 is available, the harder it will be to push for a 0.x solution. On Tue, Feb 18, 2020 at 11:48 PM John Garbutt wrote: > On Tue, 18 Feb 2020 at 15:55, Ben Nemec wrote: > > On 2/18/20 4:40 AM, Thierry Carrez wrote: > > > Moises Guimaraes de Medeiros wrote: > > >> If removing 1.0.0 is the way we choose to go, people who already have > > >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > > > > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > > > though ? Do we expect anything to use it ? > > > > Yes, the Nova team has a PoC change up that uses these early releases. > > That's how we're iterating on the API. I can't guarantee that we'll need > > more releases, but I'm also not aware of anyone having said, "yep, it's > > ready" so I do expect more releases. > > I certainly don't feel like its ready yet. > > I am pushing on Nova support for unified limits here (but its very WIP > right now): > https://review.opendev.org/#/c/615180 > > I was hoping we would better understand two level limits before cutting > v1.0.0: > https://review.opendev.org/#/c/695527 > > > > If we really do, then the least confusing might be to keep 1.0.0 and > > > bump major every time you release a breaking change. > > I am +1 this approach. > What we have might work well enough. > > Thanks, > johnthetubaguy > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Feb 19 10:45:19 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 11:45:19 +0100 Subject: [vmware][hyperv][kolla] Is there any interest? (or should we deprecate and remove?) In-Reply-To: References: Message-ID: We received no reply to this so I am proposing relevant deprecations. -yoctozepto wt., 21 sty 2020 o 19:42 Radosław Piliszek napisał(a): > > Hello Fellow Stackers, > > In Kolla and Kolla-Ansible we have some support for VMware (both > hypervisor and networking controller stuff) and Hyper-V. > The issue is the relevant code is in pretty bad shape and we had no > recent reports about these being used nor working at all for that > matter and we are looking into dropping support for these. > Please respond if you are interested in these. > Long term we would require access to some CI running these to really > keep things in shape. 
> > -yoctozepto From thierry at openstack.org Wed Feb 19 11:20:43 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 19 Feb 2020 12:20:43 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: Moises Guimaraes de Medeiros wrote: > I vote for removing 1.0.0 first (ASAP) and then deciding which will be > the next version. > The longer the time 1.0.0 is available, the harder it will be to push > for a 0.x solution. The long-standing position of the release team[1] is that you can't "remove" a release. It's out there. We can hide it so that it's harder to accidentally consume it, but we should otherwise assume that some people got it. So I'm not a big fan of the plan to release 0.x versions and pretending 1.0.0 never happened, potentially breaking upgrades. From a user perspective I see it as equally disruptive to cranking out major releases at each future API break. Rather than rewrite history for an equally-suboptimal result, personally I would just own our mistake and accept that oslo.limit version numbers convey a level of readiness that might not be already there. That seems easier to communicate out than explaining that the 1.0.0 that you may have picked up at one point in the git repo, the tarballs site, Pypi (or any distro that accidentally picked it up since) is not really a thing and you need to take manual cleanup steps to restore local sanity. [1] heck, we even did a presentation about that rule at EuroPython -- Thierry Carrez (ttx) From sfinucan at redhat.com Wed Feb 19 13:51:47 2020 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 19 Feb 2020 13:51:47 +0000 Subject: [oslo] Core reviewer changes Message-ID: Hi all, Just an update on some recent changes that have been made to oslo-core after discussion among current, active cores. Added: scmcginnis Removed: ChangBo Guo(gcb) Davanum Srinivas (dims) Flavio Percoco Joshua Harlow Mehdi Abaakouk (sileht) Michael Still Victor Stinner lifeless Please let me know if anyone has any concerns about these changes. Cheers, Stephen From moguimar at redhat.com Wed Feb 19 14:33:06 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 15:33:06 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: +1 here is the link to one of the EuroPython talks: https://youtu.be/5MaDhl01fpc?t=1470 On Wed, Feb 19, 2020 at 12:21 PM Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: > > I vote for removing 1.0.0 first (ASAP) and then deciding which will be > > the next version. > > The longer the time 1.0.0 is available, the harder it will be to push > > for a 0.x solution. > > The long-standing position of the release team[1] is that you can't > "remove" a release. It's out there. We can hide it so that it's harder > to accidentally consume it, but we should otherwise assume that some > people got it. > > So I'm not a big fan of the plan to release 0.x versions and pretending > 1.0.0 never happened, potentially breaking upgrades. 
From a user > perspective I see it as equally disruptive to cranking out major > releases at each future API break. > > Rather than rewrite history for an equally-suboptimal result, personally > I would just own our mistake and accept that oslo.limit version numbers > convey a level of readiness that might not be already there. > > That seems easier to communicate out than explaining that the 1.0.0 that > you may have picked up at one point in the git repo, the tarballs site, > Pypi (or any distro that accidentally picked it up since) is not really > a thing and you need to take manual cleanup steps to restore local sanity. > > [1] heck, we even did a presentation about that rule at EuroPython > > -- > Thierry Carrez (ttx) > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 19 14:34:58 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 08:34:58 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> On 2/19/20 5:20 AM, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: >> I vote for removing 1.0.0 first (ASAP) and then deciding which will >> be the next version. >> The longer the time 1.0.0 is available, the harder it will be to push >> for a 0.x solution. > > The long-standing position of the release team[1] is that you can't > "remove" a release. It's out there. We can hide it so that it's harder > to accidentally consume it, but we should otherwise assume that some > people got it. > > So I'm not a big fan of the plan to release 0.x versions and > pretending 1.0.0 never happened, potentially breaking upgrades. From a > user perspective I see it as equally disruptive to cranking out major > releases at each future API break. > > Rather than rewrite history for an equally-suboptimal result, > personally I would just own our mistake and accept that oslo.limit > version numbers convey a level of readiness that might not be already > there. > > That seems easier to communicate out than explaining that the 1.0.0 > that you may have picked up at one point in the git repo, the tarballs > site, Pypi (or any distro that accidentally picked it up since) is not > really a thing and you need to take manual cleanup steps to restore > local sanity. > > [1] heck, we even did a presentation about that rule at EuroPython > It seems there's no great answer here. This thread has been great to go over the options though. I think after reading through everything, our best bet is probably to just go with documenting the state of 1.0.0, then plan on bumping the major release version on any breaking changes like normal. We are still conveying something that we don't really want to be by the 1.0 designation, and chances are high that the docs will be missed, but at least we would have somewhere to point to if there are any questions about it. So I guess I'm saying, let's cut our losses, move ahead with this 1.0.0 release, and hopefully the library will get to a more complete state that this is no longer an issue. 
Sean From fungi at yuggoth.org Wed Feb 19 14:52:09 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 14:52:09 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> Message-ID: <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: [...] > We are still conveying something that we don't really want to be by the > 1.0 designation, and chances are high that the docs will be missed, but > at least we would have somewhere to point to if there are any questions > about it. [...] To what extent is it likely to see production use before Nova has ironed out its consumption of the library? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Feb 19 14:54:20 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 08:54:20 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> Message-ID: <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> On 2/19/20 8:52 AM, Jeremy Stanley wrote: > On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: > [...] >> We are still conveying something that we don't really want to be by the >> 1.0 designation, and chances are high that the docs will be missed, but >> at least we would have somewhere to point to if there are any questions >> about it. > [...] > > To what extent is it likely to see production use before Nova has > ironed out its consumption of the library? I would assume the likelihood to be very low. From openstack at nemebean.com Wed Feb 19 15:14:49 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 19 Feb 2020 09:14:49 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> Message-ID: <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> On 2/19/20 8:54 AM, Sean McGinnis wrote: > On 2/19/20 8:52 AM, Jeremy Stanley wrote: >> On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: >> [...] >>> We are still conveying something that we don't really want to be by the >>> 1.0 designation, and chances are high that the docs will be missed, but >>> at least we would have somewhere to point to if there are any questions >>> about it. >> [...] >> >> To what extent is it likely to see production use before Nova has >> ironed out its consumption of the library? > I would assume the likelihood to be very low. 
> The Nova folks are driving this work so at this point I wouldn't declare the oslo.limit API stable without their signoff. From fungi at yuggoth.org Wed Feb 19 15:26:12 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 15:26:12 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> References: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> Message-ID: <20200219152612.zkkc5lu3mjvqg367@yuggoth.org> On 2020-02-19 09:14:49 -0600 (-0600), Ben Nemec wrote: > On 2/19/20 8:54 AM, Sean McGinnis wrote: > > On 2/19/20 8:52 AM, Jeremy Stanley wrote: > > > On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: > > > [...] > > > > We are still conveying something that we don't really want > > > > to be by the 1.0 designation, and chances are high that the > > > > docs will be missed, but at least we would have somewhere to > > > > point to if there are any questions about it. > > > [...] > > > > > > To what extent is it likely to see production use before Nova > > > has ironed out its consumption of the library? > > > > I would assume the likelihood to be very low. > > The Nova folks are driving this work so at this point I wouldn't > declare the oslo.limit API stable without their signoff. With that, I wouldn't expect any other projects to even try to use it before Nova's officially doing so. It sounds like the projects likely to be confused about the production-ready state of the library could be approximately zero regardless of which solution you choose. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Wed Feb 19 15:32:31 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 19 Feb 2020 09:32:31 -0600 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Thanks, and welcome Sean! On 2/19/20 7:51 AM, Stephen Finucane wrote: > Hi all, > > Just an update on some recent changes that have been made to oslo-core > after discussion among current, active cores. > > Added: > > scmcginnis > > Removed: > > ChangBo Guo(gcb) > Davanum Srinivas (dims) > Flavio Percoco > Joshua Harlow > Mehdi Abaakouk (sileht) > Michael Still > Victor Stinner > lifeless > > Please let me know if anyone has any concerns about these changes. > > Cheers, > Stephen > > From moguimar at redhat.com Wed Feb 19 15:51:30 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 16:51:30 +0100 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Welcome Sean o/ On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec wrote: > Thanks, and welcome Sean! > > On 2/19/20 7:51 AM, Stephen Finucane wrote: > > Hi all, > > > > Just an update on some recent changes that have been made to oslo-core > > after discussion among current, active cores. > > > > Added: > > > > scmcginnis > > > > Removed: > > > > ChangBo Guo(gcb) > > Davanum Srinivas (dims) > > Flavio Percoco > > Joshua Harlow > > Mehdi Abaakouk (sileht) > > Michael Still > > Victor Stinner > > lifeless > > > > Please let me know if anyone has any concerns about these changes. 
> > > > Cheers, > > Stephen > > > > > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Feb 19 16:07:12 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 19 Feb 2020 17:07:12 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: References: <20191119102615.oq46xojyhoybulna@skaplons-mac> Message-ID: <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> Hi, Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? Thanks in advance for any help with that. [1] https://review.opendev.org/708675 > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > Hi, > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: >> >> Hi, >> >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. >> So please reply to this email or contact me directly if You are interested in maintaining this project. >> >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> Over the past couple of cycles we have noticed that new contributions and >>> maintenance efforts for neutron-fwaas project were almost non existent. >>> This impacts patches for bug fixes, new features and reviews. The Neutron >>> core team is trying to at least keep the CI of this project healthy, but we >>> don’t have enough knowledge about the details of the neutron-fwaas >>> code base to review more complex patches. >>> >>> During the PTG in Shanghai we discussed that with operators and TC members >>> during the forum session [1] and later within the Neutron team during the >>> PTG session [2]. >>> >>> During these discussions, with the help of operators and TC members, we reached >>> the conclusion that we need to have someone responsible for maintaining project. >>> This doesn’t mean that the maintainer needs to spend full time working on this >>> project. Rather, we need someone to be the contact person for the project, who >>> takes care of the project’s CI and review patches. Of course that’s only a >>> minimal requirement. If the new maintainer works on new features for the >>> project, it’s even better :) >>> >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas >>> as deprecated and in “V” cycle we will propose to move the project >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the >>> unofficial projects hosted in the “x/“ namespace. >>> >>> So if You are using this project now, or if You have customers who are >>> using it, please consider the possibility of maintaining it. Otherwise, >>> please be aware that it is highly possible that the project will be >>> deprecated and moved out from the official OpenStack projects. 
>>> >>> [1] >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - >>> Lines 379-421 >>> [3] https://releases.openstack.org/ussuri/schedule.html >>> >>> -- >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From hberaud at redhat.com Wed Feb 19 16:11:24 2020 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 19 Feb 2020 17:11:24 +0100 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Welcome on board Sean! Le mer. 19 févr. 2020 à 16:54, Moises Guimaraes de Medeiros < moguimar at redhat.com> a écrit : > Welcome Sean o/ > > On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec wrote: > >> Thanks, and welcome Sean! >> >> On 2/19/20 7:51 AM, Stephen Finucane wrote: >> > Hi all, >> > >> > Just an update on some recent changes that have been made to oslo-core >> > after discussion among current, active cores. >> > >> > Added: >> > >> > scmcginnis >> > >> > Removed: >> > >> > ChangBo Guo(gcb) >> > Davanum Srinivas (dims) >> > Flavio Percoco >> > Joshua Harlow >> > Mehdi Abaakouk (sileht) >> > Michael Still >> > Victor Stinner >> > lifeless >> > >> > Please let me know if anyone has any concerns about these changes. >> > >> > Cheers, >> > Stephen >> > >> > >> >> > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Feb 19 17:09:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 17:09:18 +0000 Subject: [OSSA-2020-001] Nova can leak consoleauth token into log files (CVE-2015-9543) Message-ID: <20200219170918.4n33kxopcu7fzw3k@yuggoth.org> ============================================================= OSSA-2020-001: Nova can leak consoleauth token into log files ============================================================= :Date: February 19, 2020 :CVE: CVE-2015-9543 Affects ~~~~~~~ - Nova: <18.2.4,>=19.0.0<19.1.0,>=20.0.0<20.1.0 Description ~~~~~~~~~~~ Paul Carlton from HP reported a vulnerability in Nova. An attacker with read access to the service’s logs may obtain tokens used for console access. All Nova setups using novncproxy are affected. 
Patches ~~~~~~~ - https://review.opendev.org/707845 (Queens) - https://review.opendev.org/704255 (Rocky) - https://review.opendev.org/702181 (Stein) - https://review.opendev.org/696685 (Train) - https://review.opendev.org/220622 (Ussuri) Credits ~~~~~~~ - Paul Carlton from HP (CVE-2015-9543) References ~~~~~~~~~~ - https://launchpad.net/bugs/1492140 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9543 Notes ~~~~~ - The stable/queens branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. -- Jeremy Stanley, on behalf of OpenStack Vulnerability Management -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jasonanderson at uchicago.edu Wed Feb 19 17:21:19 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Wed, 19 Feb 2020 17:21:19 +0000 Subject: [kolla-ansible] External Ceph keyring encryption Message-ID: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Hi all, My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? Thanks, /Jason [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html -- Jason Anderson Chameleon DevOps Lead Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Feb 19 17:29:00 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 19 Feb 2020 18:29:00 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: Hi Jason, I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. Best regards, Michal On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > Hi all, > > My understanding is that KA has dropped support for provisioning Ceph > directly, and now requires an external Ceph cluster (side note: we should > update the docs[1], which state it is only "sometimes necessary" to use an > external cluster--I will try to submit something today). > > I think this works well, but the handling of keyrings cuts a bit against > the grain of KA. The keyring files must be dropped in to the > node_custom_config directory. This means that operators who prefer to keep > their KA configuration in source control must have some mechanism for > securing that, as it is unencrypted. What does everybody think about > storing Ceph keyring secrets in passwords.yml instead, similar to how SSH > keys are handled? 
> > Thanks, > /Jason > > [1]: > https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > -- > Jason Anderson > > Chameleon DevOps Lead > *Consortium for Advanced Science and Engineering, The University of > Chicago* > *Mathematics & Computer Science Division, Argonne National Laboratory* > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Feb 19 17:40:53 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 18:40:53 +0100 Subject: [kolla] Following Nova with XenAPI deprecation Message-ID: Hello fellow OpenStackers! There exists some logic in Kolla Ansible that handles XenAPI-specific overrides and bootstrapping with os-xenapi. It is one of those murky, untested features that we prefer to remove as soon as possible. :-) Hence we follow Nova in deprecating this functionality for removal. We will do the removal in Victoria. If you are using XenAPI with recent Kolla-Ansible, please let us know. I will be proposing deprec changes later today. As far as other deployment projects are concerned, DevStack, OpenStack-Helm and Puppet-OpenStack seem to still handle XenAPI specifics. -yoctozepto From openstack at fried.cc Wed Feb 19 17:43:57 2020 From: openstack at fried.cc (Eric Fried) Date: Wed, 19 Feb 2020 11:43:57 -0600 Subject: [all][nova][ptl] Another one bites the dust Message-ID: Dear OpenStack- Due to circumstances beyond my control, my job responsibilities will be changing shortly and I will be leaving the community. I have enjoyed my time here immensely; I have never loved a job, my colleagues, or the tools of the trade more than I have here. My last official day will be March 31st (though some portion of my remaining time will be vacation -- TBD). Unfortunately this means I will need to abdicate my position as Nova PTL mid-cycle. As noted in the last meeting [1], I'm calling for a volunteer to take over for the remainder of Ussuri. Feel free to approach me privately if you prefer. Thanks, efried [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 From radoslaw.piliszek at gmail.com Wed Feb 19 17:46:01 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 18:46:01 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: Hi Jason, Ansible autodecrypts files on copy so they can be stored encrypted. It could go in docs. :-) -yoctozepto śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > Hi Jason, > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > Best regards, > Michal > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: >> >> Hi all, >> >> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). >> >> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. 
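A minimal sketch of the vault approach described above, following the statement that Ansible decrypts vault-encrypted files on copy (the path below is only an example -- use whichever keyring you actually drop into node_custom_config):

    # encrypt the keyring before committing it to source control
    ansible-vault encrypt /etc/kolla/config/cinder/ceph.client.cinder.keyring

    # the file can still be inspected locally when needed
    ansible-vault view /etc/kolla/config/cinder/ceph.client.cinder.keyring

The vault password then has to be provided to the underlying ansible-playbook run at deploy time. Note also the follow-up later in this thread pointing out that some of these files are read via a lookup as well, which may not get the same transparent decryption -- worth verifying before relying on this.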
What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? >> >> Thanks, >> /Jason >> >> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html >> >> >> -- >> Jason Anderson >> >> Chameleon DevOps Lead >> Consortium for Advanced Science and Engineering, The University of Chicago >> Mathematics & Computer Science Division, Argonne National Laboratory > > -- > Michał Nasiadka > mnasiadka at gmail.com From jasonanderson at uchicago.edu Wed Feb 19 17:52:24 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Wed, 19 Feb 2020 17:52:24 +0000 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Oh wow, I did not know it could do this transparently. Thanks, I will have a look at that. I can update the docs to reference this approach as well if it works out. Cheers! /Jason On 2/19/20 11:46 AM, Radosław Piliszek wrote: > Hi Jason, > > Ansible autodecrypts files on copy so they can be stored encrypted. > It could go in docs. :-) > > -yoctozepto > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): >> Hi Jason, >> >> I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. >> >> Best regards, >> Michal >> >> On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: >>> Hi all, >>> >>> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). >>> >>> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? >>> >>> Thanks, >>> /Jason >>> >>> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html >>> >>> >>> -- >>> Jason Anderson >>> >>> Chameleon DevOps Lead >>> Consortium for Advanced Science and Engineering, The University of Chicago >>> Mathematics & Computer Science Division, Argonne National Laboratory >> -- >> Michał Nasiadka >> mnasiadka at gmail.com From nate.johnston at redhat.com Wed Feb 19 18:02:18 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 19 Feb 2020 13:02:18 -0500 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> Message-ID: <20200219180218.ahgxnglss3jrvqgp@firewall> On Wed, Feb 19, 2020 at 05:07:12PM +0100, Slawek Kaplonski wrote: > Hi, > > Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. > So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. > > Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? 
Thanks in advance for any help with that. Shall we follow the same process we used for LBaaS in [1] and [2], or does that need to wait? I think there is a good chance we will not see another release of neutron-fwaas code. Thanks Nate [1] https://review.opendev.org/#/c/705780/ [2] https://review.opendev.org/#/c/658493/ > [1] https://review.opendev.org/708675 > > > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > > > Hi, > > > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. > > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > > > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: > >> > >> Hi, > >> > >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. > >> So please reply to this email or contact me directly if You are interested in maintaining this project. > >> > >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: > >>> > >>> Hi, > >>> > >>> Over the past couple of cycles we have noticed that new contributions and > >>> maintenance efforts for neutron-fwaas project were almost non existent. > >>> This impacts patches for bug fixes, new features and reviews. The Neutron > >>> core team is trying to at least keep the CI of this project healthy, but we > >>> don’t have enough knowledge about the details of the neutron-fwaas > >>> code base to review more complex patches. > >>> > >>> During the PTG in Shanghai we discussed that with operators and TC members > >>> during the forum session [1] and later within the Neutron team during the > >>> PTG session [2]. > >>> > >>> During these discussions, with the help of operators and TC members, we reached > >>> the conclusion that we need to have someone responsible for maintaining project. > >>> This doesn’t mean that the maintainer needs to spend full time working on this > >>> project. Rather, we need someone to be the contact person for the project, who > >>> takes care of the project’s CI and review patches. Of course that’s only a > >>> minimal requirement. If the new maintainer works on new features for the > >>> project, it’s even better :) > >>> > >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is > >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas > >>> as deprecated and in “V” cycle we will propose to move the project > >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the > >>> unofficial projects hosted in the “x/“ namespace. > >>> > >>> So if You are using this project now, or if You have customers who are > >>> using it, please consider the possibility of maintaining it. Otherwise, > >>> please be aware that it is highly possible that the project will be > >>> deprecated and moved out from the official OpenStack projects. 
> >>> > >>> [1] > >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward > >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - > >>> Lines 379-421 > >>> [3] https://releases.openstack.org/ussuri/schedule.html > >>> > >>> -- > >>> Slawek Kaplonski > >>> Senior software engineer > >>> Red Hat > >> > >> — > >> Slawek Kaplonski > >> Senior software engineer > >> Red Hat > >> > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From radoslaw.piliszek at gmail.com Wed Feb 19 18:02:47 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 19:02:47 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Message-ID: I just realized we also do a lookup on them and not sure if that works though. -yoctozepto śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > Oh wow, I did not know it could do this transparently. Thanks, I will > have a look at that. I can update the docs to reference this approach as > well if it works out. > > Cheers! > /Jason > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > Hi Jason, > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > It could go in docs. :-) > > > > -yoctozepto > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > >> Hi Jason, > >> > >> I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > >> > >> Best regards, > >> Michal > >> > >> On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > >>> Hi all, > >>> > >>> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > >>> > >>> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? > >>> > >>> Thanks, > >>> /Jason > >>> > >>> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > >>> > >>> > >>> -- > >>> Jason Anderson > >>> > >>> Chameleon DevOps Lead > >>> Consortium for Advanced Science and Engineering, The University of Chicago > >>> Mathematics & Computer Science Division, Argonne National Laboratory > >> -- > >> Michał Nasiadka > >> mnasiadka at gmail.com > From elmiko at redhat.com Wed Feb 19 19:57:36 2020 From: elmiko at redhat.com (Michael McCune) Date: Wed, 19 Feb 2020 14:57:36 -0500 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Eric, we never really got to work on the same project (outside of sig activities), but i just wanted to say thank you! 
it was always a pleasure getting to interact and collaborate with you, good luck on the new adventures =) peace o/ On Wed, Feb 19, 2020 at 12:47 PM Eric Fried wrote: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. > > Thanks, > efried > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Feb 19 20:28:58 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 19 Feb 2020 15:28:58 -0500 Subject: [cinder] volume-local-cache spec - last call Message-ID: <9dfcde28-f8d8-20a8-2ac6-552b9fdc049c@gmail.com> The volume-local-cache spec [0], which was granted a Cinder spec-freeze exception, currently has two +2s. It's been reviewed by a lot of people, and as far as I can tell, everyone's concerns have been addressed at this point. However, I'd like to give people one last opportunity to verify this for themselves. Unless a substantive objection is raised, I intend to approve the spec around 12:30 UTC tomorrow (Thursday) so that it can be merged before the Nova meeting at 14:00 UTC. cheers, brian [0] https://review.opendev.org/#/c/684556/ From rosmaita.fossdev at gmail.com Wed Feb 19 20:47:48 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 19 Feb 2020 15:47:48 -0500 Subject: [operators][cinder] driver removal policy update Message-ID: <860145f9-d06d-b547-3996-96b2e2fb5e51@gmail.com> The Cinder driver removal policy has been updated [0]. Drivers eligible for removal, at the discretion of the team, may remain in the code repository as long as they continue to pass OpenStack CI testing. When such a driver blocks the CI check or gate, it will be removed immediately. The intent of the policy revision is twofold. First, it gives vendors a longer grace period in which to make the necessary changes to have their drivers reinstated as ‘supported’. Second, keeping these drivers in-tree longer should make life easier for operators who have deployed storage backends with drivers that have been marked as ‘unsupported’. Please see the full statement of the policy [0] for details. [0] https://docs.openstack.org/cinder/latest/drivers-all-about.html#driver-removal From lyarwood at redhat.com Wed Feb 19 20:51:35 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 19 Feb 2020 20:51:35 +0000 Subject: [operators][cinder][nova][glance] possible data loss situation (bug 1852106) In-Reply-To: References: Message-ID: <20200219205135.3q5sqdg6ndptdbdf@lyarwood.usersys.redhat.com> On 18-02-20 17:32:38, Brian Rosmaita wrote: > Proposed Longer-term Fixes > ========================== > - In the Ussuri release, the unsupported action in step (4) above will > result in a 400 rather than an active yet unusable server. Hence it will no > longer be possible to create the image of the unusable server that causes > the issue. 
[3] Thanks for the detailed write up Brian. I've proposed backports back to stable/queens for this change below: https://review.opendev.org/#/q/Idf84ccff254d26fa13473fe9741ddac21cbcf321 Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From whayutin at redhat.com Wed Feb 19 22:27:40 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 19 Feb 2020 15:27:40 -0700 Subject: [tripleo] proposal to make Kevin Carter core Message-ID: Greetings, I'm sure by now you have all seen the contributions from Kevin to the tripleo-ansible project, transformation, mistral to ansible etc. In his short tenure in TripleO Kevin has accomplished a lot and is the number #3 contributor to tripleo for the ussuri cycle! First of all very well done, and thank you for all the contributions! Secondly, I'd like to propose Kevin to core.. Please vote by replying to this email. Thank you +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Wed Feb 19 22:31:41 2020 From: johfulto at redhat.com (John Fulton) Date: Wed, 19 Feb 2020 17:31:41 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020, 5:30 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Wed Feb 19 22:51:53 2020 From: abishop at redhat.com (Alan Bishop) Date: Wed, 19 Feb 2020 14:51:53 -0800 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020 at 2:31 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 19 22:55:01 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 16:55:01 -0600 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: <08d694df-a27c-be45-48bd-2cbc392dda37@gmx.com> Thanks all, glad to be able to help! On 2/19/20 10:11 AM, Herve Beraud wrote: > Welcome on board Sean! > > Le mer. 19 févr. 2020 à 16:54, Moises Guimaraes de Medeiros > > a écrit : > > Welcome Sean o/ > > On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec > wrote: > > Thanks, and welcome Sean! > > On 2/19/20 7:51 AM, Stephen Finucane wrote: > > Hi all, > > > > Just an update on some recent changes that have been made to > oslo-core > > after discussion among current, active cores. 
> > > > Added: > > > > scmcginnis > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Feb 19 23:04:00 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 19 Feb 2020 16:04:00 -0700 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020, 3:32 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Wed Feb 19 23:44:04 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 19 Feb 2020 15:44:04 -0800 Subject: [manila] Core Team additions Message-ID: Hello Zorillas/Stackers, I'd like to make propose some core team additions. Earlier in the cycle [1], I sought contributors who are interested in helping us maintain and grow manila. I'm happy to report that we had an fairly enthusiastic response. I'd like to propose two individuals who have stepped up to join the core maintainers team. Bear with me as I seek to support my proposals with my personal notes of endorsement: Victoria Martinez de la Cruz - Victoria has been contributing to Manila since it's inception. She has played various roles during this time and has contributed in significant ways to build this community. She's been the go-to person to seek reviews and collaborate on for CephFS integration, python-manilaclient, manila-ui maintenance and support for the OpenStack client. She has also brought onboard and mentored multiple interns on the team (Fun fact: She was recognized as Mentor of Mentors [2] by this community). It gives me great joy that she agreed to help maintain the project as a core maintainer. Carlos Eduardo - Carlos has made significant contributions to Manila for the past two releases. He worked on several feature gaps with the DHSS=True driver mode, and is now working on graduating experimental features that the project has been building since the Newton release. He performs meaningful reviews that drive good design discussions. I am happy to note that he needed little mentoring to start reviewing the OpenStack Way [3] - this is a dead give away to me to spot a dedicated maintainer who cares about growing the community, along with the project. Please give me your +/- 1s for this proposal. Thank you, Goutham Pacha Ravi (gouthamr) [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html [2] https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ [3] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html From emilien at redhat.com Wed Feb 19 23:49:13 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 19 Feb 2020 18:49:13 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: On Wed, Feb 19, 2020 at 5:34 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. 
In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Strong +1 : - his leadership in Transformation work (ie tripleo-ansible design, maintenance and review velocity) - his strong involvement in Mistral removal and understand on the underlying of TripleO. - his consistent and high rate of reviews across all TripleO projects. - his upstream contribution and participation in general, which is appreciated by everyone. Thanks Kevin for your hard work! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From viroel at gmail.com Thu Feb 20 00:15:50 2020 From: viroel at gmail.com (Douglas) Date: Wed, 19 Feb 2020 21:15:50 -0300 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: vkmc +1 carloss +1 Em qua, 19 de fev de 2020 20:45, Goutham Pacha Ravi escreveu: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xingyang105 at gmail.com Thu Feb 20 01:58:06 2020 From: xingyang105 at gmail.com (Xing Yang) Date: Wed, 19 Feb 2020 20:58:06 -0500 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: Big +1 for Victoria! 
+1 Carlos Thanks, Xing On Wed, Feb 19, 2020 at 6:46 PM Goutham Pacha Ravi wrote: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aoren at infinidat.com Thu Feb 20 04:38:27 2020 From: aoren at infinidat.com (Amit Oren) Date: Thu, 20 Feb 2020 06:38:27 +0200 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: Big +1 from me to both Victoria and Carlos! Amit On Thu, Feb 20, 2020 at 1:47 AM Goutham Pacha Ravi wrote: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). 
It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sshnaidm at redhat.com Thu Feb 20 05:31:25 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Thu, 20 Feb 2020 07:31:25 +0200 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Thu, Feb 20, 2020 at 12:29 AM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Thu Feb 20 06:12:09 2020 From: soulxu at gmail.com (Alex Xu) Date: Thu, 20 Feb 2020 14:12:09 +0800 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Hey, Eric, glad to work with you, although it isn't very long and end by mysterious decision we still don't know...but I enjoy the teamwork we have. Good luck man. Thanks Alex Eric Fried 于2020年2月20日周四 上午1:50写道: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. > > Thanks, > efried > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Feb 20 06:34:47 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 20 Feb 2020 08:34:47 +0200 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Thu, Feb 20, 2020 at 7:33 AM Sagi Shnaidman wrote: > +1 > > On Thu, Feb 20, 2020 at 12:29 AM Wesley Hayutin > wrote: > >> Greetings, >> >> I'm sure by now you have all seen the contributions from Kevin to the >> tripleo-ansible project, transformation, mistral to ansible etc. In his >> short tenure in TripleO Kevin has accomplished a lot and is the number #3 >> contributor to tripleo for the ussuri cycle! >> >> First of all very well done, and thank you for all the contributions! >> Secondly, I'd like to propose Kevin to core.. >> >> Please vote by replying to this email. >> Thank you >> >> +1 >> > > > -- > Best regards > Sagi Shnaidman > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Thu Feb 20 06:54:31 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Thu, 20 Feb 2020 07:54:31 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, 2020-02-19 at 15:27 -0700, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In > his short tenure in TripleO Kevin has accomplished a lot and is the > number #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 From fsbiz at yahoo.com Wed Feb 19 23:16:48 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Wed, 19 Feb 2020 23:16:48 +0000 (UTC) Subject: [ironic]: Failed to create config drive on disk References: <779062398.5405876.1582154208873.ref@mail.yahoo.com> Message-ID: <779062398.5405876.1582154208873@mail.yahoo.com> Hi, I'd very much appreciate some help on this. We have a medium-large ironic installation where the baremetal nodes are constantly doing provisionings and deprovisionings.We test a variety of images (Ubuntu, RedHat, Windows, etc.) in both BIOS and UEFI modes. Our approach so far is to configure all the baremetal nodes in CSM UEFI mode so that both BIOS and UEFI images can be run. And things have worked fairly well with this. Lately, I'm having this weird "Failed to create config drive on disk" issue and it is happening only with BIOS images on certain baremetal nodes. Here are the important snippets from the ironic conductor and the IPA that I've managed to narrow down. ================================Ironic conductor (before power-on)2020-02-18 13:06:33.261 DEBUG iDeploy boot mode is uefi for ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 ================================Ironic conductor (power-on)2020-02-18 13:07:18.541 INFO Successfully set node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 power state to power on by rebooting. 020-02-18 13:07:18.560 INFO Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "wait call-back" from state "deploying"; target provision state is "active" ================================== Ironic Python Agent: (I can provide the full log on request. Only relevant logs provided here for the sake of brevity). 
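(For anyone who wants to re-check the failing step by hand: the lines below are a minimal sketch of what can be run on the conductor while the iSCSI session is still logged in. The by-path device is the one that appears in the conductor snippets further down; all of the tools are standard util-linux/gdisk utilities, nothing ironic-specific is assumed.)

  DEV=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1
  sudo partprobe "$DEV"                    # ask the kernel to re-read the (msdos) partition table
  sudo udevadm settle                      # give udev time to (re)create the by-path symlinks
  lsblk -Po name,label "$DEV"              # does the kernel see the new second partition at all?
  ls -l "${DEV}"-part*                     # are the -part1 / -part2 symlinks actually present?
  sudo blkid -p -o value -s PTTYPE "$DEV"  # partition table type as ironic detects it (dos vs gpt)
  sudo sgdisk -v "$DEV"                    # leftover GPT data on an msdos-labelled disk shows up here;
                                           # gdisk documents 'sgdisk -z' as clearing only the GPT
                                           # structures while leaving the MBR, but use it with care and
                                           # only on a disk that is about to be redeployed anyway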
wipefs --force --all Feb 18 21:08:17 ironic-python-agent: DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): wipefs --force --all /dev/sda Feb 18 21:08:17 ironic-python-agent: CMD "wipefs --force --all /dev/sda" returned: 0 in 0.023s Feb 18 21:08:17 ironic-python-agentt: Execution completed, command line is "wipefs --force --all /dev/sda" Feb 18 21:08:17 ironic-python-agent: Command stdout is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:103Feb 18 21:08:17 ironic-python-agent: Command stderr is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:104 sgdisk -ZFeb 18 21:08:17 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:17.304 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sgdisk -Z /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.321 15063 DEBUG oslo_concurrency.processutils [-] CMD "sgdisk -Z /dev/sda" returned: 0 in 1.017s Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "sgdisk -Z /dev/sda" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Command stdout is: "Creating new GPT entries in memory.Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.326 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 0 in 0.006s Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda"Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stdout is: " 15221" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stderr is: "/dev/sda: fuser /dev/sda Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.324 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): fuser /dev/sda Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 1 in 0.012s Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda" Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 INFO ironic_lib.disk_utils [-] Disk metadata on /dev/sda successfully destroyed for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 start iscsi:Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_python_agent.extensions.iscsi [-] Starting ISCSI target with iqn iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 on device /dev/sda start_iscsi_targetFeb 18 21:08:19 host-10-33-23-71 kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288Feb 18 21:08:19 host-10-33-23-71 kernel: db_root: cannot 
open: /etc/targetFeb 18 21:08:19 host-10-33-23-71 WARNING ironic_python_agent.extensions.iscsi [-] Linux-IO is not available, falling back to TGT. Error: Cannot set dbroot to /etc/target. Please check if this directory exists..: RTSLibError: Cannot set dbroot to /etc/target. Please check if this directory exists.Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtd Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op show" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op show" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""  Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""Feb 18 21:08:19 host-10-33-23-71 INFO root [-] Command iscsi.start_iscsi_target completed: Command name: start_iscsi_target, params: {u'wipe_disk_metadata': True, u'iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97', u'portal_port': 3260}, status: SUCCEEDED, result: {'iscsi_target_iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97'}.Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: ::ffff:10.33.24.87 - - [18/Feb/2020 21:08:19] "POST /v1/commands?wait=true HTTP/1.1" 200 386 ====================================Back to Ironic conductor020-02-18 13:08:25.953 346408 DEBUG RPC heartbeat called for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:25.997 346408 DEBUG Heartbeat from node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:26.033 346408 Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "deploying" from state "wait call-back"; target provision state is "active" Starting iscsi:2020-02-18 13:08:28.385 Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login  2020-02-18 13:08:28.613 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login" returned: 0 in 0.229s 2020-02-18 13:08:28.615 Execution completed, command line is "iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login" 2020-02-18 13:08:28.615 Command stdout is: "Logging in to [iface: default, target: iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] (multiple) Successful login to iSCSI:Login to [iface: default, target: 
iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] successful. qemu-img info2020-02-18 13:08:29.005 346408 DEBUG Running cmd (subprocess): /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk 2020-02-18 13:08:29.072 346408 DEBUG CMD "/usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" returned: 0 in 0.067s 2020-02-18 13:08:29.073 346408 DEBUG Execution completed, command line is "env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" 2020-02-18 13:08:29.074 346408 DEBUG Command stdout is: "image: /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/diskfile format: rawvirtual size: 9.8G (10485760000 bytes)disk size: 5.4G copying image via dd: 2020-02-18 13:08:29.075 Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct 2020-02-18 13:08:49.020 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" returned: 0 in 19.945s 2020-02-18 13:08:49.021 346408 DEBUG Execution completed, command line is "dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" 2020-02-18 13:08:49.022 346408 DEBUG Command stdout is: "" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:08:49.022 346408 DEBUG Command stderr is: "10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 19.7712 s, 530 MB/s partprobe2020-02-18 13:08:49.023 346408 DEBUGRunning cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.341 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 14.318s 2020-02-18 13:09:03.341 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.342 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:03.342 346408 DEBUG Command stderr is: ""  lsblk:2020-02-18 13:09:03.343 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.519 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.176s 2020-02-18 13:09:03.519 346408 DEBUG Execution completed, command line is "lsblk -Po name,label 
/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.519 346408 DEBUG Command stdout is: "NAME="sdb" LABEL="" NAME="sdb1" LABEL="" " execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:03.520 346408 DEBUG iCommand stderr is: "" Adding configDrive partition:2020-02-18 13:09:03.772 346408 DEBUG Adding config drive partition 64 MiB to device: /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 create_config_drive_partition  blkid2020-02-18 13:09:03.773 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.959 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.186s 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:03.961 346408 DEBUG oRunning cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:04.136 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.175s 2020-02-18 13:09:04.137 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:04.137 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot; blkid2020-02-18 13:09:04.138 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:04.321 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.182s 2020-02-18 13:09:04.321 346408 DEBUG iExecution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.321 346408 DEBUG ironic_lib.utils 
[req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos2020-02-18 13:09:04.322 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  partprobe2020-02-18 13:09:04.322 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:04.510 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.188s 2020-02-18 13:09:04.511 346408 DEBUG Execution completed, command line is "partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.512 346408 DEBUG Command stdout is: "/dev/sdb: msdos partitions 1 " 2020-02-18 13:09:04.512 346408 DEBUG Command stderr is: ""  blockdev --getsize642020-02-18 13:09:04.513 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.045 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.532s 2020-02-18 13:09:05.046 346408 DEBUG Execution completed, command line is "blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.046 346408 DEBUG Command stdout is: "1000204886016" 2020-02-18 13:09:05.046 346408 DEBUG Command stderr is: ""  parted/mkpart2020-02-18 13:09:05.047 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0 2020-02-18 13:09:05.241 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" returned: 0 in 0.194s 2020-02-18 13:09:05.241 346408 DEBUG Execution completed, command line is "parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" 2020-02-18 13:09:05.242 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.242 346408 DEBUG Command stderr is: "Warning: The resulting partition is not properly aligned for best performance. 
partprobe2020-02-18 13:09:05.329 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.516 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.187s2020-02-18 13:09:05.517 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.517 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.518 346408 DEBUG Command stderr is: ""  sgdisk -v:   IS THERE ANY ISSUE WITH THE BELOW OUTPUT2020-02-18 13:09:05.518 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.707 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.189s 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Execution completed, command line is  "sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:101 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "***************************************************************Found invalid GPT and valid MBR; converting MBR to GPT formatin memory.***************************************************************Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Identified 1 problems! 
2020-02-18 13:09:05.709 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:05.709 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print 2020-02-18 13:09:05.899 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.189s 2020-02-18 13:09:05.899 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:05.900 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot;2:953806MiB:953870MiB:64.0MiB:::lba;" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:05.900 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  2020-02-18 13:09:05.927 346408 DEBUG Waiting for the config drive partition /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 on node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 to be ready for writing. create_config_drive_partition /usr/lib/python2.7/site-packages/ironic_lib/disk_utils.py:8692020-02-18 13:09:05.927 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Running cmd (subprocess): test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:05.945 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] u'test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2' failed. Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:461 The code retries several times followed by the config drive failure error.Again, this happens on a few nodes only and happens only when I try to run BIOS based images.  UEFI based images provision just fine. Any help will be appreciated.thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Wed Feb 19 23:27:31 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Wed, 19 Feb 2020 23:27:31 +0000 (UTC) Subject: Fw: [ironic]: Failed to create config drive on disk In-Reply-To: <779062398.5405876.1582154208873@mail.yahoo.com> References: <779062398.5405876.1582154208873.ref@mail.yahoo.com> <779062398.5405876.1582154208873@mail.yahoo.com> Message-ID: <744752359.5442874.1582154851981@mail.yahoo.com> Hi, I'd very much appreciate some help on this. We have a medium-large ironic installation where the baremetal nodes are constantly doing provisionings and deprovisionings.Our approach so far is to configure all the baremetal nodes in CSM UEFI mode so that both BIOS and UEFI images can be run. And things have worked fairly well with this. Lately, I'm having this weird "Failed to create config drive on disk" issue and it is happening only with BIOS images on certain baremetal nodes. 
Here are the important snippets from the ironic conductor and the IPA that I've managed to narrow down. ================================== Ironic conductor (power-on) 2020-02-18 13:07:18.541 INFO Successfully set node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 power state to power on by rebooting. 020-02-18 13:07:18.560 INFO Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "wait call-back" from state "deploying"; target provision state is "active" ================================== Ironic Python Agent: (I can provide the full log on request. Only relevant logs provided here for the sake of brevity). wipefs --force --all Feb 18 21:08:17 ironic-python-agent: CMD "wipefs --force --all /dev/sda" returned: 0 in 0.023s  Feb 18 21:08:17 ironic-python-agentt: Execution completed, command line is "wipefs --force --all /dev/sda" Feb 18 21:08:17 ironic-python-agent: Command stdout is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:103Feb 18 21:08:17 ironic-python-agent: Command stderr is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:104 sgdisk -ZFeb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.321 15063 DEBUG oslo_concurrency.processutils [-] CMD "sgdisk -Z /dev/sda" returned: 0 in 1.017s  Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "sgdisk -Z /dev/sda" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Command stdout is: "Creating new GPT entries in memory.Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. 
fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 0 in 0.006s  Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda"Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stdout is: " 15221" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stderr is: "/dev/sda: fuser /dev/sdaFeb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 1 in 0.012s  Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda" Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 INFO ironic_lib.disk_utils [-] Disk metadata on /dev/sda successfully destroyed for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 start iscsi:Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_python_agent.extensions.iscsi [-] Starting ISCSI target with iqn iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 on device /dev/sda start_iscsi_targetFeb 18 21:08:19 host-10-33-23-71 kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288Feb 18 21:08:19 host-10-33-23-71 kernel: db_root: cannot open: /etc/targetFeb 18 21:08:19 host-10-33-23-71 WARNING ironic_python_agent.extensions.iscsi [-] Linux-IO is not available, falling back to TGT. Error: Cannot set dbroot to /etc/target. Please check if this directory exists..: RTSLibError: Cannot set dbroot to /etc/target. 
Please check if this directory exists.Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtd Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op show" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op show" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""  Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""Feb 18 21:08:19 host-10-33-23-71 INFO root [-] Command iscsi.start_iscsi_target completed: Command name: start_iscsi_target, params: {u'wipe_disk_metadata': True, u'iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97', u'portal_port': 3260}, status: SUCCEEDED, result: {'iscsi_target_iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97'}. ====================================Back to Ironic conductor020-02-18 13:08:25.953 346408 DEBUG RPC heartbeat called for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:25.997 346408 DEBUG Heartbeat from node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:26.033 346408 Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "deploying" from state "wait call-back"; target provision state is "active" Successful login to iSCSI: Login to [iface: default, target: iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] successful. 
qemu-img info2020-02-18 13:08:29.072 346408 DEBUG CMD "/usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" returned: 0 in 0.067s  2020-02-18 13:08:29.073 346408 DEBUG Execution completed, command line is "env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" 2020-02-18 13:08:29.074 346408 DEBUG Command stdout is: "image: /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/diskfile format: rawvirtual size: 9.8G (10485760000 bytes)disk size: 5.4G copying image via dd: 2020-02-18 13:08:49.020 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" returned: 0 in 19.945s 2020-02-18 13:08:49.021 346408 DEBUG Execution completed, command line is "dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" 2020-02-18 13:08:49.022 346408 DEBUG Command stdout is: "" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:08:49.022 346408 DEBUG Command stderr is: "10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 19.7712 s, 530 MB/s partprobe2020-02-18 13:09:03.341 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 14.318s 2020-02-18 13:09:03.341 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.342 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:03.342 346408 DEBUG Command stderr is: ""  lsblk:2020-02-18 13:09:03.519 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.176s 2020-02-18 13:09:03.519 346408 DEBUG Execution completed, command line is "lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.519 346408 DEBUG Command stdout is: "NAME="sdb" LABEL="" NAME="sdb1" LABEL="" " execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:03.520 346408 DEBUG iCommand stderr is: "" Adding configDrive partition:2020-02-18 13:09:03.772 346408 DEBUG Adding config drive partition 64 MiB to device: /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 create_config_drive_partition  blkid2020-02-18 13:09:03.959 346408 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.186s  2020-02-18 13:09:03.960 346408 Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.960 
346408 Command stdout is: "dos2020-02-18 13:09:03.960 346408 Command stderr is: ""  parted2020-02-18 13:09:03.961 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:04.136 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.175s 2020-02-18 13:09:04.137 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:04.137 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot; blkid2020-02-18 13:09:04.321 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.182s  2020-02-18 13:09:04.321 346408 DEBUG Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.321 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos2020-02-18 13:09:04.322 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  partprobe2020-02-18 13:09:04.510 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.188s 2020-02-18 13:09:04.511 346408 DEBUG Execution completed, command line is "partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.512 346408 DEBUG Command stdout is: "/dev/sdb: msdos partitions 1 " 2020-02-18 13:09:04.512 346408 DEBUG Command stderr is: ""  blockdev --getsize642020-02-18 13:09:05.045 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.532s  2020-02-18 13:09:05.046 346408 DEBUG Execution completed, command line is "blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.046 346408 DEBUG Command stdout is: "1000204886016" 2020-02-18 13:09:05.046 346408 DEBUG Command stderr is: ""  parted/mkpart2020-02-18 13:09:05.241 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" returned: 0 in 0.194s 2020-02-18 13:09:05.241 346408 DEBUG Execution completed, command line is "parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" 2020-02-18 13:09:05.242 
346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.242 346408 DEBUG Command stderr is: "Warning: The resulting partition is not properly aligned for best performance. partprobe2020-02-18 13:09:05.516 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.187s 2020-02-18 13:09:05.517 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.517 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.518 346408 DEBUG Command stderr is: ""  sgdisk -v:   IS THERE ANY ISSUE WITH THE BELOW OUTPUT2020-02-18 13:09:05.707 346408 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.189s  2020-02-18 13:09:05.708 346408 Execution completed, command line is  "sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:101 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "***************************************************************Found invalid GPT and valid MBR; converting MBR to GPT formatin memory.***************************************************************Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Identified 1 problems! 2020-02-18 13:09:05.709 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:05.899 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.189s 2020-02-18 13:09:05.899 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:05.900 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot;2:953806MiB:953870MiB:64.0MiB:::lba;" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:05.900 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  2020-02-18 13:09:05.927 346408 DEBUG Waiting for the config drive partition /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 on node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 to be ready for writing. 
create_config_drive_partition /usr/lib/python2.7/site-packages/ironic_lib/disk_utils.py:8692020-02-18 13:09:05.927 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Running cmd (subprocess): test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:05.945 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] u'test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2' failed. Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:461 The code retries several times followed by the config drive failure error.Again, this happens on a few nodes only and happens only when I try to run BIOS based images.  UEFI based images provision just fine. Any help will be appreciated.thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rodrigo.barbieri2010 at gmail.com Thu Feb 20 08:05:45 2020 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Thu, 20 Feb 2020 05:05:45 -0300 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: +1 to both On Thu, Feb 20, 2020 at 1:45 AM Amit Oren wrote: > Big +1 from me to both Victoria and Carlos! > > Amit > > On Thu, Feb 20, 2020 at 1:47 AM Goutham Pacha Ravi > wrote: > >> Hello Zorillas/Stackers, >> >> I'd like to make propose some core team additions. Earlier in the >> cycle [1], I sought contributors who are interested in helping us >> maintain and grow manila. I'm happy to report that we had an fairly >> enthusiastic response. I'd like to propose two individuals who have >> stepped up to join the core maintainers team. Bear with me as I seek >> to support my proposals with my personal notes of endorsement: >> >> Victoria Martinez de la Cruz - Victoria has been contributing to >> Manila since it's inception. She has played various roles during this >> time and has contributed in significant ways to build this community. >> She's been the go-to person to seek reviews and collaborate on for >> CephFS integration, python-manilaclient, manila-ui maintenance and >> support for the OpenStack client. She has also brought onboard and >> mentored multiple interns on the team (Fun fact: She was recognized as >> Mentor of Mentors [2] by this community). It gives me great joy that >> she agreed to help maintain the project as a core maintainer. >> >> Carlos Eduardo - Carlos has made significant contributions to Manila >> for the past two releases. He worked on several feature gaps with the >> DHSS=True driver mode, and is now working on graduating experimental >> features that the project has been building since the Newton release. >> He performs meaningful reviews that drive good design discussions. I >> am happy to note that he needed little mentoring to start reviewing >> the OpenStack Way [3] - this is a dead give away to me to spot a >> dedicated maintainer who cares about growing the community, along with >> the project. >> >> Please give me your +/- 1s for this proposal. 
>> >> Thank you, >> Goutham Pacha Ravi (gouthamr) >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >> [2] >> https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >> [3] >> https://docs.openstack.org/project-team-guide/review-the-openstack-way.html >> >> -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Thu Feb 20 08:30:00 2020 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 20 Feb 2020 09:30:00 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On 19. 02. 20 23:27, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > From J.Horstmann at mittwald.de Thu Feb 20 08:36:01 2020 From: J.Horstmann at mittwald.de (Jan Horstmann) Date: Thu, 20 Feb 2020 08:36:01 +0000 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Message-ID: <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> We had the exact same problem with an external ceph cluster and keyrings in source control. I can confirm that it works fine. The lookup was introduced on purpose in order to make vault encrypted keyrings possible ([1]). [1]: https://review.opendev.org/#/c/689753/ On Wed, 2020-02-19 at 19:02 +0100, Radosław Piliszek wrote: > I just realized we also do a lookup on them and not sure if that works though. > > -yoctozepto > > śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > Oh wow, I did not know it could do this transparently. Thanks, I will > > have a look at that. I can update the docs to reference this approach as > > well if it works out. > > > > Cheers! > > /Jason > > > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > > Hi Jason, > > > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > > It could go in docs. :-) > > > > > > -yoctozepto > > > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > > > Hi Jason, > > > > > > > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > > > > > > > Best regards, > > > > Michal > > > > > > > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > > > > > Hi all, > > > > > > > > > > My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > > > > > > > > > > I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. 
This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? > > > > > > > > > > Thanks, > > > > > /Jason > > > > > > > > > > [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > > > > > > > > > > > > > -- > > > > > Jason Anderson > > > > > > > > > > Chameleon DevOps Lead > > > > > Consortium for Advanced Science and Engineering, The University of Chicago > > > > > Mathematics & Computer Science Division, Argonne National Laboratory > > > > -- > > > > Michał Nasiadka > > > > mnasiadka at gmail.com -- Jan Horstmann Systementwickler | Infrastruktur _____ Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 j.horstmann at mittwald.de https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From amotoki at gmail.com Thu Feb 20 08:37:06 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Feb 2020 17:37:06 +0900 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <20200219180218.ahgxnglss3jrvqgp@firewall> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> Message-ID: On Thu, Feb 20, 2020 at 3:04 AM Nate Johnston wrote: > > On Wed, Feb 19, 2020 at 05:07:12PM +0100, Slawek Kaplonski wrote: > > Hi, > > > > Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. > > So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. > > > > Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? Thanks in advance for any help with that. > > Shall we follow the same process we used for LBaaS in [1] and [2], or does that > need to wait? In case of LBaaS, the repository was marked as 'deprecated' when the master branch was retired. When neutron-lbaas was deprecated (in the master branch), no change happened in the governance side. > I think there is a good chance we will not see another release of > neutron-fwaas code. I also would like to note that 'deprecation' does not mean we stop new releases. neutron-lbaas continued to cut releases even after the deprecation is enabled. IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens release and it was released till Stein. neutron-fwaas is being marked as deprecated, so I think we still need to release it until neutron-fwaas was retired in the master branch. Thanks, Akihiro > > Thanks > > Nate > > [1] https://review.opendev.org/#/c/705780/ > [2] https://review.opendev.org/#/c/658493/ > > > [1] https://review.opendev.org/708675 > > > > > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. 
> > > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > > > > > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: > > >> > > >> Hi, > > >> > > >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. > > >> So please reply to this email or contact me directly if You are interested in maintaining this project. > > >> > > >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: > > >>> > > >>> Hi, > > >>> > > >>> Over the past couple of cycles we have noticed that new contributions and > > >>> maintenance efforts for neutron-fwaas project were almost non existent. > > >>> This impacts patches for bug fixes, new features and reviews. The Neutron > > >>> core team is trying to at least keep the CI of this project healthy, but we > > >>> don’t have enough knowledge about the details of the neutron-fwaas > > >>> code base to review more complex patches. > > >>> > > >>> During the PTG in Shanghai we discussed that with operators and TC members > > >>> during the forum session [1] and later within the Neutron team during the > > >>> PTG session [2]. > > >>> > > >>> During these discussions, with the help of operators and TC members, we reached > > >>> the conclusion that we need to have someone responsible for maintaining project. > > >>> This doesn’t mean that the maintainer needs to spend full time working on this > > >>> project. Rather, we need someone to be the contact person for the project, who > > >>> takes care of the project’s CI and review patches. Of course that’s only a > > >>> minimal requirement. If the new maintainer works on new features for the > > >>> project, it’s even better :) > > >>> > > >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is > > >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas > > >>> as deprecated and in “V” cycle we will propose to move the project > > >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the > > >>> unofficial projects hosted in the “x/“ namespace. > > >>> > > >>> So if You are using this project now, or if You have customers who are > > >>> using it, please consider the possibility of maintaining it. Otherwise, > > >>> please be aware that it is highly possible that the project will be > > >>> deprecated and moved out from the official OpenStack projects. 
> > >>> > > >>> [1] > > >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward > > >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - > > >>> Lines 379-421 > > >>> [3] https://releases.openstack.org/ussuri/schedule.html > > >>> > > >>> -- > > >>> Slawek Kaplonski > > >>> Senior software engineer > > >>> Red Hat > > >> > > >> — > > >> Slawek Kaplonski > > >> Senior software engineer > > >> Red Hat > > >> > > > > > > — > > > Slawek Kaplonski > > > Senior software engineer > > > Red Hat > > > > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > From ssbarnea at redhat.com Thu Feb 20 08:39:15 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Thu, 20 Feb 2020 08:39:15 +0000 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, 19 Feb 2020 at 22:31, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbultel at redhat.com Thu Feb 20 08:51:15 2020 From: mbultel at redhat.com (Mathieu Bultel) Date: Thu, 20 Feb 2020 09:51:15 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 indeed On Wed, Feb 19, 2020 at 11:32 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Thu Feb 20 08:54:28 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 20 Feb 2020 09:54:28 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: <1126709d-2034-6a24-6146-d4d616c10a15@redhat.com> On 2/19/20 11:27 PM, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc.  In his > short tenure in TripleO Kevin has accomplished a lot and is the number > #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. 
this should have happened yesterday already thanks Kevin -- Giulio Fidente GPG KEY: 08D733BA From eblock at nde.ag Thu Feb 20 09:04:18 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 20 Feb 2020 09:04:18 +0000 Subject: VPNaaS with multiple endpoint groups In-Reply-To: <20200214073752.Horde.fEamBT4aCkEnszSfxNwdzso@webmail.nde.ag> Message-ID: <20200220090418.Horde.2-pw9T_UT902LggR7hsWDxl@webmail.nde.ag> Hi, I'll respond to my own question in case someone else is looking for something similar. It turned out that the customer tried to do this with IKEv1 instead v2 (I know, v2 is not really "new"). After configuring both sites to use v2 the tunnel excepted both peer endpoints. They also had a flaw in the network design which prevented a many-to-many connection through the same tunnel. They're reworking it, so for now this question can be considered as closed. Regards, Eugen Zitat von Eugen Block : > Hi all, > > is anyone here able to help with a vpn issue? > It's not really my strong suit but I'll try to explain. > In a Rocky environment (using openvswitch) a customer has setup a > VPN service successfully, but that only seems to work if there's > only one local and one peer endpoint group. According to the docs it > should work with multiple endpoint groups, as far as I could tell > the setup looks fine and matches the docs (don't create the subnet > when creating the vpn service but use said endpoint groups). > > What we're seeing is that as soon as the vpn site connection is > created with multiple endpoints only one of the destination IPs is > reachable. And it seems as if it's always the first in the list of > EPs (see below). > > This seems to be reflected in the iptables where we also only see > one of the required IP ranges. Also neutron reports duplicate rules > if we try to use both EPs: > > 2020-02-12 14:14:27.638 16275 WARNING > neutron.agent.linux.iptables_manager > [req-92ff6f06-3a92-4daa-aeea-9c02dc9a31c3 > ba9bf239530d461baea2f6f60bd301e6 850dad648ce94dbaa5c0ea2fb450bbda - > - -] Duplicate iptables rule detected. This may indicate a bug in > the iptables rule generation code. Line: -A > neutron-l3-agent-POSTROUTING -s X.X.252.0/24 -d Y.Y.0.0/16 -m policy > --dir out --pol ipsec -j ACCEPT > > These are the configured endpoints: > > root at control:~ # openstack vpn endpoint group list > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > | ID | Name | Type | > Endpoints > | > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > | 0f853567-e4bf-4019-9290-4cd9f94a9793 | peer-ep-group-1 | cidr | > [u'X.X.253.0/24'] > | > | 152a0f9e-ce49-4769-94f1-bc0bebedd3ec | peer-ep-group-2 | cidr | > [u'X.X.253.0/24', u'Y.Y.0.0/16'] > | > | 791ab8ef-e150-4ba0-ac2c-c044659f509e | local-ep-group1 | subnet | > [u'38efad5e-0f1e-4e36-8995-74a611bfef41'] > | > | 810b0bf2-d258-459b-9b57-ae5b491ea612 | local-ep-group2 | subnet | > [u'38efad5e-0f1e-4e36-8995-74a611bfef41', > u'9e35d80f-029e-4cc1-a30b-1753f7683e16'] | > | b5c79e08-41e4-441c-9ed3-9b02c2654173 | peer-ep-group-3 | cidr | > [u'Y.Y.0.0/16'] > | > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > > > Has anyone experience with this and could help me out? 
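Since the fix above boiled down to switching both sides to IKEv2, here is a rough
sketch of what the IKEv2 variant of such a setup can look like with the vpnaas OSC
plugin, for anyone who lands on this thread later. The policy and connection names
are made up, the peer address and PSK are placeholders, and the option spellings are
worth double-checking against "openstack vpn ... create --help" on your release; the
endpoint group names are the ones from the listing above:

    # phase 1 policy explicitly set to IKEv2
    openstack vpn ike policy create --ike-version v2 ikepolicy-v2
    openstack vpn ipsec policy create ipsecpolicy1

    # site connection referencing both the local and the peer endpoint groups
    openstack vpn ipsec site connection create \
        --vpnservice vpnservice1 \
        --ikepolicy ikepolicy-v2 \
        --ipsecpolicy ipsecpolicy1 \
        --local-endpoint-group local-ep-group2 \
        --peer-endpoint-group peer-ep-group-2 \
        --peer-address <peer-public-ip> \
        --peer-id <peer-public-ip> \
        --psk <shared-secret> \
        conn-site-a

Both sides need the IKEv2 policy: IKEv1 only negotiates a single traffic selector
pair per IPsec SA, so multiple local/peer CIDR pairs in one connection effectively
require IKEv2.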
> > Another follow-up question: how can we gather some information > regarding the ipsec status? Althoug there is a active tunnel we > don't see anything with 'ipsec statusall', I've checked all > namespaces on the control node. > > Any help is highly appreciated! > > Best regards, > Eugen From sbauza at redhat.com Thu Feb 20 09:58:17 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 20 Feb 2020 10:58:17 +0100 Subject: [nova] Ussuri feature scrub In-Reply-To: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> References: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Message-ID: On Mon, Feb 17, 2020 at 11:22 PM Eric Fried wrote: > Nova maintainers and contributors- > > { Please refer to this ML thread [1][2] for background. } > > Now that spec freeze has passed, I would like to assess the > Design:Approved blueprints and understand how many we could reasonably > expect to land in Ussuri. > > We completed 25 blueprints in Train. However, mriedem is gone, and it is > likely that I will drop off the radar after Q1. Obviously all > blueprints/releases/reviews/etc. are not created equal, but using > stackalytics review numbers as a rough heuristic, I expect about 20 > blueprints to get completed in Ussuri. If we figure that 5-ish of the > incompletes will be due to factors beyond our control, that would mean > we should Direction:Approve about 25. > > As of this writing: > - 30 blueprints are targeted for ussuri [3]. Of these, > - 7 are already implemented. Of the remaining 23, > - 2 are not yet Design:Approved. These will need an exception if they > are to proceed. And > - 19 (including the unapproved ones) have code in various stages. > > I would like to see us cut 5-ish of the 30. > > While I understand your concerns, I'd like us to stop thinking at the above fact as a problem. What's honestly the issue if we only have, say, 20 specs be implemented ? Also, why could we say which specs should be cut, if we already agreed them ? And which ones ? We are a community where everyone tries to work upstream when they can. And it's fine. -Sylvain I have made an etherpad [4] with the unimplemented blueprints listed > with owners and code links. I made notes on some of the ones I would > like to see prioritized, and a couple on which I'm more meh. If you have > a stake in Nova/Ussuri, I encourage you to weigh in. > > How will we ultimately decide? Will we actually cut anything? I don't > have the answers yet. Let's go through this exercise and see if anything > obvious falls out, and then we can figure out the next steps. > > Thanks, > efried > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009832.html > [2] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/thread.html#9835 > [3] https://blueprints.launchpad.net/nova/ussuri > [4] https://etherpad.openstack.org/p/nova-ussuri-planning > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Feb 20 10:21:53 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 20 Feb 2020 11:21:53 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: <301af001-2faa-1efc-11b0-ef8869850dae@redhat.com> On 19.02.2020 23:27, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc.  
In his > short tenure in TripleO Kevin has accomplished a lot and is the number > #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 +1 -- Best regards, Bogdan Dobrelya, Irc #bogdando From radoslaw.piliszek at gmail.com Thu Feb 20 10:26:41 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 20 Feb 2020 11:26:41 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> Message-ID: Ah, yeah. I had this gut feeling I tested that and I was right (but forgetful). -yoctozepto czw., 20 lut 2020 o 09:50 Jan Horstmann napisał(a): > > We had the exact same problem with an external ceph cluster and > keyrings in source control. I can confirm that it works fine. > The lookup was introduced on purpose in order to make vault encrypted > keyrings possible ([1]). > > > [1]: https://review.opendev.org/#/c/689753/ > > On Wed, 2020-02-19 at 19:02 +0100, Radosław Piliszek wrote: > > I just realized we also do a lookup on them and not sure if that works though. > > > > -yoctozepto > > > > śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > > Oh wow, I did not know it could do this transparently. Thanks, I will > > > have a look at that. I can update the docs to reference this approach as > > > well if it works out. > > > > > > Cheers! > > > /Jason > > > > > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > > > Hi Jason, > > > > > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > > > It could go in docs. :-) > > > > > > > > -yoctozepto > > > > > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > > > > Hi Jason, > > > > > > > > > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > > > > > > > > > Best regards, > > > > > Michal > > > > > > > > > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > > > > > > Hi all, > > > > > > > > > > > > My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > > > > > > > > > > > > I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? 
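A minimal sketch of the vault-encrypted keyring approach confirmed above, assuming
the default /etc/kolla/config node_custom_config directory and the per-service
keyring layout from the external Ceph guide referenced in this thread (exact
filenames depend on which services consume Ceph in your deployment):

    # encrypt the keyring in place before committing it to source control
    ansible-vault encrypt /etc/kolla/config/glance/ceph.client.glance.keyring

    # make the vault password available to ansible-playbook, then deploy as
    # usual; the copy/lookup transparently decrypts the file on its way to
    # the target hosts
    export ANSIBLE_VAULT_PASSWORD_FILE=~/.vault_pass.txt
    kolla-ansible -i inventory deploy

The same pattern works for the cinder and nova keyrings; only the encrypted files
end up in the repository, so no plaintext Ceph secrets need to live in source
control.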
> > > > > > > > > > > > Thanks, > > > > > > /Jason > > > > > > > > > > > > [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > > > > > > > > > > > > > > > > -- > > > > > > Jason Anderson > > > > > > > > > > > > Chameleon DevOps Lead > > > > > > Consortium for Advanced Science and Engineering, The University of Chicago > > > > > > Mathematics & Computer Science Division, Argonne National Laboratory > > > > > -- > > > > > Michał Nasiadka > > > > > mnasiadka at gmail.com > -- > Jan Horstmann > Systementwickler | Infrastruktur > _____ > > > Mittwald CM Service GmbH & Co. KG > Königsberger Straße 4-6 > 32339 Espelkamp > > Tel.: 05772 / 293-900 > Fax: 05772 / 293-333 > > j.horstmann at mittwald.de > https://www.mittwald.de > > Geschäftsführer: Robert Meyer, Florian Jürgens > > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad > Oeynhausen > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad > Oeynhausen > > Informationen zur Datenverarbeitung im Rahmen unserer > Geschäftstätigkeit > gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From lyarwood at redhat.com Thu Feb 20 10:28:40 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 20 Feb 2020 10:28:40 +0000 Subject: [nova] context being introduced to nova.virt.driver.ComputeDriver.extend_volume Message-ID: <20200220102840.bmlwg6jfmjcunqio@lyarwood.usersys.redhat.com> Hello all, I'm looking to introduce the request context to the signature of extend_volume in the following change: virt: Pass request context to extend_volume https://review.opendev.org/#/c/706899/ Any out of tree drivers implementing extend_volume will also need to add the request context to their implementations once this lands. This is part of a wider bugfix series below: https://review.opendev.org/#/q/topic:bug/1861071 Also discussed on the ML in the following thread: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012551.html If there are any concerns or issues with this then please let me know! Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Thu Feb 20 10:29:44 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 11:29:44 +0100 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: <7a7657fa-0bc1-d6db-49ee-94e3b49e6d85@openstack.org> Eric Fried wrote: > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. Thanks Eric for everything you did for Nova, but also to keep the OpenStack gate up and running for everyone. It was a pleasure and privilege to work with you. Good luck on your future endeavors ! > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. Another one bites the dust, but The show must go on. 
-- Thierry From thierry at openstack.org Thu Feb 20 10:33:45 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 11:33:45 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> Message-ID: <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> Akihiro Motoki wrote: > [...] > I also would like to note that 'deprecation' does not mean we stop new releases. > neutron-lbaas continued to cut releases even after the deprecation is enabled. > IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens > release and it was released till Stein. > neutron-fwaas is being marked as deprecated, so I think we still need > to release it until neutron-fwaas was retired in the master branch. If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. -- Thierry Carrez (ttx) From skaplons at redhat.com Thu Feb 20 11:03:20 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 20 Feb 2020 12:03:20 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> Message-ID: <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> Hi, > On 20 Feb 2020, at 11:33, Thierry Carrez wrote: > > Akihiro Motoki wrote: >> [...] >> I also would like to note that 'deprecation' does not mean we stop new releases. >> neutron-lbaas continued to cut releases even after the deprecation is enabled. >> IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens >> release and it was released till Stein. >> neutron-fwaas is being marked as deprecated, so I think we still need >> to release it until neutron-fwaas was retired in the master branch. > > If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. > > I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. Thx. I though that we will for sure release it in Ussuri still. And it’s good idea to add such note about not releasing for Victoria if there will be still nobody to take care of it. I will add it to the deprecation warning. And in such case, I think that deprecation process which worked for LBaaS and which Nate pointed to, will be good to apply here but in Victoria cycle, not now, right? 
> > -- > Thierry Carrez (ttx) > — Slawek Kaplonski Senior software engineer Red Hat From thierry at openstack.org Thu Feb 20 11:18:48 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 12:18:48 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> Message-ID: <8a360821-bea6-977e-ee87-99b904d1918b@openstack.org> Slawek Kaplonski wrote: > Hi, > >> On 20 Feb 2020, at 11:33, Thierry Carrez wrote: >> >> Akihiro Motoki wrote: >>> [...] >>> I also would like to note that 'deprecation' does not mean we stop new releases. >>> neutron-lbaas continued to cut releases even after the deprecation is enabled. >>> IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens >>> release and it was released till Stein. >>> neutron-fwaas is being marked as deprecated, so I think we still need >>> to release it until neutron-fwaas was retired in the master branch. >> >> If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. >> >> I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. > > Thx. I though that we will for sure release it in Ussuri still. And it’s good idea to add such note about not releasing for Victoria if there will be still nobody to take care of it. I will add it to the deprecation warning. > And in such case, I think that deprecation process which worked for LBaaS and which Nate pointed to, will be good to apply here but in Victoria cycle, not now, right? Yes. -- Thierry From tpb at dyncloud.net Thu Feb 20 12:29:42 2020 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 20 Feb 2020 07:29:42 -0500 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: <20200220122942.sb3yileepw2qqmb4@barron.net> +1 !!! On 19/02/20 15:44 -0800, Goutham Pacha Ravi wrote: >Hello Zorillas/Stackers, > >I'd like to make propose some core team additions. Earlier in the >cycle [1], I sought contributors who are interested in helping us >maintain and grow manila. I'm happy to report that we had an fairly >enthusiastic response. I'd like to propose two individuals who have >stepped up to join the core maintainers team. Bear with me as I seek >to support my proposals with my personal notes of endorsement: > >Victoria Martinez de la Cruz - Victoria has been contributing to >Manila since it's inception. She has played various roles during this >time and has contributed in significant ways to build this community. >She's been the go-to person to seek reviews and collaborate on for >CephFS integration, python-manilaclient, manila-ui maintenance and >support for the OpenStack client. She has also brought onboard and >mentored multiple interns on the team (Fun fact: She was recognized as >Mentor of Mentors [2] by this community). It gives me great joy that >she agreed to help maintain the project as a core maintainer. > >Carlos Eduardo - Carlos has made significant contributions to Manila >for the past two releases. 
He worked on several feature gaps with the >DHSS=True driver mode, and is now working on graduating experimental >features that the project has been building since the Newton release. >He performs meaningful reviews that drive good design discussions. I >am happy to note that he needed little mentoring to start reviewing >the OpenStack Way [3] - this is a dead give away to me to spot a >dedicated maintainer who cares about growing the community, along with >the project. > >Please give me your +/- 1s for this proposal. > >Thank you, >Goutham Pacha Ravi (gouthamr) > >[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >[2] https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >[3] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > From zhengyupann at 163.com Thu Feb 20 12:48:42 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Thu, 20 Feb 2020 20:48:42 +0800 (CST) Subject: [neutron]After update neutron-lbaas, How to not restart neutron-server to let new neutron-lbaas code work? Message-ID: <22ba60c2.95af.17062a4a75f.Coremail.zhengyupann@163.com> Hi, I modified neutron-lbaas load balancer code to support our special need. I have updated neutron-lbaas code in the controller node. But i don't want to restart neutron-server in case the customer's business was interrupted. How can i let netron-lbaas code work in the case of not restarting neutrn-sever in controller node? -- Best! ZhengYu! -------------- next part -------------- An HTML attachment was scrubbed... URL: From vhariria at redhat.com Thu Feb 20 13:14:04 2020 From: vhariria at redhat.com (Vida Haririan) Date: Thu, 20 Feb 2020 08:14:04 -0500 Subject: [manila] Core Team additions In-Reply-To: <20200220122942.sb3yileepw2qqmb4@barron.net> References: <20200220122942.sb3yileepw2qqmb4@barron.net> Message-ID: <5a365a6d-454d-c2b9-5d3d-eaa3a48b4c1b@redhat.com> +1 :) On 2/20/20 7:29 AM, Tom Barron wrote: > +1 !!! > > On 19/02/20 15:44 -0800, Goutham Pacha Ravi wrote: >> Hello Zorillas/Stackers, >> >> I'd like to make propose some core team additions. Earlier in the >> cycle [1], I sought contributors who are interested in helping us >> maintain and grow manila. I'm happy to report that we had an fairly >> enthusiastic response. I'd like to propose two individuals who have >> stepped up to join the core maintainers team. Bear with me as I seek >> to support my proposals with my personal notes of endorsement: >> >> Victoria Martinez de la Cruz - Victoria has been contributing to >> Manila since it's inception. She has played various roles during this >> time and has contributed in significant ways to build this community. >> She's been the go-to person to seek reviews and collaborate on for >> CephFS integration, python-manilaclient, manila-ui maintenance and >> support for the OpenStack client. She has also brought onboard and >> mentored multiple interns on the team (Fun fact: She was recognized as >> Mentor of Mentors [2] by this community). It gives me great joy that >> she agreed to help maintain the project as a core maintainer. >> >> Carlos Eduardo - Carlos has made significant contributions to Manila >> for the past two releases. He worked on several feature gaps with the >> DHSS=True driver mode, and is now working on graduating experimental >> features that the project has been building since the Newton release. >> He performs meaningful reviews that drive good design discussions. 
I >> am happy to note that he needed little mentoring to start reviewing >> the OpenStack Way [3] - this is a dead give away to me to spot a >> dedicated maintainer who cares about growing the community, along with >> the project. >> >> Please give me your +/- 1s for this proposal. >> >> Thank you, >> Goutham Pacha Ravi (gouthamr) >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >> [2] >> https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >> [3] >> https://docs.openstack.org/project-team-guide/review-the-openstack-way.html >> > From amotoki at gmail.com Thu Feb 20 14:41:07 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Feb 2020 23:41:07 +0900 Subject: [neutron] networking-bagpipe gate failure Message-ID: Hi, networking-bagpipe gate is broken now due to its dependencies. The situation is complicated so I am summarizing it and exploring the right solution. # what happens now Examples of the gate failure are [1] and [2], and the exact failure is found at [3]. It fails due to horizon dependency from networking-bgpvpn train release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master (horizon==18.0.0). The neutron team has not released a beta for ussuri, so requirements.txt tries to install networking-bgpvpn train which has capping of horizon version. The capping of horizon in networking-bgpvpn was introduced in [6] and we cut a release so it started to cause the failure like this. We've explored several workarounds to avoid it including specifying horizon in networking-bagpipe, specify horizon in required-projects in networking-bagpipe and dropping networking-bgpvpn in requirements.txt in networking-bagpipe, but all of them do not work. # possible solutions I am thinking two options. The one is to cut a beta release in neutron stadium for ussuri. The other is to uncap horizon in networking-bgpvpn train and release it. I believe both work but the first one would be better as it is time to release beta for Ussuri. Discussing it in the IRC, we are planning to release beta soon. (ovn-octavia-provider is also waiting for a beta release of neutron.) # Side notes Capping dependencies in stable branches is not what we usually do. Why we don't do this was discussed in the mailing list thread [4] and it is highlighted in [5]. Thanks, Akihiro Motoki (irc: amotoki) [1] https://review.opendev.org/#/c/708829/ [2] https://review.opendev.org/#/c/703949/ [3] https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html [6] https://review.opendev.org/#/c/699456/ From isanjayk5 at gmail.com Thu Feb 20 14:59:24 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Thu, 20 Feb 2020 20:29:24 +0530 Subject: [openstack-helm][neutron] neutron-dhcp-agent stuck in Init state Message-ID: Hi All, I am trying to deploy openstack services using helm charts in my k8s cluster. My cluster is having 4 nodes - 1 k8s master, 1 controller and 2 compute nodes. I am using helm v3.0.3 for openstack stein. I have already deployed the below charts in my cluster successfully in the respective order - ingress, mariadb, rabbitmq, memcached, keystone, glance and then neutron. For mariadb, rabbitmq and glance, I am using PV and PVC in my local storage while deploying. All are ok until I deploy neutron. 
After deployment, all neutron pods works well except *neutron-dhcp-agent-default pod *which remains in Init:0/2 state. The pod description shows it has dependencies shown as - DEPENDENCY_SERVICE: openstack:rabbitmq,openstack:neutron-server,openstack:nova-api DEPENDENCY_JOBS: neutron-rabbit-init pods state- NAME READY STATUS RESTARTS AGE glance-api-b575ff7c-qp7wt 1/1 Running 0 132m glance-bootstrap-zjtw9 0/1 Completed 0 132m glance-db-init-86xgk 0/1 Completed 0 132m glance-db-sync-z9p9l 0/1 Completed 0 132m glance-ks-endpoints-wwm59 0/3 Completed 0 132m glance-ks-service-vs92t 0/1 Completed 0 132m glance-ks-user-znn88 0/1 Completed 0 132m glance-metadefs-load-g685b 0/1 Completed 0 132m glance-rabbit-init-648w8 0/1 Completed 0 132m glance-storage-init-jdlx2 0/1 Completed 0 132m ingress-8f98f7d96-llkwl 1/1 Running 0 3h42m ingress-error-pages-84647d8fcb-6j6bz 1/1 Running 0 3h42m keystone-api-5785f4787-lz296 1/1 Running 0 154m keystone-bootstrap-6tcdx 0/1 Completed 0 154m keystone-credential-setup-d9js5 0/1 Completed 0 154m keystone-db-init-zsgbj 0/1 Completed 0 154m keystone-db-sync-z8hfk 0/1 Completed 0 154m keystone-domain-manage-nk48h 0/1 Completed 0 154m keystone-fernet-setup-4pzlj 0/1 Completed 0 154m keystone-rabbit-init-mlpsn 0/1 Completed 0 154m mariadb-ingress-669c67b6b5-w8jxb 1/1 Running 0 3h32m mariadb-ingress-error-pages-d77467d69-9njgt 1/1 Running 0 3h32m mariadb-server-0 1/1 Running 0 3h32m memcached-memcached-7b49f48865-tf6xc 1/1 Running 0 3h11m neutron-db-init-zpl87 0/1 Completed 0 109m neutron-db-sync-plsnc 0/1 Completed 0 109m neutron-dhcp-agent-default-sf76z 0/1 Init:0/2 0 109m neutron-ks-endpoints-jgmdv 0/3 Completed 0 109m neutron-ks-service-nr5w2 0/1 Completed 0 109m neutron-ks-user-lw4p4 0/1 Completed 0 109m neutron-l3-agent-default-lnzk8 1/1 Running 0 109m neutron-lb-agent-default-44v9x 1/1 Running 0 109m neutron-lb-agent-default-94xw4 1/1 Running 0 109m neutron-lb-agent-default-whms2 1/1 Running 0 109m neutron-metadata-agent-default-7rbrh 1/1 Running 0 109m neutron-rabbit-init-qkhqm 0/1 Completed 0 109m neutron-server-964fcffcb-gvv4k 1/1 Running 0 109m rabbitmq-cluster-wait-vjbzq 0/1 Completed 0 3h18m rabbitmq-rabbitmq-0 1/1 Running 0 3h18m Please guide me how to resolve this neutron-dhcp-agent pod so that I can deploy other services after that. I have plan to deploy libvirt, nova, cinder, horizon and ceilometer charts into my cluster. thank you for your help and support. best regards, Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Feb 20 15:12:13 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:12:13 -0600 Subject: [nova] Ussuri feature scrub In-Reply-To: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> References: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Message-ID: <0ffb5d3d-b8bc-4d2c-ef73-0c3ac2558e08@fried.cc> > I would like to see us cut 5-ish of the 30. We agreed in the nova meeting today to drop this idea and just go with the existing "process" [1]. As of today, we have 29 approved blueprints, and one eligible for exception if the spec can be approved by EOB tomorrow. efried [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-131 From james.slagle at gmail.com Thu Feb 20 15:16:38 2020 From: james.slagle at gmail.com (James Slagle) Date: Thu, 20 Feb 2020 10:16:38 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1. Thanks Kevin for your contributions and leadership. 
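For the neutron-dhcp-agent-default pod stuck in Init:0/2 reported above: that state
usually means the kubernetes-entrypoint init container is still waiting on one of
its listed dependencies, and the dependency list quoted there includes
openstack:nova-api, which has not been deployed yet in this cluster. A first round
of checks along the lines Roman suggests could look like the following, assuming the
charts are deployed into the "openstack" namespace and that the init container is
named "init" as in the standard openstack-helm daemonsets:

    kubectl -n openstack describe pod neutron-dhcp-agent-default-sf76z
    kubectl -n openstack logs neutron-dhcp-agent-default-sf76z -c init
    kubectl -n openstack get svc neutron-server nova-api

If the init container logs show it waiting for nova-api, either deploy nova before
the DHCP agent is expected to come up, or adjust the DHCP agent's dependency list in
the neutron chart values override.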
On Wed, Feb 19, 2020 at 5:32 PM Wesley Hayutin wrote: > > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the tripleo-ansible project, transformation, mistral to ansible etc. In his short tenure in TripleO Kevin has accomplished a lot and is the number #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 -- -- James Slagle -- From openstack at fried.cc Thu Feb 20 15:19:56 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:19:56 -0600 Subject: [nova][sfe] Support re-configure deleted_on_termination in server In-Reply-To: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> References: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> Message-ID: <413aa140-feba-072e-ddb8-74d73f1e825a@fried.cc> We discussed this in today's Nova meeting. We agreed to grant the exception if those can be resolved and the spec can get two +2s by EOB tomorrow (Friday 20200221). http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-62 On 2/17/20 8:57 PM, Brin Zhang(张百林) wrote: > Hi, nova: > >        We would like to have a spec freeze exception for the spec: > Support re-configure deleted_on_termination in server [1], and it’s PoC > code in [2] > >        > > I will attend the nova meeting on February 20 2020 1400 UTCas much as > possible. > >   > >          [1] https://review.opendev.org/#/c/580336/ > >          [2] https://review.opendev.org/#/c/693828/ > >   > > brinzhang > From openstack at fried.cc Thu Feb 20 15:18:22 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:18:22 -0600 Subject: [nova][sfe] Support volume local cache In-Reply-To: References: Message-ID: <1d10d684-2f25-45f5-b3d3-59536aa0f863@fried.cc> We agreed in today's Nova meeting to grant this exception. http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-48 On 2/17/20 7:18 PM, Fang, Liang A wrote: > Hi > >   > > We would like to have a spec freeze exception for the spec: Support > volume local cache [1]. This is part of cross project contribution, with > another spec in cinder [2]. > >   > > I will attend the Nova meeting on February 20 2020 1400 UTC. > >   > > [1] https://review.opendev.org/#/c/689070/ > > [2] https://review.opendev.org/#/c/684556/ > >   > > Regards > > Liang > >   > From gmann at ghanshyammann.com Thu Feb 20 15:35:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 20 Feb 2020 09:35:13 -0600 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: <170633d1ad4.116c4b33a148521.5355733457340253139@ghanshyammann.com> ---- On Wed, 19 Feb 2020 11:43:57 -0600 Eric Fried wrote ---- > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. Thanks, Eric for all your contribution around OpenStack not just Nova. 
It was great working with you. You have good leadership skills which helped the community a lot. Bets of luck for your new position. -gmann > > Thanks, > efried > > [1] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > From elod.illes at est.tech Thu Feb 20 15:41:20 2020 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Thu, 20 Feb 2020 15:41:20 +0000 Subject: [neutron] networking-bagpipe gate failure In-Reply-To: References: Message-ID: Thanks Akihiro for the summary! I think "possible solution 1" could work. Nevertheless I've pushed a revert [7] for the capping patch [6], since that is now completely unnecessary (given that horizon is added in upper constraints of Train). It is good to communicate the proper way of fixing these issues [4], especially since that changed recently as in the past e.g. neutron and horizon were not allowed to be added to upper-constraints [8]. Thanks, Előd [7] https://review.opendev.org/#/c/708865/ [8] https://review.opendev.org/#/c/631300/ On 2020. 02. 20. 15:41, Akihiro Motoki wrote: > Hi, > > networking-bagpipe gate is broken now due to its dependencies. > The situation is complicated so I am summarizing it and exploring the > right solution. > > # what happens now > > Examples of the gate failure are [1] and [2], and the exact failure is > found at [3]. > It fails due to horizon dependency from networking-bgpvpn train > release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master > (horizon==18.0.0). > The neutron team has not released a beta for ussuri, so > requirements.txt tries to install networking-bgpvpn train which has > capping of horizon version. > The capping of horizon in networking-bgpvpn was introduced in [6] and > we cut a release so it started to cause the failure like this. > > We've explored several workarounds to avoid it including specifying > horizon in networking-bagpipe, specify horizon in required-projects in > networking-bagpipe and dropping networking-bgpvpn in requirements.txt > in networking-bagpipe, but all of them do not work. > > # possible solutions > > I am thinking two options. > The one is to cut a beta release in neutron stadium for ussuri. > The other is to uncap horizon in networking-bgpvpn train and release it. > > I believe both work but the first one would be better as it is time to > release beta for Ussuri. > Discussing it in the IRC, we are planning to release beta soon. > (ovn-octavia-provider is also waiting for a beta release of neutron.) > > # Side notes > > Capping dependencies in stable branches is not what we usually do. > Why we don't do this was discussed in the mailing list thread [4] and > it is highlighted in [5]. > > Thanks, > Akihiro Motoki (irc: amotoki) > > [1] https://review.opendev.org/#/c/708829/ > [2] https://review.opendev.org/#/c/703949/ > [3] https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 > [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 > [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html > [6] https://review.opendev.org/#/c/699456/ > From katonalala at gmail.com Thu Feb 20 15:41:49 2020 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 20 Feb 2020 16:41:49 +0100 Subject: [neutron] networking-bagpipe gate failure In-Reply-To: References: Message-ID: Hi Akihiro, Thanks for summarizing. Just to make it written, networking-odl gate is suffering from the same problem. 
Regards Lajos Akihiro Motoki ezt írta (időpont: 2020. febr. 20., Cs, 15:45): > Hi, > > networking-bagpipe gate is broken now due to its dependencies. > The situation is complicated so I am summarizing it and exploring the > right solution. > > # what happens now > > Examples of the gate failure are [1] and [2], and the exact failure is > found at [3]. > It fails due to horizon dependency from networking-bgpvpn train > release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master > (horizon==18.0.0). > The neutron team has not released a beta for ussuri, so > requirements.txt tries to install networking-bgpvpn train which has > capping of horizon version. > The capping of horizon in networking-bgpvpn was introduced in [6] and > we cut a release so it started to cause the failure like this. > > We've explored several workarounds to avoid it including specifying > horizon in networking-bagpipe, specify horizon in required-projects in > networking-bagpipe and dropping networking-bgpvpn in requirements.txt > in networking-bagpipe, but all of them do not work. > > # possible solutions > > I am thinking two options. > The one is to cut a beta release in neutron stadium for ussuri. > The other is to uncap horizon in networking-bgpvpn train and release it. > > I believe both work but the first one would be better as it is time to > release beta for Ussuri. > Discussing it in the IRC, we are planning to release beta soon. > (ovn-octavia-provider is also waiting for a beta release of neutron.) > > # Side notes > > Capping dependencies in stable branches is not what we usually do. > Why we don't do this was discussed in the mailing list thread [4] and > it is highlighted in [5]. > > Thanks, > Akihiro Motoki (irc: amotoki) > > [1] https://review.opendev.org/#/c/708829/ > [2] https://review.opendev.org/#/c/703949/ > [3] > https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 > [4] > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 > [5] > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html > [6] https://review.opendev.org/#/c/699456/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Feb 20 16:32:54 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 20 Feb 2020 10:32:54 -0600 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Eric, Sorry that you are leaving the community.  Thanks for all you have done for OpenStack and Nova over the years.  It has been good to work with you. Best of luck in your future endeavors! Jay On 2/19/2020 11:43 AM, Eric Fried wrote: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. 
> > Thanks, > efried > > [1] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > From gouthampravi at gmail.com Thu Feb 20 17:42:14 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 20 Feb 2020 09:42:14 -0800 Subject: [manila][stable] Core Team removals - Message-ID: Hello Zorillas / Stackers, We sadly bid adieu to two of our core maintainers in Ussuri: Ben Swartzlander and Zhong Jun. Both of them are busy with other projects at the moment, and are able to continue to participate in this community, albeit without core maintenance responsibility. We thanked these awesome individuals in our community meeting today [1] and would love to have them back should they decide to get involved again. I would like help from the stable maintenance team to remove Ben and Zhong jun from the stable core team [2]. Thanks, Goutham Pacha Ravi (gouthamr) [1] http://eavesdrop.openstack.org/meetings/manila/2020/manila.2020-02-20-15.00.log.html#l-56 [2] https://review.opendev.org/#/admin/groups/1099,members From amoralej at redhat.com Thu Feb 20 17:50:37 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Thu, 20 Feb 2020 18:50:37 +0100 Subject: [tripleo][rdo][rdo-dev] Status of RDO Trunk Ussuri on CentOS 7 and transition to CentOS 8 In-Reply-To: References: Message-ID: Hi, I'd like to open a discussion about the status of RDO Ussuri repositories on CentOS7. As you know RDO and upstream teams (kolla, puppet-openstack, TripleO, TripleO CI, etc...) have been working to switch to CentOS8 during last few weeks. In order to make the transition easier from CentOS 7 to CentOS 8, RDO is still maintaining Trunk repos consistent for both CentOS 7/Python 2 and CentOS 8/Python 3. As OpenStack projects have been dropping support for P ython 2, we've started pinning them to the last commit working with Python 2[1], we were expecting that transition will finish soon but it's still going on. Over time, the number of pinned packages has been growing including services and Oslo libraries where we can't follow upper-constraints anymore [2]. Recently, Kolla has removed support for CentOS 7 so i doubt it makes sense to keep pinning packages to keep RDO Trunk consistent artificially and continue running promotion pipelines on a repo with so many outdated packages. Also, pinning these projects makes that changes needed for CentOS 8 will not be in RDO and would need to be backported manually to each package. My proposal is: - Unpin all packages in Ussuri to follow master trunk, or versions in upper -constraints (for clients and libraries). - RDO Ussuri on CentOS 7 repo consistent link will not move anymore (so no more promotions based on it). - We will keep running centos7-master DLRN builder, so that packages still builing with Python 2 will be available in current repo [3] to be used by teams needing them until migration to CentOS 8 is finished everywhere. - Projects which already have CentOS 8 jobs gating in master branch can remove CentOS 7 ones. We understand this can add some pressure on moving to CentOS8 to the teams working on it, but I'd say it's already a priority and it's justified at this stage. What do you think about this plan?, is there any reason to keep CentOS 7 artificially consistent and promoting at this point of the transition to CentOS 8? 
Best regards, Alfredo [1] https://review.rdoproject.org/r/#/q/topic:pin-py2 [2] https://review.rdoproject.org/r/#/c/24796/ [3] http://trunk.rdoproject.org/centos7-master/current -------------- next part -------------- An HTML attachment was scrubbed... URL: From j at ocado.com Thu Feb 20 20:55:48 2020 From: j at ocado.com (Justin Cattle) Date: Thu, 20 Feb 2020 20:55:48 +0000 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config Message-ID: Hi, I'm reaching out for help with a strange issue I've found. Running openstack queens, on ubuntu xenial. We have a bunch of different sites with the same set-up, recently upgraded from mitaka to queens. However, on this one site, after the upgrade, we cannot start neutron-server. The reason is, that the ml2 plugin throws an error because it can't find auth_url from the keystone_authtoken section of neutron.conf. However, it is there in the file. The ml2 plugin is calico, it fails with this error: 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in function %s: TypeError: expected string or buffer 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most recent call last): 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, in wrapped 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return fn(*args, **kwargs) 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", line 347, in _post_fork_init 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico auth_url=re.sub(r'/v3/?$', '', auth_url) + 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/re.py", line 155, in sub 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return _compile(pattern, flags).sub(repl, string, count) 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: expected string or buffer When you look at the code, this is because neither auth_url or is found in cfg.CONF.keystone_authtoken. The config defintely exists. I have copied the neutron.conf config from a working site, same error. I have copied the entire /etc/neutron directory from a working site, same error. I have check with strace, and /etc/neutron/neutron.conf is the only neutron.conf being parsed. Here is the keystone_authtoken part of the config: [keystone_authtoken] auth_uri=https://api-srv-cloud.host.domain:5000 region_name=openstack memcached_servers=1.2.3.4:11211 auth_type=password auth_url=https://api-srv-cloud.host.domain:5000 username=neutron password=xxxxxxxxxxxxxxxxxxxxxxxxx user_domain_name=Default project_name=services project_domain_name=Default I'm struggling to understand how the auth_url config is really registered in via oslo_config. I found an excellent exchagne on the ML here: https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware This seems to indicate auth_url is only registered if a particular auth plugin requires it. 
But I can't find the plugin code that does it, so I'm not sure how/where to debug it properly. If anyone has any ideas, I would really appreciate some input or pointers. Thanks! Cheers, Just -- Notice: This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. -------------- next part -------------- An HTML attachment was scrubbed... URL: From j at ocado.com Thu Feb 20 21:06:25 2020 From: j at ocado.com (Justin Cattle) Date: Thu, 20 Feb 2020 21:06:25 +0000 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: References: Message-ID: Just to add, it also doesn't seem to be registering the password option from keystone_authtoken either. So, makes me think the auth plugin isn't loading , or not the right one at least ?? Cheers, Just On Thu, 20 Feb 2020 at 20:55, Justin Cattle wrote: > Hi, > > > I'm reaching out for help with a strange issue I've found. Running > openstack queens, on ubuntu xenial. > > We have a bunch of different sites with the same set-up, recently upgraded > from mitaka to queens. However, on this one site, after the upgrade, we > cannot start neutron-server. The reason is, that the ml2 plugin throws an > error because it can't find auth_url from the keystone_authtoken section of > neutron.conf. However, it is there in the file. 
> > The ml2 plugin is calico, it fails with this error: > > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in > function %s: TypeError: expected string or buffer > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most > recent call last): > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, > in wrapped > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico return > fn(*args, **kwargs) > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", > line 347, in _post_fork_init > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico > auth_url=re.sub(r'/v3/?$', '', auth_url) + > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/re.py", line 155, in sub > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico return > _compile(pattern, flags).sub(repl, string, count) > 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: > expected string or buffer > > > When you look at the code, this is because neither auth_url or is found > in cfg.CONF.keystone_authtoken. The config defintely exists. > > I have copied the neutron.conf config from a working site, same error. I > have copied the entire /etc/neutron directory from a working site, same > error. > > I have check with strace, and /etc/neutron/neutron.conf is the > only neutron.conf being parsed. > > Here is the keystone_authtoken part of the config: > > [keystone_authtoken] > auth_uri=https://api-srv-cloud.host.domain:5000 > region_name=openstack > memcached_servers=1.2.3.4:11211 > auth_type=password > auth_url=https://api-srv-cloud.host.domain:5000 > username=neutron > password=xxxxxxxxxxxxxxxxxxxxxxxxx > user_domain_name=Default > project_name=services > project_domain_name=Default > > > I'm struggling to understand how the auth_url config is really registered > in via oslo_config. > I found an excellent exchagne on the ML here: > > > https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware > > This seems to indicate auth_url is only registered if a particular auth > plugin requires it. But I can't find the plugin code that does it, so I'm > not sure how/where to debug it properly. > > If anyone has any ideas, I would really appreciate some input or pointers. > > Thanks! > > > Cheers, > Just > -- Notice: This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. 
The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Thu Feb 20 21:42:55 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Thu, 20 Feb 2020 15:42:55 -0600 Subject: [security] Vulnerability Management Policy Changes Message-ID: The Vulnerability Management Team (VMT) has recently updated the vulnerability:managed policy[0] here, the key points are: - Softened our #5 policy from a hard requirement to a recommendation - Clarified that the VMT does not track external software components - Defined that a project must tag releases to qualify for VMT oversight, and that the VMT only deals with vulnerabilities in real releases (not pre-releases, release candidates, milestones...) - Private embargo's shall not last more than 90 days, except under unusual circumstances With the VMT policy changes[0] merged, we have also updated the VMT process document[1] to match. The biggest change to note is the new 90 day embargo limit: "If a report is held in embargo for 90 days without a fix, or significant details of the report are disclosed in a public venue, the embargo is terminated by a VMT coordinator at that time and subsequent process switches to the public report workflow instead." We'll be updating all current private reports to let participants know that there is a 90-day deadline (from when we update the report) to make those reports public. [0] https://review.opendev.org/#/c/678426/ [1] https://security.openstack.org/vmt-process.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Thu Feb 20 21:53:57 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 20 Feb 2020 16:53:57 -0500 Subject: [cinder] ussuri virtual mid-cycle part 2 poll Message-ID: <4066896c-9080-c04f-7280-88fb61ef2ad5@gmail.com> The second session of the Cinder virtual mid-cycle will be held the week of 16 March. As with part 1, the length will be 2 hours. There's a poll up to determine a suitable day and time that week: https://doodle.com/poll/eme483iv2faupn6z Please fill out the poll as soon as you can. If all the times are horrible for you, please suggest an alternative in a comment on the poll. The poll closes at 21:00 UTC on Wednesday 26 February 2020. cheers, brian From paye600 at gmail.com Thu Feb 20 22:26:54 2020 From: paye600 at gmail.com (Roman Gorshunov) Date: Thu, 20 Feb 2020 23:26:54 +0100 Subject: [openstack-helm][neutron] neutron-dhcp-agent stuck in Init state In-Reply-To: References: Message-ID: Hello Sanjay, Do you have logs and detailed info for the neutron-dhcp-agent-default pod and possibly containers inside it which are stuck in Init phase? neutron-server and rabbitmq seem to be running fine. Here are some docs and code you could get hints from: - https://docs.openstack.org/openstack-helm/latest/devref/networking.html#neutron-dhcp-agent - https://github.com/openstack/openstack-helm/blob/master/neutron/templates/daemonset-dhcp-agent.yaml - https://github.com/openstack/openstack-helm/blob/master/neutron/templates/bin/_neutron-dhcp-agent-init.sh.tpl - https://github.com/openstack/openstack-helm/blob/master/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl Best regards, — Roman Gorshunov -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yangyi01 at inspur.com Fri Feb 21 00:38:36 2020 From: yangyi01 at inspur.com (=?gb2312?B?WWkgWWFuZyAo0e6gRCkt1Ma3/s7xvK/NxQ==?=) Date: Fri, 21 Feb 2020 00:38:36 +0000 Subject: [neutron] Why network performance is extremely bad and linearly related with number of VMs? Message-ID: <1ab7f827833846cd9ee1fd3fe93ce407@inspur.com> Hi, All Anybody has noticed network performance between VMs is extremely bad, it is basically linearly related with numbers of VMs in same compute node. In my case, if I launch one VM per compute node and run iperf3 tcp and udp, performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP packets, it can reach 180000 pps (packets per second), but if I launch two VMs per compute node (note: they are in the same subnet) and only run pps test case, that will be decrease to about 90000 pps, if I launch 3 VMs per compute node, that will be about 50000 pps, I tried to find out the root cause, other VMs in this subnet (they are in the same compute node as iperf3 client) can receive all the packets iperf3 client VM sent out although destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of iperf3 server VM in another compute node, by further check, I did find qemu instances of these VMs have higher CPU utilization and corresponding vhost kernel threads also also higher CPU utilization, to be importantly, I did find ovs was broadcasting these packets because all the ovs bridges didn’t learn this destination MAC. I tried this in Queens and Rocky, the same issue is there. By the way, we’re using linux bridge for security group, so VM tap interface is attached into linux bridge which is connected to br-int by veth pair. Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched many VMs: recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, used:0.000s, flags:SP., actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 ,19 $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 All the bridges can’t learn this MAC. My question is why ovs bridges can’t learn MACs of other compute nodes, is this common issue of all the Openstack versions? Is there any known existing way to fix it? Look forward to hearing your insights and solutions, thank you in advance and have a good day. -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From gouthampravi at gmail.com Thu Feb 20 17:44:43 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 20 Feb 2020 09:44:43 -0800 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: On Wed, Feb 19, 2020 at 3:44 PM Goutham Pacha Ravi wrote: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. 
Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > Thank you all for responding. I've added Victoria and Carlos to https://review.opendev.org/#/admin/groups/213,members. Welcome, let's get back to work :) > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Feb 21 07:52:45 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Fri, 21 Feb 2020 07:52:45 +0000 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: <1582271562.6281.8@est.tech> On Wed, Feb 19, 2020 at 11:43, Eric Fried wrote: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will > be > changing shortly and I will be leaving the community. I have enjoyed > my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. Thank you for all the hard work you did and all the fun we had together. Special thanks for the CAPE! I will miss you a lot. I hope your next challenge will be a good one too. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I > will > need to abdicate my position as Nova PTL mid-cycle. As noted in the > last > meeting [1], I'm calling for a volunteer to take over for the > remainder > of Ussuri. Feel free to approach me privately if you prefer. I think we have to come up with a solution together as a nova community. Cheers, gibi > > Thanks, > efried > > [1] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > From jichenjc at cn.ibm.com Fri Feb 21 09:17:37 2020 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 21 Feb 2020 09:17:37 +0000 Subject: [openstack-discuss][neutron] is neutron-openvswitch-agent needed for network connection on provider network? 
Message-ID: An HTML attachment was scrubbed... URL: From jichenjc at cn.ibm.com Fri Feb 21 10:32:49 2020 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Fri, 21 Feb 2020 10:32:49 +0000 Subject: [neutron] do we need neutron-openvswitch-agent running in order to make VMs spawned with provider network? Message-ID: An HTML attachment was scrubbed... URL: From skaplons at redhat.com Fri Feb 21 13:29:20 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 21 Feb 2020 14:29:20 +0100 Subject: [neutron] networking-bagpipe gate failure In-Reply-To: References: Message-ID: <66C3C22E-68DD-46A6-B596-C40C7F664B5F@redhat.com> Hi, Thx Akihiro for explanation of this issue. IMHO releasing beta version of neutron projects now should be better to solve this issue and thx for sending patch for that already :) > On 20 Feb 2020, at 15:41, Akihiro Motoki wrote: > > Hi, > > networking-bagpipe gate is broken now due to its dependencies. > The situation is complicated so I am summarizing it and exploring the > right solution. > > # what happens now > > Examples of the gate failure are [1] and [2], and the exact failure is > found at [3]. > It fails due to horizon dependency from networking-bgpvpn train > release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master > (horizon==18.0.0). > The neutron team has not released a beta for ussuri, so > requirements.txt tries to install networking-bgpvpn train which has > capping of horizon version. > The capping of horizon in networking-bgpvpn was introduced in [6] and > we cut a release so it started to cause the failure like this. > > We've explored several workarounds to avoid it including specifying > horizon in networking-bagpipe, specify horizon in required-projects in > networking-bagpipe and dropping networking-bgpvpn in requirements.txt > in networking-bagpipe, but all of them do not work. > > # possible solutions > > I am thinking two options. > The one is to cut a beta release in neutron stadium for ussuri. > The other is to uncap horizon in networking-bgpvpn train and release it. > > I believe both work but the first one would be better as it is time to > release beta for Ussuri. > Discussing it in the IRC, we are planning to release beta soon. > (ovn-octavia-provider is also waiting for a beta release of neutron.) > > # Side notes > > Capping dependencies in stable branches is not what we usually do. > Why we don't do this was discussed in the mailing list thread [4] and > it is highlighted in [5]. > > Thanks, > Akihiro Motoki (irc: amotoki) > > [1] https://review.opendev.org/#/c/708829/ > [2] https://review.opendev.org/#/c/703949/ > [3] https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 > [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 > [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html > [6] https://review.opendev.org/#/c/699456/ > — Slawek Kaplonski Senior software engineer Red Hat From ccamacho at redhat.com Fri Feb 21 14:46:03 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Fri, 21 Feb 2020 15:46:03 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 Thanks Kevin for the hard work! On Fri, Feb 21, 2020 at 3:43 PM James Slagle wrote: > +1. Thanks Kevin for your contributions and leadership. 
> > On Wed, Feb 19, 2020 at 5:32 PM Wesley Hayutin > wrote: > > > > Greetings, > > > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > > > First of all very well done, and thank you for all the contributions! > > Secondly, I'd like to propose Kevin to core.. > > > > Please vote by replying to this email. > > Thank you > > > > +1 > > > > -- > -- James Slagle > -- > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Fri Feb 21 15:10:27 2020 From: zhangbailin at inspur.com (=?utf-8?B?QnJpbiBaaGFuZyjlvKDnmb7mnpcp?=) Date: Fri, 21 Feb 2020 15:10:27 +0000 Subject: =?utf-8?B?562U5aSNOiBbbGlzdHMub3BlbnN0YWNrLm9yZ+S7o+WPkV1SZTogW25vdmFd?= =?utf-8?B?W3NmZV0gU3VwcG9ydCByZS1jb25maWd1cmUgZGVsZXRlZF9vbl90ZXJtaW5h?= =?utf-8?Q?tion_in_server?= In-Reply-To: <413aa140-feba-072e-ddb8-74d73f1e825a@fried.cc> References: <4309526b9c3b0e3072054110c3731386@sslemail.net> <413aa140-feba-072e-ddb8-74d73f1e825a@fried.cc> Message-ID: Thanks Eric, gibi, alex_xu: I have already update that SPEC, I know it's later now, thanks anyway, at least it has a solution that everyone can agree on. -----邮件原件----- 发件人: Eric Fried [mailto:openstack at fried.cc] 发送时间: 2020年2月20日 23:20 收件人: openstack-discuss at lists.openstack.org 主题: [lists.openstack.org代发]Re: [nova][sfe] Support re-configure deleted_on_termination in server We discussed this in today's Nova meeting. We agreed to grant the exception if those can be resolved and the spec can get two +2s by EOB tomorrow (Friday 20200221). http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-62 On 2/17/20 8:57 PM, Brin Zhang(张百林) wrote: > Hi, nova: > > We would like to have a spec freeze exception for the spec: > Support re-configure deleted_on_termination in server [1], and it’s > PoC code in [2] > > > > I will attend the nova meeting on February 20 2020 1400 UTCas much as > possible. > > > > [1] https://review.opendev.org/#/c/580336/ > > [2] https://review.opendev.org/#/c/693828/ > > > > brinzhang > From dougal at redhat.com Fri Feb 21 15:13:31 2020 From: dougal at redhat.com (Dougal Matthews) Date: Fri, 21 Feb 2020 15:13:31 +0000 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1! (This time with a reply to the list. Oops) On Wed, 19 Feb 2020 at 22:30, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhangbailin at inspur.com Fri Feb 21 15:16:30 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Fri, 21 Feb 2020 15:16:30 +0000 Subject: =?gb2312?B?tPC4tDogW2xpc3RzLm9wZW5zdGFjay5vcme0+reiXVthbGxdW25vdmFdW3B0?= =?gb2312?Q?l]_Another_one_bites_the_dust?= In-Reply-To: References: <30458f24181ab7b6d7ce73041f18ff49@sslemail.net> Message-ID: <3d8a6288598f4170a10473e22b7f93ae@inspur.com> Eric, thank you for your efforts for Nova and I'm glad to meet you in Nova. We will, as always, make Nova stronger, and the Nova community will become stronger and stronger.. No matter what challenges you accept, I think you will be very smooth and bless you. -----邮件原件----- 发件人: Eric Fried [mailto:openstack at fried.cc] 发送时间: 2020年2月20日 1:44 收件人: OpenStack Discuss 主题: [lists.openstack.org代发][all][nova][ptl] Another one bites the dust Dear OpenStack- Due to circumstances beyond my control, my job responsibilities will be changing shortly and I will be leaving the community. I have enjoyed my time here immensely; I have never loved a job, my colleagues, or the tools of the trade more than I have here. My last official day will be March 31st (though some portion of my remaining time will be vacation -- TBD). Unfortunately this means I will need to abdicate my position as Nova PTL mid-cycle. As noted in the last meeting [1], I'm calling for a volunteer to take over for the remainder of Ussuri. Feel free to approach me privately if you prefer. Thanks, efried [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log. html#l-180 From skaplons at redhat.com Fri Feb 21 15:43:17 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 21 Feb 2020 16:43:17 +0100 Subject: [neutron] do we need neutron-openvswitch-agent running in order to make VMs spawned with provider network? In-Reply-To: References: Message-ID: <2D0FD57A-9F56-4790-8D0D-0F7E8A0517FB@redhat.com> Hi, Stopping neutron-ovs-agent shouldn’t cause any failure on data plane for existing VMs. You should check if open flow rules are there in place and where exactly connectivity is broken on the host. > On 21 Feb 2020, at 11:32, Chen CH Ji wrote: > > We are running provider network with VLAN /FLAT, does the neutron-openvswitch-agent is mandatory to be running in order to make the VM deployed able to connect outside? > > e.g. I have a VM instance-00000027, I login the VM and ping out side, in the mean time, use another session to > systemctl stop neutron-openvswitch-agent.service then wait for a few minutes, the ping will stop, does this is an expected behavior? > I know nova-compute is not running won't affect VM functionality, but does neutron openvswitch agent is needed or not? Thanks a lot > > [root at kvm02 ~]# virsh list > Id Name State > ----------------------------------- > 4 instance-00000027 running > > [root at kvm02 ~]# virsh console 4 > Connected to domain instance-00000027 > Escape character is ^] > [root at dede ~]# ping > [root at dede ~]# ping 172.16.32.1 > PING 172.16.32.1 (172.16.32.1) 56(84) bytes of data. 
> 64 bytes from 172.16.32.1: icmp_seq=1 ttl=64 time=19.8 ms > 64 bytes from 172.16.32.1: icmp_seq=2 ttl=64 time=0.667 ms > 64 bytes from 172.16.32.1: icmp_seq=3 ttl=64 time=0.774 ms > > Ji Chen > z Infrastructure as a Service architect > Phone: 10-82451493 > E-mail: jichenjc at cn.ibm.com > — Slawek Kaplonski Senior software engineer Red Hat From sean.mcginnis at gmx.com Fri Feb 21 17:23:36 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 21 Feb 2020 11:23:36 -0600 Subject: [all] Proposed release schedule for Victoria Message-ID: Hello everyone, Looking for feedback and increasing awareness for a proposed release schedule for Victoria: https://review.opendev.org/#/c/708471/ Mainly, looking for any feedback on any conflicts with major holidays, external factors, and other things that could potentially be an issue for this schedule. Until the logs are culled, here is a convenient link to see what the schedule looks like: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_fd4/708471/2/check/openstack-tox-docs/fd4df14/docs/victoria/schedule.html Here are the key dates for the schedule: May 13: Ussuri release June 8-12: OpenDev+PTG June 18: Victoria milestone 1 July 30: Victoria milestone 2 Sept 3: Final non-client lib releases Sept 10: Victoria milestone 3, feature freeze, final lib releases Sept 24: RC1 deadline Oct 8: Final RCs Oct 14: Victoria release Oct 19-23: Open Infrastructure Summit Part of the reason for the slightly shorter schedule was to wrap up prior to the Summit. Another version was looked at to extend past the Summit, but given the various deadlines and crunch that happens around that time, that ended up needed to be extended too far out to avoid conflicts. Please comment on the patch, or reply here, if it looks like there would be any major issues with this release schedule. Thanks! Sean From ralonsoh at redhat.com Fri Feb 21 17:23:54 2020 From: ralonsoh at redhat.com (Rodolfo Alonso) Date: Fri, 21 Feb 2020 17:23:54 +0000 Subject: [neutron] Network segment ranges feature Message-ID: Hello Neutrinos: First of all, a reference: https://bugs.launchpad.net/neutron/+bug/1863423 I detected some problems with this feature. The service plugin, when enabled and according to the spec [1], should segregate the segmentation range in several ranges. Each one (or many) of those ranges could be assigned to a project. When a network is created in a specific project, should receive a segmentation ID from the project network segment ranges. In case of not having any range assigned, a segmentation ID from a default range will be provided. How the current implementation actually works: When a driver is loaded (VLAN, VXLAN, GRE or Geneve), it creates a default network segment range, based on the static configuration provided in the plugin ("network_vlan_ranges", "vni_ranges", etc). Then the admin can create project specific network segment ranges (remember: always per driver type). Those project specific ranges cannot overlap but the project specific ranges CAN overlap the default one. A valid state could be: VLAN: - Default: 100:1000 - Project1: 100:200 - Project2: 200:300 When assigning segmentation IDs to new networks, the driver will query [2]: - The existing ranges per project. - The default range (always present). The result is that, if the project network segment range runs out of segment IDs, it will retrieve those ones from the default range. That will lead, eventually, to a clash between ranges. 
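To make the clash concrete, here is a toy Python sketch (illustrative only, not the real allocation query in [2]; the names are made up and the numbers are just the VLAN example above, assuming "first free ID wins"):

    default_range = set(range(100, 1001))                  # Default: 100:1000
    project_ranges = {'project1': set(range(100, 201)),    # Project1: 100:200
                      'project2': set(range(200, 301))}    # Project2: 200:300
    allocated = set()

    def allocate(project):
        # current behaviour: the project range and the whole default range
        # are candidates at the same time, so the fallback is the full
        # default range, not a project-safe subset of it
        candidates = (project_ranges.get(project, set()) | default_range) - allocated
        seg_id = min(candidates)
        allocated.add(seg_id)
        return seg_id

    for _ in range(101):            # exhaust 100-200, Project1's own range
        allocate('project1')
    print(allocate('project1'))     # -> 201, an ID inside Project2's range

In other words: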
Project 1 will be able to allocate a network with a segmentation ID from Project 2 range. E.g.: [3] The problem is how the queries are done: [4] --> select one of each commented line to reproduce "query_project_id" and "query_shared" in [2]. Now we know the feature is not properly working, my question is what the proper behaviour should be. Some alternatives: 1) If a project has one or more network segment ranges, the segmentation ID should be retrieved ONLY from those ranges. If a project does not have any range, then it will retrieve from the shared pool BUT NEVER selecting a segment ID belonging to other project ID (ufff this query is going to be complex). --> IMO, the best solution. 2) If a project has one or more network segment ranges, the segmentation ID should be retrieved first from those ranges and then from the shared one, BUT NEVER selecting a segment ID belonging to other project ID. Same for range-less projects. --> IMO, if the administrator has assigned a range for a project, the project should ONLY use this pool. This is not a good solution. Can I have your feedback? Regards. [1] https://specs.openstack.org/openstack/neutron-specs/specs/stein/network-segment-range-management.html [2]https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L89-L121 [3]http://paste.openstack.org/show/789872/ [4]http://paste.openstack.org/show/789873/ From emilien at redhat.com Fri Feb 21 17:28:16 2020 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 21 Feb 2020 12:28:16 -0500 Subject: [tripleo] Turning off Paunch by default Message-ID: Hi folks, In the long-term effort of simplification and convergence to Ansible, the replacement of Paunch has passed a big milestone where it has proven to be stable enough to be the default. A lot of testing has happened, including updates and upgrades; all positive so far. The role itself is documented here: https://docs.openstack.org/tripleo-ansible/latest/roles/role-tripleo-container-manage.html It has functional testing (Molecule), testing things like container config updates, idempotency, and many more use cases. You can take a look: https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_container_manage/molecule/default/playbook.yml The role itself is using custom filters; which are all unit tested. Some modules were created as well. More testing has to happen on these modules but they have proven to work and even improve the operator experience to cover more scenarios and improve the logging and debugging process. The role doesn't support Docker, and won't at this point; it doesn't fit with our roadmap. With that patch, any new deployment (not using Docker) will now use the new role and not Paunch anymore: https://review.opendev.org/#/c/700738/ Which means: standalone, undercloud, overcloud. If you already have a running deployment, you can either run a minor update which will update THT and roll your cloud; or you can also set EnablePaunch: False and the containers will be restarted with the new config. If for some reason, you need to disable it, here's how: - undercloud.conf: undercloud_enable_paunch = false - standalone/overcloud: EnablePaunch: False It is not supported to roll back to a Paunch deployment once your containers are managed by Ansible; if you try it, an error will raise and the deployment/update/upgrade will stop. If you find any problem, bug or have any feedback for improvement, please let me know and we'll make it better. 
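For reference, the opt-out described a few paragraphs above would look roughly like this on disk (a sketch only: the EnablePaunch parameter and the undercloud_enable_paunch option come from this message, while the [DEFAULT] section and parameter_defaults layout are the usual TripleO conventions and the environment file name is made up):

    # undercloud.conf
    [DEFAULT]
    undercloud_enable_paunch = false

    # disable-paunch-env.yaml, passed to the overcloud/standalone deploy with -e
    parameter_defaults:
      EnablePaunch: False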
During the following weeks, we'll watch for performances and see if it degraded or if it's better. Paunch's replacement shouldn't make it faster because Ansible has a lot of overhead; however we want to make sure that it doesn't cause issues at scale. Which is why we're going to observe that now and do more testing at scale now it's the default. I want to thank all the reviewers involved in that effort, and specially Sagi, Kevin and Alex for their great feedback. Thanks, -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpawlik at redhat.com Fri Feb 21 17:40:46 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Fri, 21 Feb 2020 18:40:46 +0100 Subject: [tripleo] RDO image server migration In-Reply-To: References: Message-ID: Hello, Migration of images.rdoproject.org has been finished. However, if there are any problems, do not hesitate to contact me. Regards, Daniel On Mon, Feb 17, 2020 at 4:39 PM Daniel Pawlik wrote: > Hello, > > Todays migration of images server to the new cloud provider was not > finished. > We are planning to continue tomorrow (18th Feb), on 10 AM UTC. > > What was done today: > - moved rhel-8 build base image to new image server > > What will be done tomorrow: > - change DNS record > - disable upload images to old host > - sync old images (if some are available) > > Migration should be transparent to the end user. However, you have to keep > in mind the unforeseen events that may occur. > Write access to the old server will be disabled and until DNS propagation > is not done, you could have read-only access. > > > If you have any doubts or concerns, please do not hesitate to contact: > - Daniel Pawlik > - Javier Pena > > Regards, > Daniel > -- -- Regards, Daniel Pawlik -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Fri Feb 21 20:16:15 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Fri, 21 Feb 2020 15:16:15 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: Neil, Would it be possible to grab some of your time for a quick conference call around this issue before anything is changed? David Comay and I should be available to talk at your convenience some time next week? Chris On Wed, Feb 12, 2020 at 1:12 PM Neil Jerram wrote: > On Wed, Feb 12, 2020 at 4:52 PM David Comay wrote: > >> >> >>>> My primary concern which isn't really governance would be around making >>>> sure the components in `networking-calico` are kept in-sync with the parent >>>> classes it inherits from Neutron itself. Is there a plan to keep these >>>> in-sync together going forward? >>>> >>> >>> Thanks for this question. I think the answer is that it will be a >>> planned effort, from now on, for us to support new OpenStack versions. >>> From Kilo through to Rocky we have aimed (and managed, so far as I know) to >>> maintain a unified networking-calico codebase that works with all of those >>> versions. However our code does not support Python 3, and OpenStack master >>> now requires Python 3, so we have to invest work in order to have even the >>> possibility of working with Train and later. More generally, it has been >>> frustrating, over the last 2 years or so, to track OpenStack master as the >>> CI requires, because breaking changes (in other OpenStack code) are made >>> frequently and we get hit by them when trying to fix or enhance something >>> (typically unrelated) in networking-calico. 
>>> >> >> I don't know the history here around `calico-dhcp-agent` but has there >> been previous efforts to propose integrating the changes made to it into >> `neutron-dhcp-agent`? It seems the best solution would be to make the >> functionality provided by the former into the latter rather than relying on >> parent classes from the former. I suspect there are details here on why >> that might be difficult but it seems solving that would be helpful in the >> long-term. >> > > No efforts that I know of. The difference is that calico-dhcp-agent is > driven by information in the Calico etcd datastore, where > neutron-dhcp-agent is driven via a message queue from the Neutron server. > I think it has improved since, but when we originated calico-dhcp-agent a > few years ago, the message queue wasn't scaling very well to hundreds of > nodes. We can certainly keep reintegrating in mind as a possibility. > > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Fri Feb 21 20:22:20 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 21 Feb 2020 21:22:20 +0100 Subject: [all][dev] reno linting anyone? Message-ID: Hello fellow OpenStackers! I wonder whether any project applies any linting to release notes? How do you maintain quality of release notes? -yoctozepto From nate.johnston at redhat.com Fri Feb 21 20:41:31 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Fri, 21 Feb 2020 15:41:31 -0500 Subject: [neutron] Network segment ranges feature In-Reply-To: References: Message-ID: <20200221204131.wifjhcuk5wkvjgoi@firewall> On Fri, Feb 21, 2020 at 05:23:54PM +0000, Rodolfo Alonso wrote: > Hello Neutrinos: > > First of all, a reference: https://bugs.launchpad.net/neutron/+bug/1863423 > > I detected some problems with this feature. The service plugin, when enabled and according to the > spec [1], should segregate the segmentation range in several ranges. Each one (or many) of those > ranges could be assigned to a project. When a network is created in a specific project, should > receive a segmentation ID from the project network segment ranges. In case of not having any range > assigned, a segmentation ID from a default range will be provided. > > How the current implementation actually works: When a driver is loaded (VLAN, VXLAN, GRE or Geneve), > it creates a default network segment range, based on the static configuration provided in the plugin > ("network_vlan_ranges", "vni_ranges", etc). Then the admin can create project specific network > segment ranges (remember: always per driver type). Those project specific ranges cannot overlap but > the project specific ranges CAN overlap the default one. A valid state could be: > > VLAN: > - Default: 100:1000 > - Project1: 100:200 > - Project2: 200:300 > > When assigning segmentation IDs to new networks, the driver will query [2]: > - The existing ranges per project. > - The default range (always present). > > The result is that, if the project network segment range runs out of segment IDs, it will retrieve > those ones from the default range. That will lead, eventually, to a clash between ranges. Project 1 > will be able to allocate a network with a segmentation ID from Project 2 range. E.g.: [3] > > The problem is how the queries are done: [4] --> select one of each commented line to reproduce > "query_project_id" and "query_shared" in [2]. 
> > Now we know the feature is not properly working, my question is what the proper behaviour should be. > Some alternatives: > > 1) If a project has one or more network segment ranges, the segmentation ID should be retrieved ONLY > from those ranges. If a project does not have any range, then it will retrieve from the shared pool > BUT NEVER selecting a segment ID belonging to other project ID (ufff this query is going to be > complex). --> IMO, the best solution. I like this. I would say that we should maintain a state of all parts of the range that have already been assigned. We load that into memory at process start and then we compare against it before we allow any other range requests to be implemented. My inclination would be to, at start, load all ranges up into a bit vector array that indicates what sections of the default range are in use. Also comvert the request into a bit vector array. - Assuming 1 is available and 0 is allocated - If there are any zero bits in "defaultrange | rangerequest" then you are allocating a used range - Adding a range as marked off in the bit vector is as simple as "defaultrange & rangerequest" So I think this will not be difficult to implement, and can be done in a way that avoids nasty SQL. Nate > 2) If a project has one or more network segment ranges, the segmentation ID should be retrieved > first from those ranges and then from the shared one, BUT NEVER selecting a segment ID belonging to > other project ID. Same for range-less projects. --> IMO, if the administrator has assigned a range > for a project, the project should ONLY use this pool. This is not a good solution. > > Can I have your feedback? > > Regards. > > > [1] > https://specs.openstack.org/openstack/neutron-specs/specs/stein/network-segment-range-management.html > [2]https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L89-L121 > [3]http://paste.openstack.org/show/789872/ > [4]http://paste.openstack.org/show/789873/ > > > > > From smooney at redhat.com Fri Feb 21 20:58:58 2020 From: smooney at redhat.com (Sean Mooney) Date: Fri, 21 Feb 2020 20:58:58 +0000 Subject: [all][dev] reno linting anyone? In-Reply-To: References: Message-ID: On Fri, 2020-02-21 at 21:22 +0100, Radosław Piliszek wrote: > Hello fellow OpenStackers! > > I wonder whether any project applies any linting to release notes? > How do you maintain quality of release notes? we build the release notes with sphinx and execute the command with warnigns treated as error in nova and other poject. so we are using sphinx as our lintier. quality is maintianed through code review not through tools. tools just take some of the burden away for syntactic checks. style, tone acurracy are all maintianed by humans. > > -yoctozepto > From Moiz.Mohammed at charter.com Fri Feb 21 20:59:41 2020 From: Moiz.Mohammed at charter.com (Mohammed, Moiz) Date: Fri, 21 Feb 2020 20:59:41 +0000 Subject: [kolla]: Openstack kolla Message-ID: Hello, I am installing Openstack Kolla on CentOs 7 with multimode configuration and facing some issues during pre-checks.I am using openstack stein release and master built. Command: ./kolla-ansible -i ../../multinode prechecks Error: TASK [haproxy : Checking if kolla_internal_vip_address is in the same network as api_interface on all nodes] ************************************************************************************************** fatal: [moiz-kolla-controller-01]: FAILED! 
=> {"changed": false, "cmd": ["ip", "-o", "addr", "show", "dev", "eth0"], "delta": "0:00:00.002984", "end": "2020-02-21 18:11:42.365444", "failed_when_result": true, "rc": 0, "start": "2020-02-21 18:11:42.362460", "stderr": "", "stderr_lines": [], "stdout": "2: eth0 inet 192.168.130.203/24 brd 192.168.130.255 scope global noprefixroute dynamic eth0\\ valid_lft 81807sec preferred_lft 81807sec\n2: eth0 inet6 fe80::f816:3eff:fe7f:4d2/64 scope link \\ valid_lft forever preferred_lft forever", "stdout_lines": ["2: eth0 inet 192.168.130.203/24 brd 192.168.130.255 scope global noprefixroute dynamic eth0\\ valid_lft 81807sec preferred_lft 81807sec", "2: eth0 inet6 fe80::f816:3eff:fe7f:4d2/64 scope link \\ valid_lft forever preferred_lft forever"]} [cid:image003.png at 01D5E8CF.EC6EB410] Moiz Mohammed | Systems Engineer - NSM Compute (704)-378-2934 O | (415) 866-3183 M | Oncall Phone: 1-866-577-0007 Ext 802 7815 Cresent Executive Dr, Floor#2 | Charlotte, NC 28217 [cid:image002.png at 01D3AA4F.EFCE7660] E-MAIL CONFIDENTIALITY NOTICE: The contents of this e-mail message and any attachments are intended solely for the addressee(s) and may contain confidential and/or legally privileged information. If you are not the intended recipient of this message or if this message has been addressed to you in error, please immediately alert the sender by reply e-mail and then delete this message and any attachments. If you are not the intended recipient, you are notified that any use, dissemination, distribution, copying, or storage of this message or any attachment is strictly prohibited. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 21842 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 8747 bytes Desc: image004.png URL: From wilkers.steve at gmail.com Fri Feb 21 23:30:37 2020 From: wilkers.steve at gmail.com (Steve Wilkerson) Date: Fri, 21 Feb 2020 17:30:37 -0600 Subject: [openstack-helm][neutron] neutron-dhcp-agent stuck in Init state In-Reply-To: References: Message-ID: You’ll need to deploy the nova chart, as the neutron dhcp agent has a dependency on the nova compute service. For reference, see: https://github.com/openstack/openstack-helm/blob/master/neutron/values.yaml#L227 In the future, you can get the logs from the init container to provide more insight into what’s preventing the containers from starting. Cheers, srwilkers On Fri, Feb 21, 2020 at 8:46 AM Roman Gorshunov wrote: > Hello Sanjay, > > Do you have logs and detailed info for the neutron-dhcp-agent-default pod > and possibly containers inside it which are stuck in Init phase? > neutron-server and rabbitmq seem to be running fine. > > Here are some docs and code you could get hints from: > - > https://docs.openstack.org/openstack-helm/latest/devref/networking.html#neutron-dhcp-agent > - > https://github.com/openstack/openstack-helm/blob/master/neutron/templates/daemonset-dhcp-agent.yaml > - > https://github.com/openstack/openstack-helm/blob/master/neutron/templates/bin/_neutron-dhcp-agent-init.sh.tpl > - > https://github.com/openstack/openstack-helm/blob/master/neutron/templates/bin/_neutron-dhcp-agent.sh.tpl > > Best regards, > — > Roman Gorshunov > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pramchan at yahoo.com Sat Feb 22 10:04:06 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Sat, 22 Feb 2020 10:04:06 +0000 (UTC) Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> Message-ID: <2130957943.152281.1582365846726@mail.yahoo.com> Hi all, I am curious to find out whether any distributions or products based on OpenStack Train or Ussuri are seeking the latest certifications based on 2019.02. Similarly, is any hardware driver or software application seeking the OpenStack Compatible logo? Finally, does anyone think that Open Infra distros like Airship or StarlingX should promote "Open Infra Airship powered" or "Open Infra StarlingX powered" as a new way to promote the ecosystem surrounding them, similar to OpenStack Compatible drivers and software? Would Argo, Kustomize, Metal3.io or Ironic then qualify as Open Infra Airship compatible? If so, how can Tempest help in testing the above? Refer to the marketplace below to see how distros and products leverage the OpenStack logo and branding programs. https://www.openstack.org/marketplace/distros/ Discussion and feedback are welcome. A healthy debate on how k8s modules used in Open Infra can be certified would be a good start. Thanks, Prakash Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Feb 22 14:21:54 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 22 Feb 2020 14:21:54 +0000 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <2130957943.152281.1582365846726@mail.yahoo.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> Message-ID: <20200222142154.b5y5rbi7ljb32jxm@yuggoth.org> On 2020-02-22 10:04:06 +0000 (+0000), prakash RAMCHANDRAN wrote: [...] > does anyone think that Open Infra Distro like Airship or StarlingX > should promote Open Infra Airship or Open Infra  StarlingX powered > as a new way to promote eco system surrounding them similar to > OpenStack compatible drivers and software. Will then Argo, > customize, Metal3.io or Ironic be qualified as Open Infra Airship > compatible? Those are probably questions for the Airship and StarlingX communities, so I don't know how much input the OpenStack community is going to have (or should expect to have) on those topics. > If so how tempest can help in testing the above comments? [...] Tempest is a QA tool for validating OpenStack APIs, so it could in theory be used to test any OpenStack services deployed within/using Airship or StarlingX. The reason the OpenStack logo programs rely on Tempest is because it's what the OpenStack community has used for testing OpenStack services. If there are interoperability problems between different distributions or deployments of Airship and StarlingX then it would make sense to test them with whatever tools those projects are using for testing their software, like the OpenStack logo programs are doing with Tempest. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ralonsoh at redhat.com Sat Feb 22 20:06:47 2020 From: ralonsoh at redhat.com (Rodolfo Alonso) Date: Sat, 22 Feb 2020 20:06:47 +0000 Subject: [neutron] Network segment ranges feature In-Reply-To: <20200221204131.wifjhcuk5wkvjgoi@firewall> References: <20200221204131.wifjhcuk5wkvjgoi@firewall> Message-ID: <5de85c1b3403d03d4041e808b42dd0993c1ee1eb.camel@redhat.com> Hello On Fri, 2020-02-21 at 15:41 -0500, Nate Johnston wrote: > On Fri, Feb 21, 2020 at 05:23:54PM +0000, Rodolfo Alonso wrote: > > Hello Neutrinos: > > > > First of all, a reference: https://bugs.launchpad.net/neutron/+bug/1863423 > > > > I detected some problems with this feature. The service plugin, when enabled and according to > > the > > spec [1], should segregate the segmentation range in several ranges. Each one (or many) of those > > ranges could be assigned to a project. When a network is created in a specific project, should > > receive a segmentation ID from the project network segment ranges. In case of not having any > > range > > assigned, a segmentation ID from a default range will be provided. > > > > How the current implementation actually works: When a driver is loaded (VLAN, VXLAN, GRE or > > Geneve), > > it creates a default network segment range, based on the static configuration provided in the > > plugin > > ("network_vlan_ranges", "vni_ranges", etc). Then the admin can create project specific network > > segment ranges (remember: always per driver type). Those project specific ranges cannot overlap > > but > > the project specific ranges CAN overlap the default one. A valid state could be: > > > > VLAN: > > - Default: 100:1000 > > - Project1: 100:200 > > - Project2: 200:300 > > > > When assigning segmentation IDs to new networks, the driver will query [2]: > > - The existing ranges per project. > > - The default range (always present). > > > > The result is that, if the project network segment range runs out of segment IDs, it will > > retrieve > > those ones from the default range. That will lead, eventually, to a clash between ranges. > > Project 1 > > will be able to allocate a network with a segmentation ID from Project 2 range. E.g.: [3] > > > > The problem is how the queries are done: [4] --> select one of each commented line to reproduce > > "query_project_id" and "query_shared" in [2]. > > > > Now we know the feature is not properly working, my question is what the proper behaviour should > > be. > > Some alternatives: > > > > 1) If a project has one or more network segment ranges, the segmentation ID should be retrieved > > ONLY > > from those ranges. If a project does not have any range, then it will retrieve from the shared > > pool > > BUT NEVER selecting a segment ID belonging to other project ID (ufff this query is going to be > > complex). --> IMO, the best solution. > > I like this. I would say that we should maintain a state of all parts of the > range that have already been assigned. We load that into memory at process > start and then we compare against it before we allow any other range requests to > be implemented. > > My inclination would be to, at start, load all ranges up into a bit vector array > that indicates what sections of the default range are in use. Also comvert the > request into a bit vector array. 
> > - Assuming 1 is available and 0 is allocated > - If there are any zero bits in "defaultrange | rangerequest" then you are > allocating a used range > - Adding a range as marked off in the bit vector is as simple as "defaultrange & > rangerequest" > > So I think this will not be difficult to implement, and can be done in a way > that avoids nasty SQL. > > Nate > That's a good idea and originally I designed a solution based on a cached mapping. But then I realized that, every time we use some kind of shortcut to the DB mapping any resource in a controller, then we have plenty of problems with HA. The DB is, IMO, the best resource for sync resources in HA. There is a way to do this in two steps: - Retrieve the ranges assigned to other projects. Those ranges, by definition, do no overlap. Then join if possible those ranges to have the minimum set of gaps. E.g.: 0-100,120-500,720-1000. - The second DB call will be very similar to the current one, but using those ranges in the opposite way, to filter out those segment IDs within them but belonging to the default one. Regardless of the implementation, what I would like is to have is an idea of how the feature should work. IMO, (1) is the desirable way: if a project has segment ranges, use them (and only those segments). If not, use the default range but never a segment ID belonging to another project range. > > > 2) If a project has one or more network segment ranges, the segmentation ID should be retrieved > > first from those ranges and then from the shared one, BUT NEVER selecting a segment ID belonging > > to > > other project ID. Same for range-less projects. --> IMO, if the administrator has assigned a > > range > > for a project, the project should ONLY use this pool. This is not a good solution. > > > > Can I have your feedback? > > > > Regards. > > > > > > [1] > > https://specs.openstack.org/openstack/neutron-specs/specs/stein/network-segment-range-management.html > > [2] > > https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L89-L121 > > [3]http://paste.openstack.org/show/789872/ > > [4]http://paste.openstack.org/show/789873/ > > > > > > > > > > From kevin at cloudnull.com Sat Feb 22 22:54:25 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Sat, 22 Feb 2020 16:54:25 -0600 Subject: [tripleo] Turning off Paunch by default In-Reply-To: References: Message-ID: Great work on the role and all of the integrations Emilien. Lots of effort went into this simplification and It is awesome to see this become the default. On Fri, Feb 21, 2020 at 11:32 Emilien Macchi wrote: > Hi folks, > > In the long-term effort of simplification and convergence to Ansible, the > replacement of Paunch has passed a big milestone where it has proven to be > stable enough to be the default. > A lot of testing has happened, including updates and upgrades; all > positive so far. > The role itself is documented here: > > https://docs.openstack.org/tripleo-ansible/latest/roles/role-tripleo-container-manage.html > It has functional testing (Molecule), testing things like container config > updates, idempotency, and many more use cases. > You can take a look: > > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_container_manage/molecule/default/playbook.yml > The role itself is using custom filters; which are all unit tested. Some > modules were created as well. 
More testing has to happen on these modules > but they have proven to work and even improve the operator experience to > cover more scenarios and improve the logging and debugging process. > The role doesn't support Docker, and won't at this point; it doesn't fit > with our roadmap. > > With that patch, any new deployment (not using Docker) will now use the > new role and not Paunch anymore: > https://review.opendev.org/#/c/700738/ > Which means: standalone, undercloud, overcloud. > > If you already have a running deployment, you can either run a minor > update which will update THT and roll your cloud; or you can also set > EnablePaunch: False and the containers will be restarted with the new > config. > > If for some reason, you need to disable it, here's how: > - undercloud.conf: undercloud_enable_paunch = false > - standalone/overcloud: EnablePaunch: False > > It is not supported to roll back to a Paunch deployment once your > containers are managed by Ansible; if you try it, an error will raise and > the deployment/update/upgrade will stop. > > > If you find any problem, bug or have any feedback for improvement, please > let me know and we'll make it better. > During the following weeks, we'll watch for performances and see if it > degraded or if it's better. Paunch's replacement shouldn't make it faster > because Ansible has a lot of overhead; however we want to make sure that it > doesn't cause issues at scale. Which is why we're going to observe that now > and do more testing at scale now it's the default. > > I want to thank all the reviewers involved in that effort, and specially > Sagi, Kevin and Alex for their great feedback. > Thanks, > > -- > Emilien Macchi > -- -- Kevin Carter IRC: Cloudnull -------------- next part -------------- An HTML attachment was scrubbed... URL: From kevin at cloudnull.com Sat Feb 22 23:11:29 2020 From: kevin at cloudnull.com (Carter, Kevin) Date: Sat, 22 Feb 2020 17:11:29 -0600 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: Thank you all. Joining the TripleO team has been an awesome experience, a breadth of fresh air. I’ve learned a lot from everyone in this community over these last few months, which has been fun and exciting, and I’m looking forward to learning more as we go forward together. I appreciate your trust and confidence; I’ll do my best not to let you down. Thank you again. — Kevin Carter IRC: Cloudnull On Fri, Feb 21, 2020 at 09:17 Dougal Matthews wrote: > +1! > > (This time with a reply to the list. Oops) > > On Wed, 19 Feb 2020 at 22:30, Wesley Hayutin wrote: > >> Greetings, >> >> I'm sure by now you have all seen the contributions from Kevin to the >> tripleo-ansible project, transformation, mistral to ansible etc. In his >> short tenure in TripleO Kevin has accomplished a lot and is the number #3 >> contributor to tripleo for the ussuri cycle! >> >> First of all very well done, and thank you for all the contributions! >> Secondly, I'd like to propose Kevin to core.. >> >> Please vote by replying to this email. >> Thank you >> >> +1 >> > -- -- Kevin Carter IRC: Cloudnull -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Sun Feb 23 01:43:57 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 22 Feb 2020 19:43:57 -0600 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <20200222142154.b5y5rbi7ljb32jxm@yuggoth.org> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> <20200222142154.b5y5rbi7ljb32jxm@yuggoth.org> Message-ID: <1706fb7205c.fac5878231807.1811462628419721521@ghanshyammann.com> ---- On Sat, 22 Feb 2020 08:21:54 -0600 Jeremy Stanley wrote ---- > On 2020-02-22 10:04:06 +0000 (+0000), prakash RAMCHANDRAN wrote: > [...] > > does anyone think that Open Infra Distro like Airship or StarlingX > > should promote Open Infra Airship or Open Infra StarlingX powered > > as a new way to promote eco system surrounding them similar to > > OpenStack compatible drivers and software. Will then Argo, > > customize, Metal3.io or Ironic be qualified as Open Infra Airship > > compatible? > > Those are probably questions for the Airship and StarlingX > communities, so I don't know how much input the OpenStack community > is going to have (or should expect to have) on those topics. > > > If so how tempest can help in testing the above comments? > [...] > > Tempest is a QA tool for validating OpenStack APIs, so it could in > theory be used to test any OpenStack services deployed within/using > Airship or StarlingX. The reason the OpenStack logo programs rely on > Tempest is because it's what the OpenStack community has used for > testing OpenStack services. If there are interoperability problems > between different distributions or deployments of Airship and > StarlingX then it would make sense to test them with whatever tools > those projects are using for testing their software, like the > OpenStack logo programs are doing with Tempest. Jeremy explained clearly. Tempest framework can be used via Tempest plugins to validate the things beyond OpenStack services via API interaction or some more backend-specific or control/data plan verification. It might need more brainstorming to achieve that. NOTE: Tempest itself can not be extended for their testing, it is more of providing the basic testing framework for OpenStack upstream + distro or Cloud. But the preferred way is to use the existing testing tooling used by Airship or StarlingX. Because the key thing here is long term maintenance of different tooling used for CI/CD and certification program. Using a single tooling for both purposes makes more sense. -gmann > -- > Jeremy Stanley > From emccormick at cirrusseven.com Sun Feb 23 04:21:26 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Sat, 22 Feb 2020 23:21:26 -0500 Subject: [kolla]: Openstack kolla In-Reply-To: References: Message-ID: Hi Moiz, On Fri, Feb 21, 2020, 6:03 PM Mohammed, Moiz wrote: > > Hello, > > I am installing Openstack Kolla on CentOs 7 with multimode configuration > and facing some issues during pre-checks.I am using openstack stein release > and master built. > > Command: ./kolla-ansible -i ../../multinode prechecks > > > Error: > > TASK [haproxy : Checking if kolla_internal_vip_address is in the same > network as api_interface on all nodes] > ************************************************************************************************** > > fatal: [moiz-kolla-controller-01]: FAILED! 
=> {"changed": false, "cmd": > ["ip", "-o", "addr", "show", "dev", "eth0"], "delta": "0:00:00.002984", > "end": "2020-02-21 18:11:42.365444", "failed_when_result": true, "rc": 0, > "start": "2020-02-21 18:11:42.362460", "stderr": "", "stderr_lines": [], > "stdout": "2: eth0 inet 192.168.130.203/24 brd 192.168.130.255 scope > global noprefixroute dynamic eth0\\ valid_lft 81807sec preferred_lft > 81807sec\n2: eth0 inet6 fe80::f816:3eff:fe7f:4d2/64 scope link \\ > valid_lft forever preferred_lft forever", "stdout_lines": ["2: eth0 inet > 192.168.130.203/24 brd 192.168.130.255 scope global noprefixroute dynamic > eth0\\ valid_lft 81807sec preferred_lft 81807sec", "2: eth0 inet6 > fe80::f816:3eff:fe7f:4d2/64 scope link \\ valid_lft forever > preferred_lft forever"]} > This is telling you that the external VIP address you specified in your config is not in the subnet found on the interface specified as the api_interface. It beieves that interface is set to eth0. If you have nodes with different interface layouts, you will want to specify things like api_interface, cluster_interface, etc. in your ansible inventory instead of the global config. I do this always just in case new hardware gets introduced later with different mappings. If you are able to use IRC. I recommend dropping by #openstack-kolla on Freenode for more real-time help if you're still stuck. Cheers, Erik > > > > > > *Moiz Mohammed* | * Systems Engineer – NSM Compute* > > (704)-378-2934 O | (415) 866-3183 M | Oncall Phone: 1-866-577-0007 Ext 802 > > 7815 Cresent Executive Dr, Floor#2 | Charlotte, NC 28217 > > [image: cid:image002.png at 01D3AA4F.EFCE7660] > > > The contents of this e-mail message and > any attachments are intended solely for the > addressee(s) and may contain confidential > and/or legally privileged information. If you > are not the intended recipient of this message > or if this message has been addressed to you > in error, please immediately alert the sender > by reply e-mail and then delete this message > and any attachments. If you are not the > intended recipient, you are notified that > any use, dissemination, distribution, copying, > or storage of this message or any attachment > is strictly prohibited. > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 8747 bytes Desc: not available URL: From donny at fortnebula.com Sun Feb 23 05:38:54 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 23 Feb 2020 00:38:54 -0500 Subject: =?UTF-8?B?UmU6IFtsaXN0cy5vcGVuc3RhY2sub3Jn5Luj5Y+RXVthbGxdW25vdmFdW3B0bF0gQW5vdA==?= =?UTF-8?B?aGVyIG9uZSBiaXRlcyB0aGUgZHVzdA==?= In-Reply-To: <3d8a6288598f4170a10473e22b7f93ae@inspur.com> References: <30458f24181ab7b6d7ce73041f18ff49@sslemail.net> <3d8a6288598f4170a10473e22b7f93ae@inspur.com> Message-ID: On Fri, Feb 21, 2020 at 10:21 AM Brin Zhang(张百林) wrote: > Eric, thank you for your efforts for Nova and I'm glad to meet you in > Nova. We will, as always, make Nova stronger, and the Nova community will > become stronger and stronger.. No matter what challenges you accept, I > think you will be very > smooth and bless you. 
> > -----邮件原件----- > 发件人: Eric Fried [mailto:openstack at fried.cc] > 发送时间: 2020年2月20日 1:44 > 收件人: OpenStack Discuss > 主题: [lists.openstack.org代发][all][nova][ptl] Another one bites the dust > > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time > here immensely; I have never loved a job, my colleagues, or the tools of > the > trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining > time will be vacation -- TBD). Unfortunately this means I will need to > abdicate my position as Nova PTL mid-cycle. As noted in the last meeting > [1], I'm calling for a volunteer to take over for the remainder of Ussuri. > Feel free to approach me privately if you prefer. > > Thanks, > efried > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log. > html#l-180 > > > Nothing makes me sadder than to see you leave Eric. You have been a great leader for this mission critical project. You have been an even better mentor to me. Nobody sees all the private messages where I ask for help from someone who literally has better things to do.... but takes the time to answer anyways. Godspeed good friend, I wish you the best of luck in your new role. There is no doubt in my mind you will bring something awesome to wherever you land. You epitomize my favorite saying "No mission too difficult. No sacrifice too great. Duty First!" PS. I have your phone number, so you can expect the dumb questions to come in there post 31 Mar. -- ~/DonnyD C: 805 814 6800 -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sun Feb 23 05:49:32 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 23 Feb 2020 00:49:32 -0500 Subject: [neutron] Why network performance is extremely bad and linearly related with number of VMs? In-Reply-To: <1ab7f827833846cd9ee1fd3fe93ce407@inspur.com> References: <1ab7f827833846cd9ee1fd3fe93ce407@inspur.com> Message-ID: So I am curious as to what your question is. Are you asking about ovs bridges learning MAC's of other compute nodes or why network performance is affected when you run more than one instance per node. I have not observed this behaviour in my experience. Could you tell us more about the configuration of your deployment? I understand you are currently using linux bridges that are connected to openvswitch bridges? Why not just use ovs? OVS can handle security groups. On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 wrote: > Hi, All > > Anybody has noticed network performance between VMs is extremely bad, it is > basically linearly related with numbers of VMs in same compute node. 
In my > case, if I launch one VM per compute node and run iperf3 tcp and udp, > performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP > packets, it can reach 180000 pps (packets per second), but if I launch two > VMs per compute node (note: they are in the same subnet) and only run pps > test case, that will be decrease to about 90000 pps, if I launch 3 VMs per > compute node, that will be about 50000 pps, I tried to find out the root > cause, other VMs in this subnet (they are in the same compute node as > iperf3 > client) can receive all the packets iperf3 client VM sent out although > destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of > iperf3 server VM in another compute node, by further check, I did find qemu > instances of these VMs have higher CPU utilization and corresponding vhost > kernel threads also also higher CPU utilization, to be importantly, I did > find ovs was broadcasting these packets because all the ovs bridges didn’t > learn this destination MAC. I tried this in Queens and Rocky, the same > issue > is there. By the way, we’re using linux bridge for security group, so VM > tap interface is attached into linux bridge which is connected to br-int by > veth pair. > > Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched > many VMs: > > > recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et > h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, > used:0.000s, flags:SP., > > actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 > > .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 > ,19 > > $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 > $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 > $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 > $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 > > All the bridges can’t learn this MAC. > > My question is why ovs bridges can’t learn MACs of other compute nodes, is > this common issue of all the Openstack versions? Is there any known > existing > way to fix it? Look forward to hearing your insights and solutions, thank > you in advance and have a good day. > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sun Feb 23 06:16:04 2020 From: donny at fortnebula.com (Donny Davis) Date: Sun, 23 Feb 2020 01:16:04 -0500 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: I use local NVME storage for FortNebula. Its very fast, and for "cloudy" things I prefer availability of data to be above the infrastructure layer. I used to use Ceph for all things, but in my experience... if performance is a requirement, local storage is pretty hard to beat. I am in the process of moving the object store to ceph, and all seems to be well in terms of performance using ceph for that use case. On Tue, Feb 18, 2020 at 6:41 AM Alfredo De Luca wrote: > Thanks Burak and Ignazio. > Appreciate it > > > > On Thu, Feb 13, 2020 at 10:19 PM Burak Hoban > wrote: > >> Hi guys, >> >> We use Dell EMC VxFlex OS, which in its current version allows for free >> use and commercial (in version 3.5 a licence is needed, but its perpetual). >> It's similar to Ceph but more geared towards scale and performance etc (it >> use to be called ScaleIO). 
>> >> Other than that, I know of a couple sites using SAN storage, but a lot of >> people just seem to use Ceph. >> >> Cheers, >> >> Burak >> >> ------------------------------ >> >> Message: 2 >> Date: Thu, 13 Feb 2020 18:20:29 +0100 >> From: Ignazio Cassano >> To: Alfredo De Luca >> Cc: openstack-discuss >> Subject: Re: [CINDER] Distributed storage alternatives >> Message-ID: >> < >> CAB7j8cXLQWh5fx-E9AveUEa6OncDwCL6BOGc-Pm2TX4FKwnUKg at mail.gmail.com> >> Content-Type: text/plain; charset="utf-8" >> >> Hello Alfredo, I think best opensource solution is ceph. >> As far as commercial solutions are concerned we are working with network >> appliance (netapp) and emc unity. >> Regards >> Ignazio >> >> Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha >> scritto: >> >> > Hi all. >> > we 'd like to explore storage back end alternatives to CEPH for >> > Openstack >> > >> > I am aware of GlusterFS but what would you recommend for distributed >> > storage like Ceph and specifically for block device provisioning? >> > Of course must be: >> > >> > 1. *Reliable* >> > 2. *Fast* >> > 3. *Capable of good performance over WAN given a good network back >> > end* >> > >> > Both open source and commercial technologies and ideas are welcome. >> > >> > Cheers >> > >> > -- >> > *Alfredo* >> > >> > >> >> _____________________________________________________________________ >> >> The information transmitted in this message and its attachments (if any) >> is intended >> only for the person or entity to which it is addressed. >> The message may contain confidential and/or privileged material. Any >> review, >> retransmission, dissemination or other use of, or taking of any action in >> reliance >> upon this information, by persons or entities other than the intended >> recipient is >> prohibited. >> >> If you have received this in error, please contact the sender and delete >> this e-mail >> and associated material from any computer. >> >> The intended recipient of this e-mail may only use, reproduce, disclose >> or distribute >> the information contained in this e-mail and any attached files, with the >> permission >> of the sender. >> >> This message has been scanned for viruses. >> _____________________________________________________________________ >> > > > -- > *Alfredo* > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From lennyb at mellanox.com Sun Feb 23 06:46:50 2020 From: lennyb at mellanox.com (Lenny Verkhovsky) Date: Sun, 23 Feb 2020 06:46:50 +0000 Subject: [tripleo] RDO image server migration In-Reply-To: References: Message-ID: Hello Daniel, Opening http://images.rdoproject.org/ in the Web Browser shows nothing At this time 6:46 UTC From: Daniel Pawlik Sent: Friday, February 21, 2020 7:41 PM To: openstack-discuss at lists.openstack.org Subject: Re: [tripleo] RDO image server migration Hello, Migration of images.rdoproject.org has been finished. However, if there are any problems, do not hesitate to contact me. Regards, Daniel On Mon, Feb 17, 2020 at 4:39 PM Daniel Pawlik > wrote: Hello, Todays migration of images server to the new cloud provider was not finished. We are planning to continue tomorrow (18th Feb), on 10 AM UTC. What was done today: - moved rhel-8 build base image to new image server What will be done tomorrow: - change DNS record - disable upload images to old host - sync old images (if some are available) Migration should be transparent to the end user. 
However, you have to keep in mind the unforeseen events that may occur. Write access to the old server will be disabled and until DNS propagation is not done, you could have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik > - Javier Pena > Regards, Daniel -- -- Regards, Daniel Pawlik -------------- next part -------------- An HTML attachment was scrubbed... URL: From tdecacqu at redhat.com Sun Feb 23 10:36:45 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Sun, 23 Feb 2020 10:36:45 +0000 Subject: [tripleo] RDO image server migration In-Reply-To: References: Message-ID: <87lfoty4hu.tristanC@fedora> On Sun, Feb 23, 2020 at 06:46 Lenny Verkhovsky wrote: > Hello Daniel, > Opening http://images.rdoproject.org/ in the Web Browser shows nothing > At this time 6:46 UTC > Hello Lenny, That seems unrelated to the migration, I think this server does not serve a top level index.html, see: https://web.archive.org/web/20190721095056/http://images.rdoproject.org:80/ To get the latest builds, use https://images.rdoproject.org/centos7/ instead. Thank you for the report, please let us know if you have any other issues with the server. -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From anlin.kong at gmail.com Sun Feb 23 21:40:10 2020 From: anlin.kong at gmail.com (Lingxian Kong) Date: Mon, 24 Feb 2020 10:40:10 +1300 Subject: [cloud-provider-openstack] [kubernetes] Migrating from in-tree provider to external openstack-cloud-controller-manager Message-ID: For those who still haven't heard of cloud-provider-openstack[1], cloud-provider-openstack is an implementation of Kubernetes Cloud Controller Manager[2], it's a sub-project under Kubernetes, which provides a couple of binaries/services relevant to OpenStack and Kubernetes integration, more documentation could be found here[3]. The demo below shows how to migrate from in-tree openstack cloud provider to external openstack-cloud-controller-manager. The Kubernetes cluster in the demo was set up using kubeadm. https://asciinema.org/a/303399?speed=2 [1]: https://github.com/kubernetes/cloud-provider-openstack [2]: https://kubernetes.io/docs/tasks/administer-cluster/running-cloud-controller/ [3]: https://github.com/kubernetes/cloud-provider-openstack/tree/master/docs - Best regards, Lingxian Kong Catalyst Cloud -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Sun Feb 23 22:12:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sun, 23 Feb 2020 16:12:04 -0600 Subject: [TC] W release naming In-Reply-To: References: Message-ID: <06a444a2-6f30-a739-515e-282e3a6af74b@gmx.com> > As a reminder for everyone, this naming poll is the first that follows > our new process of having the electorate being members of the Technical > Committee. More details can be found in the governance documentation for > release naming: > > https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process > Looks like we are still short of everyone from the TC voting. This is last call for any TC members that have not cast a vote for the W release name to get those votes in before we wrap up the polling period and pass along the top name(s) to the Foundation for legal vetting. 
From satish.txt at gmail.com Sun Feb 23 23:10:02 2020 From: satish.txt at gmail.com (Satish Patel) Date: Sun, 23 Feb 2020 18:10:02 -0500 Subject: [neutron] Why network performance is extremely bad and linearly related with number of VMs? In-Reply-To: References: Message-ID: <52E415A7-AF3E-4798-803C-391123729345@gmail.com> What is max age time in Linux bridge? If it’s zero then it won’t learn Mac and flush arp table. Sent from my iPhone > On Feb 23, 2020, at 12:52 AM, Donny Davis wrote: > >  > So I am curious as to what your question is. Are you asking about ovs bridges learning MAC's of other compute nodes or why network performance is affected when you run more than one instance per node. > > I have not observed this behaviour in my experience. > Could you tell us more about the configuration of your deployment? > I understand you are currently using linux bridges that are connected to openvswitch bridges? Why not just use ovs? OVS can handle security groups. > > > >> On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 wrote: >> Hi, All >> >> Anybody has noticed network performance between VMs is extremely bad, it is >> basically linearly related with numbers of VMs in same compute node. In my >> case, if I launch one VM per compute node and run iperf3 tcp and udp, >> performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP >> packets, it can reach 180000 pps (packets per second), but if I launch two >> VMs per compute node (note: they are in the same subnet) and only run pps >> test case, that will be decrease to about 90000 pps, if I launch 3 VMs per >> compute node, that will be about 50000 pps, I tried to find out the root >> cause, other VMs in this subnet (they are in the same compute node as iperf3 >> client) can receive all the packets iperf3 client VM sent out although >> destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of >> iperf3 server VM in another compute node, by further check, I did find qemu >> instances of these VMs have higher CPU utilization and corresponding vhost >> kernel threads also also higher CPU utilization, to be importantly, I did >> find ovs was broadcasting these packets because all the ovs bridges didn’t >> learn this destination MAC. I tried this in Queens and Rocky, the same issue >> is there. By the way, we’re using linux bridge for security group, so VM >> tap interface is attached into linux bridge which is connected to br-int by >> veth pair. >> >> Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched >> many VMs: >> >> recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et >> h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, >> used:0.000s, flags:SP., >> actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 >> .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 >> ,19 >> >> $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 >> $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 >> $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 >> $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 >> >> All the bridges can’t learn this MAC. >> >> My question is why ovs bridges can’t learn MACs of other compute nodes, is >> this common issue of all the Openstack versions? Is there any known existing >> way to fix it? Look forward to hearing your insights and solutions, thank >> you in advance and have a good day. 
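A quick way to narrow this down is to compare what each bridge has actually learned against its configured ageing time. The commands below are only a sketch based on the bridge names shown above and the standard OVS mac-aging-time setting (default 300 seconds); adjust them to your own layout:

$ sudo ovs-appctl fdb/show br-int
$ sudo ovs-appctl fdb/show br-tun
$ sudo ovs-vsctl list bridge br-int | grep other_config      # shows mac-aging-time if it has been overridden
$ sudo ovs-vsctl set bridge br-int other_config:mac-aging-time=300   # raise it if entries age out too quickly

If fdb/show stays empty even while traffic is flowing, the MACs are never being learned at all rather than being aged out too fast.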
> > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: From changzhi at cn.ibm.com Mon Feb 24 00:41:23 2020 From: changzhi at cn.ibm.com (Zhi CZ Chang) Date: Mon, 24 Feb 2020 00:41:23 +0000 Subject: [neutron] do we need neutron-openvswitch-agent running in order to make VMs spawned with provider network? In-Reply-To: References: Message-ID: An HTML attachment was scrubbed... URL: From maciej.szwed at intel.com Mon Feb 24 07:37:41 2020 From: maciej.szwed at intel.com (Szwed, Maciej) Date: Mon, 24 Feb 2020 07:37:41 +0000 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Hi Alfredo, Please take a look at SPDK: https://docs.openstack.org/cinder/latest/configuration/block-storage/drivers/spdk-volume-driver.html https://spdk.io/ I can provide you with more information if you feel that this is the storage type you would like to use. Regards, Maciej From: Alfredo De Luca Sent: Thursday, February 13, 2020 1:41 PM To: openstack-discuss Subject: [CINDER] Distributed storage alternatives Hi all. we 'd like to explore storage back end alternatives to CEPH for Openstack I am aware of GlusterFS but what would you recommend for distributed storage like Ceph and specifically for block device provisioning? Of course must be: 1. Reliable 2. Fast 3. Capable of good performance over WAN given a good network back end Both open source and commercial technologies and ideas are welcome. Cheers -- Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony.pearce at cinglevue.com Mon Feb 24 08:49:33 2020 From: tony.pearce at cinglevue.com (Tony Pearce) Date: Mon, 24 Feb 2020 16:49:33 +0800 Subject: Cinder / Nova - Select cinder backend based on nova availability zone? Message-ID: Apologies in advance if this seems trivial but I am looking for some direction on this and I may have found a bug while testing also. Some background - I have 2 physical hosts which I am testing with (host1, host2) and I have 2 separate cinder backends (backend1, backend2). Backend1 can only be utilised by host1. Same for backend2 - it can only be utilised by host2. So they are paired together like: host1:backend1 host2:backend2 So I wanted to select a Cinder storage back-end based on nova availability zone and to do this when creating an instance through horizon (not creating a volume directly). Also I wanted to avoid the use of metadata input on each instance create or by using metadata from images (such as cinder_img_volume_type) [2] . Because I can foresee a necessity to then have a number of images which reference each AZ or backend individually. - Is it possible to select a backend based on nova AZ? If so, could anyone share any resources to me that could help me understand how to achieve it? Because I failed at achieving the above, I then decided to use one way which had worked for me in the past, which was to use the image metadata "cinder_img_volume_type". However I find that this is not working. The “default” volume type is selected (if cinder.conf has it) or if no default, then `__DEFAULT__` is being selected. The link at [2] states that first, a volume type is used based on the volume type selected and if not chosen/set then the 2nd method is "cinder_img_volume_type" from the image metadata and then the 3rd and final is the default from cinder.conf. I have tested with fresh deployment using Kayobe as well as RDO’s packstack. 
Openstack version is *Train* Steps to reproduce: 1. Install packstack 2. Update cinder.conf with enabled_backends and the [backend] 3. Add the volume type to reference the backend (for reference, I call this volume type `number-1`) 4. Upload an image and add metadata `cinder_img_volume_type` and the name as mentioned in step 3: number-1 5. Try and create an instance using horizon. Source = image and create new volume 6. Result = volume type / backend as chosen in the image metadata is not used and instance goes into error status. After fresh-deploying the RDO Packstack, I enabled debug logs and tested again. In the cinder-api.log I see “"volume_type": null,” and then the next debug log immediately after logged as “Create volume request body:” has “volume_type': None”. I was searching for a list of the supported image metadata, in case it had changed but the pages seem empty one rocky/stein/train [3] or not yet updated. Selecting backend based on nova AZ:: I was searching how to achieve this and I came across this video on the subject of AZs [1]. Although it seems only in the context of creating volumes (not with creating instances with volume from an image, for example). I have tried creating a host aggregate in nova, with AZ name `host1az`. I've also created a backend in Cinder (cinder.conf) with `backend_availability_zone = host1az`. But this does not appear to achieve the desired result, either and the cinder api logs are showing “"availability_zone": null” during the volume create part of the launch instance from Horizon. I also tried setting RESKEY [3] in the volume type, but again similar situation seen; although I dont think this option is the correct context for what I am attempting. Could anyone please nudge me in the right direction on this? Any pointers appreciated at this point. Thanks in advance. References: [1] https://www.youtube.com/watch?v=a5332_Ew9JA [2] https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html [3] https://docs.openstack.org/cinder/train/contributor/api/cinder.api.schemas.volume_image_metadata.html [4] https://docs.openstack.org/cinder/rocky/admin/blockstorage-availability-zone-type.html Regards, Tony Pearce -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Feb 24 10:00:48 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 24 Feb 2020 11:00:48 +0100 Subject: [openstack-discuss][neutron] is neutron-openvswitch-agent needed for network connection on provider network? In-Reply-To: References: Message-ID: <60B22D75-C2F4-45E5-AC2E-B87800AB4772@redhat.com> Hi, It isn’t expected behaviour that data plane is broken just because neutron-ovs-agent is stopped. Agent should configure flows/iptables/ports and it should works even if agent is down. Can You check exactly what is causing packet drops? Is it some missing open flow rule, something in iptables? Or maybe something else? > On 21 Feb 2020, at 10:17, Chen CH Ji wrote: > > We are running provider network with VLAN /FLAT, does the neutron-openvswitch-agent is mandatory to be running in order to make the VM deployed able to connect outside? > > e.g. I have a VM instance-00000027, I login the VM and ping out side, in the mean time, use another session to > systemctl stop neutron-openvswitch-agent.service then wait for a few minutes, the ping will stop, does this is an expected behavior? > I know nova-compute is not running won't affect VM functionality, but does neutron openvswitch agent is needed or not? 
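To pin down what exactly breaks when the agent stops, one rough approach is to snapshot the flows and iptables rules while the agent is still running and diff them after it has been stopped. This is only a sketch assuming a standard ML2/OVS layout with the iptables hybrid firewall; the statistics fields (n_packets, duration) will always differ, so the interesting part is whether whole rules disappear:

$ sudo ovs-ofctl dump-flows br-int > /tmp/flows-before.txt
$ sudo iptables-save > /tmp/iptables-before.txt
$ sudo systemctl stop neutron-openvswitch-agent.service
$ sleep 120
$ sudo ovs-ofctl dump-flows br-int > /tmp/flows-after.txt
$ sudo iptables-save > /tmp/iptables-after.txt
$ diff /tmp/flows-before.txt /tmp/flows-after.txt
$ diff /tmp/iptables-before.txt /tmp/iptables-after.txt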
Thanks a lot > > [root at kvm02 ~]# virsh list > Id Name State > ----------------------------------- > 4 instance-00000027 running > > [root at kvm02 ~]# virsh console 4 > Connected to domain instance-00000027 > Escape character is ^] > [root at dede ~]# ping > [root at dede ~]# ping 172.16.32.1 > PING 172.16.32.1 (172.16.32.1) 56(84) bytes of data. > 64 bytes from 172.16.32.1: icmp_seq=1 ttl=64 time=19.8 ms > 64 bytes from 172.16.32.1: icmp_seq=2 ttl=64 time=0.667 ms > 64 bytes from 172.16.32.1: icmp_seq=3 ttl=64 time=0.774 ms > > Ji Chen > z Infrastructure as a Service architect > Phone: 10-82451493 > E-mail: jichenjc at cn.ibm.com > — Slawek Kaplonski Senior software engineer Red Hat From jichenjc at cn.ibm.com Mon Feb 24 10:10:50 2020 From: jichenjc at cn.ibm.com (Chen CH Ji) Date: Mon, 24 Feb 2020 10:10:50 +0000 Subject: [openstack-discuss][neutron] is neutron-openvswitch-agent needed for network connection on provider network? In-Reply-To: <60B22D75-C2F4-45E5-AC2E-B87800AB4772@redhat.com> References: <60B22D75-C2F4-45E5-AC2E-B87800AB4772@redhat.com>, Message-ID: An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Feb 24 10:12:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 24 Feb 2020 11:12:25 +0100 Subject: [neutron] Network segment ranges feature In-Reply-To: References: Message-ID: Hi, > On 21 Feb 2020, at 18:23, Rodolfo Alonso wrote: > > Hello Neutrinos: > > First of all, a reference: https://bugs.launchpad.net/neutron/+bug/1863423 > > I detected some problems with this feature. The service plugin, when enabled and according to the > spec [1], should segregate the segmentation range in several ranges. Each one (or many) of those > ranges could be assigned to a project. When a network is created in a specific project, should > receive a segmentation ID from the project network segment ranges. In case of not having any range > assigned, a segmentation ID from a default range will be provided. > > How the current implementation actually works: When a driver is loaded (VLAN, VXLAN, GRE or Geneve), > it creates a default network segment range, based on the static configuration provided in the plugin > ("network_vlan_ranges", "vni_ranges", etc). Then the admin can create project specific network > segment ranges (remember: always per driver type). Those project specific ranges cannot overlap but > the project specific ranges CAN overlap the default one. A valid state could be: > > VLAN: > - Default: 100:1000 > - Project1: 100:200 > - Project2: 200:300 > > When assigning segmentation IDs to new networks, the driver will query [2]: > - The existing ranges per project. > - The default range (always present). > > The result is that, if the project network segment range runs out of segment IDs, it will retrieve > those ones from the default range. That will lead, eventually, to a clash between ranges. Project 1 > will be able to allocate a network with a segmentation ID from Project 2 range. E.g.: [3] > > The problem is how the queries are done: [4] --> select one of each commented line to reproduce > "query_project_id" and "query_shared" in [2]. > > Now we know the feature is not properly working, my question is what the proper behaviour should be. > Some alternatives: > > 1) If a project has one or more network segment ranges, the segmentation ID should be retrieved ONLY > from those ranges. 
If a project does not have any range, then it will retrieve from the shared pool > BUT NEVER selecting a segment ID belonging to other project ID (ufff this query is going to be > complex). --> IMO, the best solution. I like this solution and IMO this makes sense but we will not be able to backport it to the stable branches as it changes API behaviour for end user - it may immediately after update notice errors on creation of new networks when before update it was possible using segmentation_id from shared range. > > 2) If a project has one or more network segment ranges, the segmentation ID should be retrieved > first from those ranges and then from the shared one, BUT NEVER selecting a segment ID belonging to > other project ID. Same for range-less projects. --> IMO, if the administrator has assigned a range > for a project, the project should ONLY use this pool. This is not a good solution. This IMO would be better solution for stable branches. As a bug is pretty serious and one tenant can “stole” segmentation ids dedicated to the other tenant I think we should think about fix in stable branches too so I would probably go with this solution. > > Can I have your feedback? > > Regards. > > > [1] > https://specs.openstack.org/openstack/neutron-specs/specs/stein/network-segment-range-management.html > [2]https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/helpers.py#L89-L121 > [3]http://paste.openstack.org/show/789872/ > [4]http://paste.openstack.org/show/789873/ > > > > > — Slawek Kaplonski Senior software engineer Red Hat From bcafarel at redhat.com Mon Feb 24 10:37:40 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Mon, 24 Feb 2020 11:37:40 +0100 Subject: [neutron] Bug deputy report (week starting on 2020-02-17) Message-ID: Picking up after amotoki's report [0], here is last week's report. This includes all bugs reported up to 1864374 (included) Some gate issues (python2 is still fighting us back, etc), new "interesting" bugs on different topics. 
Bugs marked with "***" do not have current assignee Critical * socket.timeout error in dvr CI jobs cause SSH issues - https://bugs.launchpad.net/neutron/+bug/1863858 Under investigation by slaweq - neutron-tempest-plugin-designate-scenario fails on stable branches with "SyntaxError: invalid syntax" installing dnspython - https://bugs.launchpad.net/neutron/+bug/1864015 Patch merged to pin designate plugin - https://review.opendev.org/708825 - "ping" prepended to ip netns commands - https://bugs.launchpad.net/neutron/+bug/1864186 Patch merged: https://review.opendev.org/709100 High * When updating the network segment ID, the project_id should be provided - https://bugs.launchpad.net/neutron/+bug/1863619 Needed parameter to get private ranges belonging to the project, fix in progress: https://review.opendev.org/708196/ * [OVN] Provider driver IPv6 traffic doesn't work - https://bugs.launchpad.net/neutron/+bug/1863892 IPv6 load balancers with OVN provider do not work, in progress by maciejjozefczyk * [OVN] DHCP doesn't work while instance has disabled port security - https://bugs.launchpad.net/neutron/+bug/1864027 Behaviour introduced by a recent fix, root cause may be in core OVN *** IP allocation for stateless IPv6 does not filter on segment when fixed-ips contain a subnet_id - https://bugs.launchpad.net/neutron/+bug/1864225 Title sums it up nicely, stateless IPv6 + segments issue *** When adding another stateless subnet, implicit address allocation happen for port with deferred ip allocation - https://bugs.launchpad.net/neutron/+bug/1864333 IPv6 allocation happens for deffered allocation port Medium *** Unhandled error - WSREP has not yet prepared node for application use - https://bugs.launchpad.net/neutron/+bug/1863579 Unassigned. Transient startup error seen in kolla-ansible upgrade CI when database is not ready, we can probably add a retry here * [neutron-tempest-plugin] test_trunk.TrunkTestInheritJSONBase.test_add_subport fails if unordered - https://bugs.launchpad.net/neutron/+bug/1863707 Results order failure in test, fix in progress: https://review.opendev.org/708305/ * [OVN]Could not support more than one qos rule in one policy - https://bugs.launchpad.net/neutron/+bug/1863852 Fix in progress: https://review.opendev.org/708586 * Missing information in log message for Invalid tunnel type - https://bugs.launchpad.net/neutron/+bug/1863888 Add some log extra information, fix in progress: https://review.opendev.org/708634 * [OVN] OVN LoadBalancer VIP shouldn't have addresses set - https://bugs.launchpad.net/neutron/+bug/1863893 Regression fix, in progress by maciejjozefczyk * [OVN] Remove dependency on port_object - https://bugs.launchpad.net/neutron/+bug/1863987 Suggested fix: https://review.opendev.org/708806/ Low * Default security group rules not created during list of SG rules - https://bugs.launchpad.net/neutron/+bug/1864171 Fix in progress: https://review.opendev.org/709070 *** BGP dynamic routing in neutron - No Info about capability to receiv/learn dynamic routes - https://bugs.launchpad.net/neutron/+bug/1864219 Doc enhancement request on BGP limitations *** ml2 ovs does not flush iptables switching to FW ovs - https://bugs.launchpad.net/neutron/+bug/1864374 Switching from iptables to ovs firewall is not a trivial task, this may end up being documentation bug too Related to neutron * Upgrade from Rocky to Stein, router namespace disappear - https://bugs.launchpad.net/neutron/+bug/1863982 Issue in kolla-ansible upgrade Rocky->Stein, failing to restore iptables rules. 
It does not seem to be neutron-specific, but opinions welcome [0] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012610.html -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Feb 24 11:37:38 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 24 Feb 2020 12:37:38 +0100 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <2130957943.152281.1582365846726@mail.yahoo.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> Message-ID: prakash RAMCHANDRAN wrote: > I am curious to find out if any Distribution or Products based on > Openstack  Train or Usuri are seeking the latest certifications based on > 2019.02. If you look at https://www.openstack.org/marketplace/ you can find the mention of the version of the guideline that products are tested against. For example: https://www.openstack.org/marketplace/public-clouds/warescale/warescale-public-cloud appears to be valid against 2019.11 guidelines https://www.openstack.org/marketplace/public-clouds/deutsche-telekom/open-telekom-cloud appears to be valid against 2019.06 guidelines etc. > Similarly does any Hardware Driver of Software application seeking > OpenStack compatibility Logo? We don't have a trademark-driven certification program for drivers. Each project documents which vendor drivers are compatible. See for example for Cinder drivers, those who are "supported vendor driver" have been tested against current versions of OpenStack through 3rd-party testing: https://docs.openstack.org/cinder/latest/reference/support-matrix.html -- Thierry Carrez (ttx) From thierry at openstack.org Mon Feb 24 12:42:16 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 24 Feb 2020 13:42:16 +0100 Subject: [largescale-sig] Next meeting: Feb 26, 9utc Message-ID: <49fea818-6829-863a-3469-b2ec2b7d1e0b@openstack.org> Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, Feb 26 at 9 UTC[1] in #openstack-meeting on IRC: [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200226T09 As always, the agenda for our meeting is available at: https://etherpad.openstack.org/p/large-scale-sig-meeting Feel free to add topics to it! A quick reminder of the TODOs from our last meeting: - masahito to update spec based on initial feedback - everyone to review and comment on https://review.opendev.org/#/c/704733/ Talk to you all on Wednesday, -- Thierry Carrez From isanjayk5 at gmail.com Mon Feb 24 13:43:59 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Mon, 24 Feb 2020 19:13:59 +0530 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster In-Reply-To: References: Message-ID: Hi , Is there any plan to provide local storage support for cinder in future if it is not readily available in openstack-helm current version 3? I appreciate your reply as it will give us clear expectations for our future deployment and make our planning accordingly. thanks and best regards, Sanjay On Mon, Feb 17, 2020 at 2:54 PM Sanjay K wrote: > Hi Tin, > Is there any support for local storage for cinder deployment? If yes, how > can I try that? Since Ceph is not part of our production deployment, I > can't include Ceph in our deployment. > > thanks and regards, > Sanjay > > On Mon, Feb 17, 2020 at 2:34 PM Tin Lam wrote: > >> Hello, Sanjay - >> >> IIRC, cinder service in OSH never supported the NFS provisioner. 
Can you >> try using the Ceph storage charts instead? >> >> Regards, >> Tin >> >> On Mon, Feb 17, 2020 at 2:54 AM Sanjay K wrote: >> >>> Hello openstack-helm team, >>> >>> I am trying to deploy stein cinder service in my k8s cluster using >>> persistent volume and persistent volume claim for local storage or NFS >>> storage. However after deploying cinder in my cluster, the cinder pods >>> remains in Init state even though the PV and PVC are created. >>> >>> Please look at my below post on openstack forum and guide me how to >>> resolve this issue. >>> >>> >>> https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ >>> >>> thank you for your help and support on this. >>> >>> best regards, >>> Sanjay >>> >> >> >> -- >> Regards, >> >> Tin Lam >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvoelker at vmware.com Mon Feb 24 14:09:46 2020 From: mvoelker at vmware.com (Mark Voelker) Date: Mon, 24 Feb 2020 14:09:46 +0000 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <2130957943.152281.1582365846726@mail.yahoo.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> Message-ID: <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> Hi Prakash, I am curious to find out if any Distribution or Products based on Openstack Train or Usuri are seeking the latest certifications based on 2019.02. Hm, there actually isn’t a 2019.02 guideline--were you perhaps referring to 2019.06 or 2019.11? 2019.06 does cover Train but not Usuri [1], 2019.11 covers both [2]. As an FYI, the OpenStack Marketplace does list which guideline a particular product was most recently tested against (refer to https://www.openstack.org/marketplace/distros/ for example, and look for the green “TESTED” checkmark and accompanying guideline version), though this obviously doesn’t tell you what testing might be currently in flight. [1] https://opendev.org/openstack/interop/src/branch/master/2019.06.json#L75 [2] https://opendev.org/openstack/interop/src/branch/master/2019.11.json#L75 At Your Service, Mark T. Voelker On Feb 22, 2020, at 5:04 AM, prakash RAMCHANDRAN > wrote: Hi all, I am curious to find out if any Distribution or Products based on Openstack Train or Usuri are seeking the latest certifications based on 2019.02. Similarly does any Hardware Driver of Software application seeking OpenStack compatibility Logo? Finally does anyone think that Open Infra Distro like Airship or StarlingX should promote Open Infra Airship or Open Infra StarlingX powered as a new way to promote eco system surrounding them similar to OpenStack compatible drivers and software. Will then Argo, customize, Metal3.io or Ironic be qualified as Open Infra Airship compatible? If so how tempest can help in testing the above comments? Refer to this market place below as how Distos and Products leverage OpenStack logos and branding programs. https://www.openstack.org/marketplace/distros/ Discussions and feedback are welcome. A healthy debate as how k8s modules used in Open Infra can be certified will be a good start. Thanks Prakash Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at est.tech Mon Feb 24 14:44:44 2020 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Mon, 24 Feb 2020 14:44:44 +0000 Subject: [ptl][release][stable][EM] Extended Maintenance - Rocky In-Reply-To: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> References: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> Message-ID: Hi Teams, PTLs, Stable Branch Liaisons, This is a reminder that Rocky Extended Maintenance transition is near. Please try to schedule a release if it's necessary, for the coming days. (The estimated transition day was today, but it seems a couple of teams are about to initiate a release.) Thanks, Előd On 2020. 01. 27. 17:34, Elõd Illés wrote: > Hi, > > In less than one month Rocky is planned to enter into Extended > Maintenance phase [1] (estimated date: 2020-02-24). > > I have generated the list of *open* and *unreleased* changes in > *stable/rocky* for the follows-policy tagged repositories [2]. These > lists could help the teams, who are planning to do a final release on > Rocky before moving stable/rocky branches to Extended Maintenance. Feel > free to edit them! > > * At the transition date the Release Team will tag the latest (Rocky) > releases of repositories with *rocky-em* tag. > * After the transition stable/rocky will be still open for bugfixes, but > there won't be any official releases. > > NOTE: teams, please focus on wrapping up your libraries first if there > is any concern about changes, in order to avoid any broken releases! > > Thanks, > > Előd > > [1] https://releases.openstack.org/ > [2] https://etherpad.openstack.org/p/rocky-final-release-before-em > From lennyb at mellanox.com Mon Feb 24 14:47:24 2020 From: lennyb at mellanox.com (Lenny Verkhovsky) Date: Mon, 24 Feb 2020 14:47:24 +0000 Subject: [tripleo] openstack-train mirror for CentOS aarch64 Message-ID: Hi, I see that there is no openstack-train repo for aarch64 for CentOS[1] This repo is also used by kolla project. Who maintains it and if there are any plans for adding train repo? [1] http://mirror.centos.org/altarch/7/cloud/aarch64/ Best Regards Lenny Verkhovsky Mellanox Technologies office:+972 74 712 92 44 irc:lennyb -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Mon Feb 24 15:08:02 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 24 Feb 2020 10:08:02 -0500 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: References: Message-ID: not a keystone person, but I can offer you this: https://opendev.org/openstack/sahara/src/commit/75df1e93872a3a6b761d0eb89ca87de0b2b3620f/sahara/utils/openstack/keystone.py#L32 It's a nasty workaround for getting config values from keystone_authtoken which are supposed to private for keystonemiddleware only. It's probably a bad idea. On Fri, Feb 21, 2020 at 9:43 AM Justin Cattle wrote: > > Just to add, it also doesn't seem to be registering the password option from keystone_authtoken either. > > So, makes me think the auth plugin isn't loading , or not the right one at least ?? > > > Cheers, > Just > > > On Thu, 20 Feb 2020 at 20:55, Justin Cattle wrote: >> >> Hi, >> >> >> I'm reaching out for help with a strange issue I've found. Running openstack queens, on ubuntu xenial. >> >> We have a bunch of different sites with the same set-up, recently upgraded from mitaka to queens. However, on this one site, after the upgrade, we cannot start neutron-server. 
The reason is, that the ml2 plugin throws an error because it can't find auth_url from the keystone_authtoken section of neutron.conf. However, it is there in the file. >> >> The ml2 plugin is calico, it fails with this error: >> >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in function %s: TypeError: expected string or buffer >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most recent call last): >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, in wrapped >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return fn(*args, **kwargs) >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", line 347, in _post_fork_init >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico auth_url=re.sub(r'/v3/?$', '', auth_url) + >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/re.py", line 155, in sub >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return _compile(pattern, flags).sub(repl, string, count) >> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: expected string or buffer >> >> >> When you look at the code, this is because neither auth_url or is found in cfg.CONF.keystone_authtoken. The config defintely exists. >> >> I have copied the neutron.conf config from a working site, same error. I have copied the entire /etc/neutron directory from a working site, same error. >> >> I have check with strace, and /etc/neutron/neutron.conf is the only neutron.conf being parsed. >> >> Here is the keystone_authtoken part of the config: >> >> [keystone_authtoken] >> auth_uri=https://api-srv-cloud.host.domain:5000 >> region_name=openstack >> memcached_servers=1.2.3.4:11211 >> auth_type=password >> auth_url=https://api-srv-cloud.host.domain:5000 >> username=neutron >> password=xxxxxxxxxxxxxxxxxxxxxxxxx >> user_domain_name=Default >> project_name=services >> project_domain_name=Default >> >> >> I'm struggling to understand how the auth_url config is really registered in via oslo_config. >> I found an excellent exchagne on the ML here: >> >> https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware >> >> This seems to indicate auth_url is only registered if a particular auth plugin requires it. But I can't find the plugin code that does it, so I'm not sure how/where to debug it properly. >> >> If anyone has any ideas, I would really appreciate some input or pointers. >> >> Thanks! >> >> >> Cheers, >> Just > > > Notice: > This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. > > If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. 
> > References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. From nicolas.ghirlanda at everyware.ch Mon Feb 24 09:18:22 2020 From: nicolas.ghirlanda at everyware.ch (Nicolas Ghirlanda) Date: Mon, 24 Feb 2020 10:18:22 +0100 Subject: [neutron][dhcp] "additional" dhcp port in state "reserved_dhcp_port", non existing namespace on network node Message-ID: <118dc1a9-3780-de30-26cd-8f90fe8ca49c@everyware.ch> Dear mailinglist, We have the following behavour, which occured in about 3% of our networks. We have a default configured of 3 dhcp servers per subnet, but in about 3% of all networks, we have unexpectedly 4, from which one is in state "reserved_dhcp_port", also there is NO namespace on that control/network node. Setup: Openstack Rocky, Networkmode legacy # openstack port list --device-owner network:dhcp --network a2d4605d-997e-4807-9250-c0c80af3183e +--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+ | 40237ca8-aed6-49ee-xxxx-49acf97797e4 | | fa:16:3e:e1:33:83 | ip_address='172.16.0.4', subnet_id='e4380af7-13b4-xxxx-90ff-6b0a2818052b' | ACTIVE | | 6a3898c6-6520-43e7-xxxx-e6b2639f2600 | | fa:16:3e:ff:f0:7b | ip_address='172.16.0.2', subnet_id='e4380af7-13b4-xxxx-90ff-6b0a2818052b' | ACTIVE | | 89479d32-881d-45b9-xxxx-1b41ed8fd703 | | fa:16:3e:ef:7d:dc | ip_address='172.16.0.6', subnet_id='e4380af7-13b4-xxxx-90ff-6b0a2818052b' | ACTIVE | | eaeb0ef7-48c6-43ea-xxxx-8615ed704e48 | | fa:16:3e:66:7c:96 | ip_address='172.16.0.3', subnet_id='e4380af7-13b4-xxxx-90ff-6b0a2818052b' | ACTIVE | +--------------------------------------+------+-------------------+---------------------------------------------------------------------------+--------+ full output of one of that port: # openstack port show 89479d32-881d-45b9-xxxx-1b41ed8fd703 +-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Field                   | Value | +-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | admin_state_up          | UP | | allowed_address_pairs | | | binding_host_id         | ctl5 | | binding_profile | | | binding_vif_details     | datapath_type='system', ovs_hybrid_plug='True', port_filter='True' | | binding_vif_type        | ovs | | binding_vnic_type       | normal | | created_at              | 2020-01-16T11:22:14Z | | data_plane_status       | None | | description | | | device_id               | reserved_dhcp_port | | device_owner            | network:dhcp | | dns_assignment          | None | | dns_domain              | None | | dns_name                | None | | extra_dhcp_opts | | | fixed_ips               | ip_address='172.16.0.6', 
subnet_id='e4380af7-13b4-40f9-xxxx-xxxx' | | id                      | 89479d32-881d-45b9-xxxx-1b41ed8fd703 | | location                | Munch({'project': Munch({'domain_id': None, 'id': u'a772e4ab888e4fxxxx', 'name': 'admin', 'domain_name': 'Default'}), 'cloud': '', 'region_name': 'ch-zh1', 'zone': None}) | | mac_address             | fa:16:3e:ef:7d:xx | | name | | | network_id              | a2d4605d-997e-4807-xxxx-xxx | | port_security_enabled   | False | | project_id              | 232ecbeb96fd4663xxxx | | propagate_uplink_status | None | | qos_policy_id           | None | | resource_request        | None | | revision_number         | 8 | | security_group_ids | | | status                  | ACTIVE | | tags | | | trunk_details           | None | | updated_at              | 2020-02-14T09:16:57Z | +-------------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ ctl5:~# ip netns |grep a2d4605d ctl5:~# *Workaround:* I can easily delete those ports manually and everything seems to be ok (openstack delete port ) When I tried figuring out, what happens, when I disable dhcp in an affected subnet, the following happens (different example): The 6f34 is a reserved port (last line) # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-af71-a0bc5335f9c5 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | 23f9e5d7-75a2-440a-xxxx-d82000286fbe |      | fa:16:3e:8b:e2:a8 | ip_address='192.168.50.3', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | | 352bc1a4-e002-440b-xxxx-0383cfb1632b |      | fa:16:3e:29:46:6b | ip_address='192.168.50.5', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | | 6ee79608-7834-4015-xxxx-fda3f47b5e59 |      | fa:16:3e:0c:b6:60 | ip_address='192.168.50.4', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | | 6f3435f8-23c0-427c-xxxx-fca1a6910aff |      | fa:16:3e:2d:35:b6 | ip_address='192.168.50.2', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ - I disable dhcp (openstack subnet set --no-dhcp ), expected to remove all dhcp ports, but the what happens is that all "healthy" dhcp ports will be removed, the reserved port remains # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-af71-a0bc5335f9c5 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | 6f3435f8-23c0-427c-xxxx-fca1a6910aff |      | fa:16:3e:2d:35:b6 | ip_address='192.168.50.2', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | 
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ - when I now reenable dhcp, nothing happens, still the single, reserved port is existing, no new ports are created. Just the state went from ACTIVE to DOWN to ACTIVE # openstack subnet set --dhcp 882b360a-eae1-4736-a276-af752f552440 # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-xxxx-a0bc5335f9c5 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | 6f3435f8-23c0-427c-xxxx-fca1a6910aff |      | fa:16:3e:2d:35:b6 | ip_address='192.168.50.2', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | DOWN   | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ s# openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-xxxx-a0bc5335f9c5 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | 6f3435f8-23c0-427c-xxxx-fca1a6910aff |      | fa:16:3e:2d:35:b6 | ip_address='192.168.50.2', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ - after deleting the port manually and disable/enable dhcp, 3 new working ports are created # openstack port delete 6f3435f8-23c0-427c-xxxx-fca1a6910aff # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-xxxx-a0bc5335f9c5 # openstack subnet set --no-dhcp 882b360a-eae1-4736-xxxx-af752f552440 # openstack subnet set --dhcp 882b360a-eae1-4736-xxxx-af752f552440 # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-xxxx-a0bc5335f9c5 # openstack port list --device-owner network:dhcp --network 8589eb04-228f-4ae1-xxxx-a0bc5335f9c5 +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | ID                                   | Name | MAC Address       | Fixed IP Addresses | Status | +--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ | 9fd63011-2b1f-410a-xxxx-40d42779c637 |      | fa:16:3e:15:be:fd | ip_address='192.168.50.4', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | | cdd32840-7899-47b2-xxxx-7d36eae515a2 |      | fa:16:3e:4c:40:5c | ip_address='192.168.50.3', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | | dc1eeba2-4afd-4502-xxxx-7370d086e553 |      | fa:16:3e:29:29:5d | ip_address='192.168.50.2', subnet_id='882b360a-eae1-4736-xxxx-af752f552440' | ACTIVE | 
+--------------------------------------+------+-------------------+-----------------------------------------------------------------------------+--------+ Any ideas where those are coming from? thanks a lot Nicolas -- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 2818 bytes Desc: not available URL: From acormier at 128technology.com Mon Feb 24 15:44:12 2020 From: acormier at 128technology.com (Austin Cormier) Date: Mon, 24 Feb 2020 10:44:12 -0500 Subject: [ironic][neutron][ops] Ironic multi-tenant networking with VXLAN Overlay Message-ID: We are using Openstack + Ironic to test routing software which requires the use of VLANs. Using VXLan overlays allows the VM's to have nested VLANs without issue as they are encapsulated into a VXLAN and sent across. However, we are struggling to find a path forward in extending the VXLAN overlay out to the hardware device. Unfortunately, the networking-generic-switch driver does not support VXLAN but this seems to fairly easy to extend (given the right switch). The challenge I'm facing is that with Neutron OVS + L2Pop, there seems to be no easy way to extend the L2 population to include an external device. The only path forward that I can see is to use the Linux Bridge driver + VXLAN multicast and find a switch that also supports VXLAN multicast so it can participate in the overlay dynamically. The HPE FlexFabric seems to do this from the documentation: https://support.hpe.com/hpesc/public/docDisplay?docId=emr_na-c04567545 Can anyone help confirm that the switch above will work and or give alternative suggestions on approaching this problem? -Austin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Feb 24 15:54:47 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 24 Feb 2020 15:54:47 +0000 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso wrote: > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: >> >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: >> > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek >> > wrote: >> > > >> > > I know it was for masakari. >> > > Gaëtan had to grab crmsh from opensuse: >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ >> > > >> > > -yoctozepto >> > >> > Thanks Wes for getting this discussion going. I've been looking at >> > CentOS 8 today and trying to assess where we are. I created an >> > Etherpad to track status: >> > https://etherpad.openstack.org/p/kolla-centos8 >> > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. I've been working on the backport of kolla CentOS 8 patches to the stable/train branch. It looks like these packages which were added to master are not present in Train. > >> >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error >> code when installing packages. It often happens on the rabbitmq and >> grafana images. There is a prompt about importing GPG keys prior to >> the error. 
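A side note on that exit status: 141 is 128 plus 13, i.e. the dnf process died on SIGPIPE, which is consistent with it sitting at an interactive GPG confirmation prompt after its output pipe has already gone away. A minimal sketch of a workaround in the image build, assuming the CentOS 8 key path shown (adjust it to whichever repo is actually prompting):

    # import the repo signing key up front so dnf never has to ask,
    # then keep the install itself non-interactive
    # (key path and package name are just examples)
    rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
    dnf -y install rabbitmq-server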
>> >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log >> >> Related bug report? https://github.com/containers/libpod/issues/4431 >> >> Anyone familiar with it? >> > > Didn't know about this issue. > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > >> > >> > > >> > > pon., 27 sty 2020 o 10:13 Marcin Juszkiewicz >> > > napisał(a): >> > > > >> > > > W dniu 27.01.2020 o 09:48, Alfredo Moralejo Alonso pisze: >> > > > > How is crmsh used in these images?, ha packages included in >> > > > > HighAvailability repo in CentOS includes pcs and some crm_* commands in pcs >> > > > > and pacemaker-cli packages. IMO, tt'd be good to switch to those commands >> > > > > to manage the cluster. >> > > > >> > > > No idea. Gaëtan Trellu may know - he created those images. >> > > > >> > > >> From openstack at nemebean.com Mon Feb 24 16:17:06 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Feb 2020 10:17:06 -0600 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: References: Message-ID: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> On 2/24/20 9:08 AM, Jeremy Freudberg wrote: > not a keystone person, but I can offer you this: > https://opendev.org/openstack/sahara/src/commit/75df1e93872a3a6b761d0eb89ca87de0b2b3620f/sahara/utils/openstack/keystone.py#L32 > > It's a nasty workaround for getting config values from > keystone_authtoken which are supposed to private for > keystonemiddleware only. It's probably a bad idea. Yeah, config opts should generally not be referenced by other projects. The oslo.config deprecation mechanism doesn't handle the case where an opt gets renamed but is still being referred to in the code by its old name. I realize that's not what happened here, but in general it's a good reason not to do this. If a given config value needs to be exposed to consumers of a library it should be explicitly provided via an API. I realize that's not what happened here, but it demonstrates the fragility of referring to another project's config opts directly. It's also possible that a project could change when its opts get registered, which may be what's happening here. If this plugin code is running before keystoneauth has registered its opts that might explain why it's not being found. That may also explain why it's working in some other environments - if the timing of when the opts are registered versus when the plugin code gets called is different it might cause that kind of varying behavior with otherwise identical code/configuration. I have vague memories of this having come up before, but I can't remember exactly what the recommendation was. Hopefully someone from Keystone can chime in. > > > On Fri, Feb 21, 2020 at 9:43 AM Justin Cattle wrote: >> >> Just to add, it also doesn't seem to be registering the password option from keystone_authtoken either. >> >> So, makes me think the auth plugin isn't loading , or not the right one at least ?? >> >> >> Cheers, >> Just >> >> >> On Thu, 20 Feb 2020 at 20:55, Justin Cattle wrote: >>> >>> Hi, >>> >>> >>> I'm reaching out for help with a strange issue I've found. Running openstack queens, on ubuntu xenial. >>> >>> We have a bunch of different sites with the same set-up, recently upgraded from mitaka to queens. 
However, on this one site, after the upgrade, we cannot start neutron-server. The reason is, that the ml2 plugin throws an error because it can't find auth_url from the keystone_authtoken section of neutron.conf. However, it is there in the file. >>> >>> The ml2 plugin is calico, it fails with this error: >>> >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in function %s: TypeError: expected string or buffer >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most recent call last): >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, in wrapped >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return fn(*args, **kwargs) >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", line 347, in _post_fork_init >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico auth_url=re.sub(r'/v3/?$', '', auth_url) + >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/re.py", line 155, in sub >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return _compile(pattern, flags).sub(repl, string, count) >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: expected string or buffer >>> >>> >>> When you look at the code, this is because neither auth_url or is found in cfg.CONF.keystone_authtoken. The config defintely exists. >>> >>> I have copied the neutron.conf config from a working site, same error. I have copied the entire /etc/neutron directory from a working site, same error. >>> >>> I have check with strace, and /etc/neutron/neutron.conf is the only neutron.conf being parsed. >>> >>> Here is the keystone_authtoken part of the config: >>> >>> [keystone_authtoken] >>> auth_uri=https://api-srv-cloud.host.domain:5000 >>> region_name=openstack >>> memcached_servers=1.2.3.4:11211 >>> auth_type=password >>> auth_url=https://api-srv-cloud.host.domain:5000 >>> username=neutron >>> password=xxxxxxxxxxxxxxxxxxxxxxxxx >>> user_domain_name=Default >>> project_name=services >>> project_domain_name=Default >>> >>> >>> I'm struggling to understand how the auth_url config is really registered in via oslo_config. >>> I found an excellent exchagne on the ML here: >>> >>> https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware >>> >>> This seems to indicate auth_url is only registered if a particular auth plugin requires it. But I can't find the plugin code that does it, so I'm not sure how/where to debug it properly. >>> >>> If anyone has any ideas, I would really appreciate some input or pointers. >>> >>> Thanks! >>> >>> >>> Cheers, >>> Just >> >> >> Notice: >> This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. >> >> If you are not the intended recipient, please notify us immediately and delete all copies of this message. 
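Coming back to the auth_url question above: options such as auth_url, username and password in [keystone_authtoken] are not static options, they are registered dynamically for whatever auth plugin auth_type names, which is why cfg.CONF.keystone_authtoken.auth_url can be missing if nothing has registered them by the time the plugin code runs. Where a service really has to consume that section (a service-owned section, as suggested above, is the cleaner long-term fix), the usual pattern is to go through keystoneauth1.loading rather than reading the raw options. A minimal sketch, assuming the configuration shown earlier in the thread:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF

    # register_auth_conf_options adds auth_type/auth_section; the plugin's
    # own options (auth_url, username, password, ...) are registered when
    # the plugin is loaded below.
    ks_loading.register_session_conf_options(CONF, 'keystone_authtoken')
    ks_loading.register_auth_conf_options(CONF, 'keystone_authtoken')

    # Build an auth plugin and session from those options.
    auth = ks_loading.load_auth_from_conf_options(CONF, 'keystone_authtoken')
    session = ks_loading.load_session_from_conf_options(
        CONF, 'keystone_authtoken', auth=auth)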
Please note that it is your responsibility to scan this message for viruses. >> >> References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. > From openstack at nemebean.com Mon Feb 24 16:21:07 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 24 Feb 2020 10:21:07 -0600 Subject: [oslo] Meeting Time Poll In-Reply-To: <7be71b4b-7c5f-3d99-dcff-65cb629b77a7@nemebean.com> References: <7be71b4b-7c5f-3d99-dcff-65cb629b77a7@nemebean.com> Message-ID: <6c17366e-6ccb-45ce-ad24-2c9f20deefdc@nemebean.com> And the new time for the Oslo meeting is... *drumroll please* ...unchanged! That's right, after surveying the Oslo team and looking at when people were available, we realized that DST will solve most of our conflicts in a few weeks. \o/ As a result we will keep the Oslo meeting at 1500 UTC on Monday, unless it turns out that (like me) everyone else forgot about DST while filling out the survey and that time doesn't actually work well. :-) Thanks to everyone who participated in the poll! -Ben On 2/10/20 4:52 PM, Ben Nemec wrote: > Hello again, > > We have a few regular attendees of the Oslo meeting who have conflicts > with the current meeting time. As a result, we would like to find a new > time to hold the meeting. I've created a Doodle poll[0] for everyone to > give their input on times. It's mostly limited to times that reasonably > overlap the working day in the US and Europe since that's where most of > our attendees are located (yes, I know, that's a self-fulfilling prophecy). > > If you attend the Oslo meeting, please fill out the poll so we can > hopefully find a time that works better for everyone. Thanks! > > -Ben > > /me finally checks this one off the action items for next week :-) > > 0: https://doodle.com/poll/zmyhrhewtes6x9ty > From amy at demarco.com Mon Feb 24 17:03:03 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 24 Feb 2020 11:03:03 -0600 Subject: RHOSP-like installation In-Reply-To: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> References: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Message-ID: Volodymyr, The latest docs can be found at https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/ Thanks, Any (spotz) On Thu, Feb 13, 2020 at 5:05 PM Volodymyr Litovka wrote: > Dear colleagues, > > while having a good experience with Openstack on Ubuntu, we're facing a > plenty of questions re RHOSP installation. > > The primary requirement for our team re RHOSP is to get a knowledge on > RHOSP - how to install it and maintain. As far as I understand, RDO is > the closest way to reach this target but which kind of installation it's > better to use? - > * plain RDO installation as described in generic Openstack guide at > https://docs.openstack.org/install-guide/index.html (specifics in > RHEL/CentOS sections) > * or TripleO installation as described in http://tripleo.org/install/ > * or, may be, it is possible to use RHOSP in kind of trial mode to get > enough knowledge on this platform? > > Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're > going to use in "ultraconverged" mode - as both controller and agent > (compute/network/storage) nodes (controllers, though, can be in > virsh-controlled VMs). 
In case of TripleO scenario, 4th server can be > used for undercloud role. This installation is intended not for > production use, but rather for learning purposes, so no special > requirements for productivity. The only special requirement - to be > functionally as much as close to canonical RHOSP platform. > > I will highly appreciate your suggestions on this issue. > > Thank you. > > -- > Volodymyr Litovka > "Vision without Execution is Hallucination." -- Thomas Edison > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From grant at civo.com Mon Feb 24 18:07:45 2020 From: grant at civo.com (Grant Morley) Date: Mon, 24 Feb 2020 18:07:45 +0000 Subject: Neutron metadata service not responding due to rabbitmq dropping messages Message-ID: Hi all, We have recently come across an issue where our metadata service stops responding. If you try to curl the service from within an instance you get: % curl http://169.254.169.254

504 Gateway Time-out

The server didn't respond in time. After doing some digging around on our neutron nodes I noticed we were getting loads of RabbitMQ timeout errors whilst trying to process message requests: 2020-02-24 07:28:09.747 26378 ERROR neutron.common.rpc [-] Timeout in RPC method get_ports. Waiting for 26 seconds before next attempt. If the server is not down, consider increasing the rpc_response_timeout option as Neutron server(s) may be overloaded and unable to respond quickly enough.: MessagingTimeout: Timed out waiting for a reply to message ID a14c4a1395864cd980c1ec563a5c48aa The servers are fairly busy, however we do not have a massive installation >1500 instances and roughly 850 routers. However if I restart the "neutron-metadata-agent" service and the "neutron-server" service it seems to fix the issue for a while but ultimately it comes back. I did increase the "rpc_timeout" on the netutron nodes to 120 seconds but that seems quite long to me. Likewise the RabbitMQ servers are not overly busy, we seem to get a constant stream of only 40+ messages in the queue at one time and that can spike depending on workload. Does anyone know of any tuning or tweaking we can do to the metadata service in either Neutron or Nova that might help? We are running OpenStack Queens if that helps. Many thanks, Grant From openstack at fried.cc Mon Feb 24 18:55:15 2020 From: openstack at fried.cc (Eric Fried) Date: Mon, 24 Feb 2020 12:55:15 -0600 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> References: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> Message-ID: Hi Jeremy- >From what I understand, ksm is set up so that it registers the conf opts for the [keystone_authtoken] section implicitly when you import keystonemiddleware.auth_token, which imports _opts [1], which defines and registers the options [2]. I'm not an expert, but I believe you(r code) should be relying on the above exclusively, and not trying to find/register these options in any other way. Assuming that's already happening, then as others suggested, it may be a matter of import ordering. HTH, efried [1] https://opendev.org/openstack/keystonemiddleware/src/branch/stable/queens/keystonemiddleware/auth_token/__init__.py#L240 [2] https://opendev.org/openstack/keystonemiddleware/src/branch/stable/queens/keystonemiddleware/auth_token/_opts.py#L219-L220 From whayutin at redhat.com Mon Feb 24 19:07:54 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 24 Feb 2020 12:07:54 -0700 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Mon, Feb 24, 2020 at 8:55 AM Mark Goddard wrote: > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > wrote: > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > >> > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > >> > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > >> > wrote: > >> > > > >> > > I know it was for masakari. > >> > > Gaëtan had to grab crmsh from opensuse: > >> > > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > >> > > > >> > > -yoctozepto > >> > > >> > Thanks Wes for getting this discussion going. I've been looking at > >> > CentOS 8 today and trying to assess where we are. 
I created an > >> > Etherpad to track status: > >> > https://etherpad.openstack.org/p/kolla-centos8 > >> > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know > if you find some issue with it. > > I've been working on the backport of kolla CentOS 8 patches to the > stable/train branch. It looks like these packages which were added to > master are not present in Train. > > I'll help check in on that for you Mark. Thank you!! > > > >> > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > >> code when installing packages. It often happens on the rabbitmq and > >> grafana images. There is a prompt about importing GPG keys prior to > >> the error. > >> > >> Example: > https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > >> > >> Related bug report? https://github.com/containers/libpod/issues/4431 > >> > >> Anyone familiar with it? > >> > > > > Didn't know about this issue. > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are > interested in using it from there instead of rabbit repo. > > > >> > > >> > > > >> > > pon., 27 sty 2020 o 10:13 Marcin Juszkiewicz > >> > > napisał(a): > >> > > > > >> > > > W dniu 27.01.2020 o 09:48, Alfredo Moralejo Alonso pisze: > >> > > > > How is crmsh used in these images?, ha packages included in > >> > > > > HighAvailability repo in CentOS includes pcs and some crm_* > commands in pcs > >> > > > > and pacemaker-cli packages. IMO, tt'd be good to switch to > those commands > >> > > > > to manage the cluster. > >> > > > > >> > > > No idea. Gaëtan Trellu may know - he created those images. > >> > > > > >> > > > >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeremyfreudberg at gmail.com Mon Feb 24 19:37:08 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Mon, 24 Feb 2020 14:37:08 -0500 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: References: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> Message-ID: The hacky stuff in Sahara will definitely be removed when I get the chance. I had only added it as a temporary measure until users could transition to using a new section in Sahara's own configuration. (the "trustee" section in sahara.conf, much like the "trustee" section in heat.conf) From kennelson11 at gmail.com Mon Feb 24 19:44:08 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Mon, 24 Feb 2020 11:44:08 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 Message-ID: Hello! There is some debate about where the content of the actual docs should live. This grew out of the discussion about how to setup the includes so that the correct info shows up where we want it. 'Correct' in that last sentence being where the debate is. There are two main schools of thought: 1. All the content of the docs should live in the top level CONTRIBUTING.rst and the sphinx glue should live in doc/source/contributor/contributing.rst. A patch has already been merged for this approach[1]. There is also a patch to update the goal to match[2]. This approach keeps all the info in one place so that if things change in the future, its easier to keep things straight. 
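For concreteness, the "sphinx glue" in option 1 is normally little more than an include directive, which keeps the rendered contributor page and the top-level file from drifting apart; a minimal sketch of doc/source/contributor/contributing.rst under that approach (the relative depth depends on the repo layout):

    .. include:: ../../../CONTRIBUTING.rst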
All the content is also more easily discoverable when looking at a repo in GitHub (or similar) or checking out the code because it is at the top most level of the repo and not hidden in the docs sub directory. 2. The new patch[3] says that the content should live in /doc/source/contributor/contributing.rst and a skeleton with only the most important version should live in the top level CONTRIBUTING.rst. This approach argues that people don't want to read a wall of text when viewing the code on GitHub (or similar) or checking it out and looking at the top level CONTRIBUTING.rst and as such only the important details should be kept in that file. These important details being that we don't accept patches in github and where to report bugs (both of which are included in the larger format of the content). So what do people think? Which approach do they prefer? I am anxious to get this settled ASAP so that projects have time to complete the goal in time. Previous updates if you missed them[4][5]. Please feel free to add other ideas or make corrections to my summaries of the approaches if I missed things :) -Kendall (diablo_rojo) [1] Merged Template Patch (school of thought 1): https://review.opendev.org/#/c/708511/ [2] Goal Update Patch: https://review.opendev.org/#/c/707736/ [3] Current Template Patch (school of thought 2): https://review.opendev.org/#/c/708672/ [4] Update #1: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012364.html [5] Update #2: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012570.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Mon Feb 24 20:47:41 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 24 Feb 2020 21:47:41 +0100 Subject: [neutron] Team meeting 25.02.2020 cancellee Message-ID: Hi, I’m on PTO tomorrow and I will not be able to chair our team meeting. So lets cancel it. See You all on the meeting next week. One reminder, if You are planning to attend Vancouver PTG in June, please add Your name to the list in [1]. I need to give number of rough number of attendees by Sunday, March 2nd. Please add Your name there even if it’s not yet confirmed. [1] https://etherpad.openstack.org/p/neutron-victoria-ptg — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Mon Feb 24 20:56:27 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 24 Feb 2020 21:56:27 +0100 Subject: [ptl][release][stable][EM] Extended Maintenance - Rocky In-Reply-To: References: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> Message-ID: <79F22CA6-2830-4F21-BCBD-FE2B1C96E249@redhat.com> Hi, In Neutron we still have some patches to merge to stable/rocky branch before we will cut last release and go to EM. Will it be ok if we will do it before end of this week? > On 24 Feb 2020, at 15:44, Elõd Illés wrote: > > Hi Teams, PTLs, Stable Branch Liaisons, > > This is a reminder that Rocky Extended Maintenance transition is near. > > Please try to schedule a release if it's necessary, for the coming days. > (The estimated transition day was today, but it seems a couple of teams > are about to initiate a release.) > > Thanks, > > Előd > > > On 2020. 01. 27. 17:34, Elõd Illés wrote: >> Hi, >> >> In less than one month Rocky is planned to enter into Extended >> Maintenance phase [1] (estimated date: 2020-02-24). >> >> I have generated the list of *open* and *unreleased* changes in >> *stable/rocky* for the follows-policy tagged repositories [2]. 
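For teams double-checking their own repositories, a quick way to see what is still unreleased on the branch is to compare against the last tag reachable from it; a minimal sketch, run inside a project checkout:

    git fetch origin stable/rocky
    last=$(git describe --tags --abbrev=0 origin/stable/rocky)
    git log --oneline --no-merges "${last}..origin/stable/rocky"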
These >> lists could help the teams, who are planning to do a final release on >> Rocky before moving stable/rocky branches to Extended Maintenance. Feel >> free to edit them! >> >> * At the transition date the Release Team will tag the latest (Rocky) >> releases of repositories with *rocky-em* tag. >> * After the transition stable/rocky will be still open for bugfixes, but >> there won't be any official releases. >> >> NOTE: teams, please focus on wrapping up your libraries first if there >> is any concern about changes, in order to avoid any broken releases! >> >> Thanks, >> >> Előd >> >> [1] https://releases.openstack.org/ >> [2] https://etherpad.openstack.org/p/rocky-final-release-before-em >> > — Slawek Kaplonski Senior software engineer Red Hat From sean.mcginnis at gmx.com Mon Feb 24 21:00:15 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 24 Feb 2020 15:00:15 -0600 Subject: [ptl][release][stable][EM] Extended Maintenance - Rocky In-Reply-To: <79F22CA6-2830-4F21-BCBD-FE2B1C96E249@redhat.com> References: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> <79F22CA6-2830-4F21-BCBD-FE2B1C96E249@redhat.com> Message-ID: <1468c6c8-8f2b-b27f-1ba8-5bc6af9ce145@gmx.com> On 2/24/20 2:56 PM, Slawek Kaplonski wrote: > Hi, > > In Neutron we still have some patches to merge to stable/rocky branch before we will cut last release and go to EM. Will it be ok if we will do it before end of this week? That should be fine. As long as these are something in the works and just waiting for them to make it through the review and gating process, we should be able to wait a few days so they are included in a final release. Sean From johnsomor at gmail.com Mon Feb 24 22:25:30 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 24 Feb 2020 14:25:30 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: References: Message-ID: Hi Kendall, Personally I lean towards option #2, simply because I like the idea of potentially splitting out the contributing content into multiple files. This aligns to our current contributor guide which has pages for various ways you can contribute to OpenStack [1]. For example, I could envision a page in the Octavia documentation for new code contributors, a page for the PTL role (this goal), a page for provider driver contributors [2], etc. Really this content seems like more information than should be on a single page (coming from someone that has authored some long pages already, see above provider driver guide[2]). My $0.02. Thanks again for leading this effort. Michael [1] https://docs.openstack.org/contributors [2] https://docs.openstack.org/octavia/latest/contributor/guides/providers.html On Mon, Feb 24, 2020 at 11:47 AM Kendall Nelson wrote: > > Hello! > > There is some debate about where the content of the actual docs should live. This grew out of the discussion about how to setup the includes so that the correct info shows up where we want it. 'Correct' in that last sentence being where the debate is. There are two main schools of thought: > > 1. All the content of the docs should live in the top level CONTRIBUTING.rst and the sphinx glue should live in doc/source/contributor/contributing.rst. A patch has already been merged for this approach[1]. There is also a patch to update the goal to match[2]. This approach keeps all the info in one place so that if things change in the future, its easier to keep things straight. 
All the content is also more easily discoverable when looking at a repo in GitHub (or similar) or checking out the code because it is at the top most level of the repo and not hidden in the docs sub directory. > > 2. The new patch[3] says that the content should live in /doc/source/contributor/contributing.rst and a skeleton with only the most important version should live in the top level CONTRIBUTING.rst. This approach argues that people don't want to read a wall of text when viewing the code on GitHub (or similar) or checking it out and looking at the top level CONTRIBUTING.rst and as such only the important details should be kept in that file. These important details being that we don't accept patches in github and where to report bugs (both of which are included in the larger format of the content). > > So what do people think? Which approach do they prefer? > > I am anxious to get this settled ASAP so that projects have time to complete the goal in time. > > Previous updates if you missed them[4][5]. > > Please feel free to add other ideas or make corrections to my summaries of the approaches if I missed things :) > > -Kendall (diablo_rojo) > > [1] Merged Template Patch (school of thought 1): https://review.opendev.org/#/c/708511/ > [2] Goal Update Patch: https://review.opendev.org/#/c/707736/ > [3] Current Template Patch (school of thought 2): https://review.opendev.org/#/c/708672/ > [4] Update #1:http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012364.html > [5] Update #2: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012570.html From yangyi01 at inspur.com Tue Feb 25 01:30:12 2020 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Tue, 25 Feb 2020 01:30:12 +0000 Subject: =?utf-8?B?562U5aSNOiBbbmV1dHJvbl0gV2h5IG5ldHdvcmsgcGVyZm9ybWFuY2UgaXMg?= =?utf-8?B?ZXh0cmVtZWx5IGJhZCBhbmQgbGluZWFybHkgcmVsYXRlZCB3aXRoIG51bWJl?= =?utf-8?Q?r_of_VMs=3F?= In-Reply-To: <52E415A7-AF3E-4798-803C-391123729345@gmail.com> References: <52E415A7-AF3E-4798-803C-391123729345@gmail.com> Message-ID: <7073e8d849a14dedabd7dfcbdf100584@inspur.com> Satish, do you know how to get max age time by brctl cmd? Is it setageing or setmaxage to set max age? 发件人: Satish Patel [mailto:satish.txt at gmail.com] 发送时间: 2020年2月24日 7:10 收件人: Donny Davis 抄送: Yi Yang (杨燚)-云服务集团 ; openstack-discuss at lists.openstack.org 主题: Re: [neutron] Why network performance is extremely bad and linearly related with number of VMs? What is max age time in Linux bridge? If it’s zero then it won’t learn Mac and flush arp table. Sent from my iPhone On Feb 23, 2020, at 12:52 AM, Donny Davis > wrote:  So I am curious as to what your question is. Are you asking about ovs bridges learning MAC's of other compute nodes or why network performance is affected when you run more than one instance per node. I have not observed this behaviour in my experience. Could you tell us more about the configuration of your deployment? I understand you are currently using linux bridges that are connected to openvswitch bridges? Why not just use ovs? OVS can handle security groups. On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 > wrote: Hi, All Anybody has noticed network performance between VMs is extremely bad, it is basically linearly related with numbers of VMs in same compute node. 
In my case, if I launch one VM per compute node and run iperf3 tcp and udp, performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP packets, it can reach 180000 pps (packets per second), but if I launch two VMs per compute node (note: they are in the same subnet) and only run pps test case, that will be decrease to about 90000 pps, if I launch 3 VMs per compute node, that will be about 50000 pps, I tried to find out the root cause, other VMs in this subnet (they are in the same compute node as iperf3 client) can receive all the packets iperf3 client VM sent out although destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of iperf3 server VM in another compute node, by further check, I did find qemu instances of these VMs have higher CPU utilization and corresponding vhost kernel threads also also higher CPU utilization, to be importantly, I did find ovs was broadcasting these packets because all the ovs bridges didn’t learn this destination MAC. I tried this in Queens and Rocky, the same issue is there. By the way, we’re using linux bridge for security group, so VM tap interface is attached into linux bridge which is connected to br-int by veth pair. Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched many VMs: recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, used:0.000s, flags:SP., actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 ,19 $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 All the bridges can’t learn this MAC. My question is why ovs bridges can’t learn MACs of other compute nodes, is this common issue of all the Openstack versions? Is there any known existing way to fix it? Look forward to hearing your insights and solutions, thank you in advance and have a good day. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From yangyi01 at inspur.com Tue Feb 25 01:38:03 2020 From: yangyi01 at inspur.com (=?utf-8?B?WWkgWWFuZyAo5p2o54eaKS3kupHmnI3liqHpm4blm6I=?=) Date: Tue, 25 Feb 2020 01:38:03 +0000 Subject: =?utf-8?B?562U5aSNOiBbbmV1dHJvbl0gV2h5IG5ldHdvcmsgcGVyZm9ybWFuY2UgaXMg?= =?utf-8?B?ZXh0cmVtZWx5IGJhZCBhbmQgbGluZWFybHkgcmVsYXRlZCB3aXRoIG51bWJl?= =?utf-8?Q?r_of_VMs=3F?= In-Reply-To: References: Message-ID: <5edd39065b5f4ed8a599018e4005266a@inspur.com> We checked Openstack Rocky, it doesn’t have this issue, this Openstack is Queens, is it a bug of Queens? I know OVS can handle security group, but our current system used linux bridge for security group, we can’t change it. My issue is br-int can’t learn MACs of VMs in another compute node. 发件人: Donny Davis [mailto:donny at fortnebula.com] 发送时间: 2020年2月23日 13:50 收件人: Yi Yang (杨燚)-云服务集团 抄送: openstack-discuss at lists.openstack.org 主题: Re: [neutron] Why network performance is extremely bad and linearly related with number of VMs? So I am curious as to what your question is. 
Are you asking about ovs bridges learning MAC's of other compute nodes or why network performance is affected when you run more than one instance per node. I have not observed this behaviour in my experience. Could you tell us more about the configuration of your deployment? I understand you are currently using linux bridges that are connected to openvswitch bridges? Why not just use ovs? OVS can handle security groups. On Fri, Feb 21, 2020 at 9:48 AM Yi Yang (杨燚)-云服务集团 > wrote: Hi, All Anybody has noticed network performance between VMs is extremely bad, it is basically linearly related with numbers of VMs in same compute node. In my case, if I launch one VM per compute node and run iperf3 tcp and udp, performance is good, it is about 4Gbps and 1.7Gbps, for 16 bytes small UDP packets, it can reach 180000 pps (packets per second), but if I launch two VMs per compute node (note: they are in the same subnet) and only run pps test case, that will be decrease to about 90000 pps, if I launch 3 VMs per compute node, that will be about 50000 pps, I tried to find out the root cause, other VMs in this subnet (they are in the same compute node as iperf3 client) can receive all the packets iperf3 client VM sent out although destination MAC isn’t broadcast MAC or multicast MAC, actually it is MAC of iperf3 server VM in another compute node, by further check, I did find qemu instances of these VMs have higher CPU utilization and corresponding vhost kernel threads also also higher CPU utilization, to be importantly, I did find ovs was broadcasting these packets because all the ovs bridges didn’t learn this destination MAC. I tried this in Queens and Rocky, the same issue is there. By the way, we’re using linux bridge for security group, so VM tap interface is attached into linux bridge which is connected to br-int by veth pair. Here is output of “ovs-appctl dpif/dump-flows br-int” after I launched many VMs: recirc_id(0),in_port(12),eth(src=fa:16:3e:49:26:51,dst=fa:16:3e:a7:0a:3a),et h_type(0x0800),ipv4(tos=0/0x3,frag=no), packets:11012944, bytes:726983412, used:0.000s, flags:SP., actions:push_vlan(vid=1,pcp=0),2,set(tunnel(tun_id=0x49,src=10.3.2.17,dst=10 .3.2.16,ttl=64,tp_dst=4789,flags(df|key))),pop_vlan,9,8,11,13,14,15,16,17,18 ,19 $ sudo ovs-appctl fdb/show br-floating | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-tun | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-bond1 | grep fa:16:3e:49:26:51 $ sudo ovs-appctl fdb/show br-int | grep fa:16:3e:49:26:51 All the bridges can’t learn this MAC. My question is why ovs bridges can’t learn MACs of other compute nodes, is this common issue of all the Openstack versions? Is there any known existing way to fix it? Look forward to hearing your insights and solutions, thank you in advance and have a good day. -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3600 bytes Desc: not available URL: From rui.zang at yandex.com Tue Feb 25 02:24:15 2020 From: rui.zang at yandex.com (rui zang) Date: Tue, 25 Feb 2020 10:24:15 +0800 Subject: Cinder / Nova - Select cinder backend based on nova availability zone? In-Reply-To: References: Message-ID: <4384601582597455@iva8-89610aea0561.qloud-c.yandex.net> An HTML attachment was scrubbed... 
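On the ageing question raised earlier in this thread: with brctl the MAC table ageing time is the setageing knob, while setmaxage sets the STP max message age, which has nothing to do with address learning. A minimal sketch, with the bridge name purely illustrative:

    # current ageing time; the sysfs value is in hundredths of a second
    cat /sys/class/net/brqXXXXXXXX/bridge/ageing_time
    # set MAC ageing to 300 seconds
    brctl setageing brqXXXXXXXX 300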
URL: From licanwei_cn at 163.com Tue Feb 25 06:48:17 2020 From: licanwei_cn at 163.com (licanwei) Date: Tue, 25 Feb 2020 14:48:17 +0800 (GMT+08:00) Subject: [Watcher]We will have IRC meeting at 8:00 UTC tomorrow In-Reply-To: <131cfd81.29f2.1707b1234fe.Coremail.licanwei_cn@163.com> References: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> <7190d46f.2b59.1703dc876c2.Coremail.licanwei_cn@163.com> <131cfd81.29f2.1707b1234fe.Coremail.licanwei_cn@163.com> Message-ID: <68aa250e.2b18.1707b1a7a1b.Coremail.licanwei_cn@163.com> please update the meeting agenda if you have something want to discuss. Thanks | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 02/13/2020 17:01, licanwei wrote: Hi, next time i will send the mail one day in advance. hope to meet you next time! Thanks licanwei | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 02/12/2020 17:58, info at dantalion.nl wrote: Hello Lican, Sorry but I am unable to attend at such short notice as in my timezone I won't be awake unless it is for the Watcher meeting. To be able to adjust my transit schedule I will have to know a day in advance. Hope to be there next time. Kind regards, Corne Lukken On 2/12/20 3:31 AM, licanwei wrote: > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pramchan at yahoo.com Tue Feb 25 07:16:13 2020 From: pramchan at yahoo.com (prakash RAMCHANDRAN) Date: Tue, 25 Feb 2020 07:16:13 +0000 (UTC) Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> Message-ID: <1706963754.79322.1582614973577@mail.yahoo.com> Mark, Glad you pointed to right code. Reviewed and stand corrected. My mis-underdtanding was I considered 20919.01 as first release and 2019.02 as second release of the year. However based on your comments and reference I understand that it is year.month of release , thus 2019.11 includes 'usuri' and previous 3 releases as listed pointed by you. - "os_trademark_approval": { - "target_approval": "2019.11", - "replaces": "2019.06", - "releases": ["rocky", "stein", "train", "ussuri"], - "status": "approved" - } - }, - That clears that I should have asked for 2019.11. Few more questions on Tempedt tests. I read some where that we have about 1500 Tempest tests overall. Is that correct? The interop code lines have gone down from 3836 lines to 3232 in train to usuri. Looks contrary to growth, any comments? Question then is  60 compute and 40 storage lines I see in test cases, do we have stats for Tempest tests what's the distribution of 1500 tests across Platform, compute, storage etc. Where and how can I get that. information as documented report. Based on above should we expect decrease or increase for say 2020.05 Vancouver release? How does one certify a kubernetes cluster running openstak modules, one module per docker container in a kubrrnetes  cluster using tempest say like in Airship Control Plane on k8s worker node, which is a OpenStack over kubernetes cluster. Is this possible and if so what test we need to modify to test and certify a Containerized OpenStak in Airship as OpenStack Platform? 
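One note on the containerised case: the interop tests run black-box against the cloud's public API endpoints, so a control plane hosted on Kubernetes (Airship or otherwise) is exercised exactly like any other deployment as long as the endpoints in tempest.conf are reachable; the tests themselves should not need modification. A minimal sketch with refstack-client, where the test-list URL is illustrative and should match the guideline and target actually being submitted:

    refstack-client test -c /etc/tempest/tempest.conf -v \
        --test-list "https://refstack.openstack.org/api/v1/guidelines/2019.11/tests?target=platform&type=required"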
Can we even certify if for say 2019.11? This should open up exciting possibilities if practical to extend OpeStack powered platform to Airship. Like to hear anyone who has insight to educate us on that. ThanksPrakash Sent from Yahoo Mail on Android On Mon, Feb 24, 2020 at 6:09 AM, Mark Voelker wrote: Hi Prakash, I am curious to find out if any Distribution or Products based on Openstack  Train or Usuri are seeking the latest certifications based on 2019.02. Hm, there actually isn’t a 2019.02 guideline--were you perhaps referring to 2019.06 or 2019.11?  2019.06 does cover Train but not Usuri [1], 2019.11 covers both [2].  As an FYI, the OpenStack Marketplace does list which guideline a particular product was most recently tested against (refer to https://www.openstack.org/marketplace/distros/ for example, and look for the green “TESTED” checkmark and accompanying guideline version), though this obviously doesn’t tell you what testing might be currently in flight. [1] https://opendev.org/openstack/interop/src/branch/master/2019.06.json#L75[2] https://opendev.org/openstack/interop/src/branch/master/2019.11.json#L75 At Your Service, Mark T. Voelker On Feb 22, 2020, at 5:04 AM, prakash RAMCHANDRAN wrote: Hi all, I am curious to find out if any Distribution or Products based on Openstack  Train or Usuri are seeking the latest certifications based on 2019.02. Similarly does any Hardware Driver of Software application seeking OpenStack compatibility Logo? Finally does anyone think that Open Infra Distro like Airship or StarlingX should promote Open Infra Airship or Open Infra  StarlingX powered as a new way to promote eco system surrounding them similar to OpenStack compatible drivers and software. Will then Argo, customize,Metal3.io or Ironic be qualified as Open Infra Airship compatible? If so how tempest can help in testing the above comments? Refer to this market place below as how Distos and Products leverage OpenStack logos and branding programs. https://www.openstack.org/marketplace/distros/ Discussions and feedback are welcome. A healthy debate as how k8s modules used in Open Infra can be certified will be a good start. ThanksPrakash Sent from Yahoo Mail on Android -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Tue Feb 25 07:58:56 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 25 Feb 2020 08:58:56 +0100 Subject: [tc] Presence at OpenDev Vancouver in June Message-ID: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Hello, I am filling the survey for the room booking during the OpenDev summit in Vancouver. Do you already know if you will attend, this way we have a first guesstimate of the attendance? Can you answer it here by Thursday, please? FYI: I thought of booking for min 1.5 days, maximum time 2 days, as last time we finished the marathon in two days. For the room format, I like the roundtable (was wondering to add extra seats next to the roundtable for people joining), however it's only around 10 persons per table, which won't be enough for us all if everyone attends. 
Regards, JP From jean-philippe at evrard.me Tue Feb 25 08:20:08 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 25 Feb 2020 09:20:08 +0100 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: References: Message-ID: <126679609026faf1ab64f9f9a807247896c5d938.camel@evrard.me> On Mon, 2020-02-24 at 14:25 -0800, Michael Johnson wrote: > For example, I could envision a page in the Octavia documentation for > new code contributors, a page for the PTL role (this goal), a page > for > provider driver contributors [2], etc. I think one of the ideas was to standardize, so maybe we shouldn't eventually diverge too much? (Which will happen anyway, but maybe we should keep a mental note to try to remain standard, and behave all the same way...) > Really this content seems like more information than should be on a > single page (coming from someone that has authored some long pages > already, see above provider driver guide[2]). Agreed. > My $0.02. Thanks again for leading this effort. Hey, a cent is worth a lot apparently, thanks for giving two of those! ;) Thanks indeed for the work Kendall and crew. > On Mon, Feb 24, 2020 at 11:47 AM Kendall Nelson < > kennelson11 at gmail.com> wrote: > > So what do people think? Which approach do they prefer? I prefer option 2, but I would say that I also trust you to come with the best solution with your experience. Waiting for review on the alternative patch chain. Regards, JP From Dong.Ding at dell.com Tue Feb 25 02:14:13 2020 From: Dong.Ding at dell.com (Dong.Ding at dell.com) Date: Tue, 25 Feb 2020 02:14:13 +0000 Subject: [manila] share group replication spike/questions Message-ID: Hi, guys, As we talked about the topic in a virtual PTG few months ago. https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (Support promoting several shares in group (DELL EMC: dingdong) I'm trying to write a manila-spec for it. It's my first experience to implement such feature in framework. I need to double check with you something, and hope you can give me some guides like: 1. Where is the extra-spec defined for group/group type, it's in Manila repo, right? (like manila.db.sqlalchemy.models....) 2. The command cli should be implemented for 'python-manilaclinet' repo, right? (I have never touched this repo before) 3. Where is the rest-api should be implemented? 4. And more tips you have? like any other related project should be changed? Just list what I know, and more details questions will be raised when implementing, I think. FYI Thanks, Ding Dong -------------- next part -------------- An HTML attachment was scrubbed... URL: From tony.pearce at cinglevue.com Tue Feb 25 08:55:54 2020 From: tony.pearce at cinglevue.com (Tony Pearce) Date: Tue, 25 Feb 2020 16:55:54 +0800 Subject: Cinder / Nova - Select cinder backend based on nova availability zone? In-Reply-To: <4384601582597455@iva8-89610aea0561.qloud-c.yandex.net> References: <4384601582597455@iva8-89610aea0561.qloud-c.yandex.net> Message-ID: Hi Rui, thank you for the reply. I did find that link previously but I was unable to get it working. Because I am unable to get it working, I tried to use the `cinder_img_volume_type` in the glance image metadata to select a backend (which is also not working - bug: https://bugs.launchpad.net/cinder/+bug/1864616) When I try and use the RESKEY:availability_zones in the volume type it does not work for me and what is happening is that the volume type of `__DEFAULT__` is being selected. 
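For what it is worth, two separate things have to line up here. For the zone: each backend stanza advertises its zone, and the usual lever is [cinder]/cross_az_attach = False in nova.conf, which makes nova create the boot volume in the instance's AZ; with the default of True the create request carries no AZ at all. For the type: RESKEY:availability_zones only restricts which zones a type may be used in, it does not pick a type, so when the request names no type (and the image-metadata path is broken, per the bug above) the configured default type is what you get, which matches the __DEFAULT__ behaviour described. A minimal sketch, with all names illustrative:

    # cinder.conf
    [backend1]
    volume_backend_name = backend1
    backend_availability_zone = host1az

    # volume type pinned to that backend and zone
    openstack volume type create az-host1
    openstack volume type set --property volume_backend_name=backend1 az-host1
    openstack volume type set --property RESKEY:availability_zones=host1az az-host1

    # nova.conf
    [cinder]
    cross_az_attach = False

The caveat is that the nova AZ names then need matching AZ names on the cinder side, otherwise the volume create fails outright.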
I peformed these steps to test, are these steps correct? 1. create the 2 cinder backends 2. create 2 volume types (one for each backend). 3. add the RESKEY:availability_zones extra specs to the volume type, like: RESKEY:availability_zones="host1" for the first volume type and RESKEY:availability_zones="host2" for the 2nd volume type 4. create 2 host aggrigates with AZ "host1" and "host2" for each host1 / host2 and assign a host to host1 5. try to launch an instance and choose the availability zone "host1" from the dropdown 6. result = instance goes error state volume goes into in-use volume shows "host" as storage at backend and the "backend" is incorrectly chosen In addition to that, the "volume type" selected (which I expected to be host1) is showing as __DEFAUT__ At this point I am not sure if I am doing something incorrect and this is the reason why it is not working, or, I have correct config and this is a bug experience. I'd be grateful if you or anyone else that reads this could confirm. Thanks and regards *Tony Pearce* | *Senior Network Engineer / Infrastructure Lead**Cinglevue International * Email: tony.pearce at cinglevue.com Web: http://www.cinglevue.com *Australia* 1 Walsh Loop, Joondalup, WA 6027 Australia. Direct: +61 8 6202 0036 | Main: +61 8 6202 0024 Note: This email and all attachments are the sole property of Cinglevue International Pty Ltd. (or any of its subsidiary entities), and the information contained herein must be considered confidential, unless specified otherwise. If you are not the intended recipient, you must not use or forward the information contained in these documents. If you have received this message in error, please delete the email and notify the sender. On Tue, 25 Feb 2020 at 10:24, rui zang wrote: > I don't think there is auto-alignment between nova az and cinder az. > Probably you may want to look at this > https://docs.openstack.org/cinder/rocky/admin/blockstorage-availability-zone-type.html > Cinder AZ can be associated with volume types. When you create a VM, you > can specify the VM nova az (eg. host1), volume type (cinder az, eg, also > host1), thus by aligning nova az and cinder az manually, maybe your goal > can be achieved. I believe there are also other similar configuration ways > to make it work like this. > > Thanks, > Zang, Rui > > 24.02.2020, 16:53, "Tony Pearce" : > > > > Apologies in advance if this seems trivial but I am looking for some > direction on this and I may have found a bug while testing also. > > > Some background - I have 2 physical hosts which I am testing with (host1, > host2) and I have 2 separate cinder backends (backend1, backend2). Backend1 > can only be utilised by host1. Same for backend2 - it can only be utilised > by host2. So they are paired together like: host1:backend1 host2:backend2 > > > So I wanted to select a Cinder storage back-end based on nova availability > zone and to do this when creating an instance through horizon (not creating > a volume directly). Also I wanted to avoid the use of metadata input on > each instance create or by using metadata from images (such as > cinder_img_volume_type) [2] . Because I can foresee a necessity to then > have a number of images which reference each AZ or backend individually. > > > > > - Is it possible to select a backend based on nova AZ? If so, could > anyone share any resources to me that could help me understand how to > achieve it? 
> > > > Because I failed at achieving the above, I then decided to use one way > which had worked for me in the past, which was to use the image metadata > "cinder_img_volume_type". However I find that this is not working. The > “default” volume type is selected (if cinder.conf has it) or if no default, > then `__DEFAULT__` is being selected. The link at [2] states that first, a > volume type is used based on the volume type selected and if not chosen/set > then the 2nd method is "cinder_img_volume_type" from the image metadata and > then the 3rd and final is the default from cinder.conf. > > > > I have tested with fresh deployment using Kayobe as well as RDO’s > packstack. > > Openstack version is *Train* > > > Steps to reproduce: > 1. Install packstack > > 2. Update cinder.conf with enabled_backends and the [backend] > > 3. Add the volume type to reference the backend (for reference, I call > this volume type `number-1`) > > 4. Upload an image and add metadata `cinder_img_volume_type` and the name > as mentioned in step 3: number-1 > > 5. Try and create an instance using horizon. Source = image and create new > volume > > 6. Result = volume type / backend as chosen in the image metadata is not > used and instance goes into error status. > > > After fresh-deploying the RDO Packstack, I enabled debug logs and tested > again. In the cinder-api.log I see “"volume_type": null,” and then the next > debug log immediately after logged as “Create volume request body:” has > “volume_type': None”. > > > I was searching for a list of the supported image metadata, in case it had > changed but the pages seem empty one rocky/stein/train [3] or not yet > updated. > > > Selecting backend based on nova AZ:: > > > I was searching how to achieve this and I came across this video on the > subject of AZs [1]. Although it seems only in the context of creating > volumes (not with creating instances with volume from an image, for > example). > > I have tried creating a host aggregate in nova, with AZ name `host1az`. > I've also created a backend in Cinder (cinder.conf) with > `backend_availability_zone = host1az`. But this does not appear to achieve > the desired result, either and the cinder api logs are showing > “"availability_zone": null” during the volume create part of the launch > instance from Horizon. > > I also tried setting RESKEY [3] in the volume type, but again similar > situation seen; although I dont think this option is the correct context > for what I am attempting. > > > Could anyone please nudge me in the right direction on this? Any pointers > appreciated at this point. Thanks in advance. > > References: > > [1] *https://www.youtube.com/watch?v=a5332_Ew9JA* > > > [2] *https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html* > > > [3] > *https://docs.openstack.org/cinder/train/contributor/api/cinder.api.schemas.volume_image_metadata.html* > > > > [4] > *https://docs.openstack.org/cinder/rocky/admin/blockstorage-availability-zone-type.html* > > > > > Regards, > > *Tony Pearce* > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gr at ham.ie Tue Feb 25 09:05:30 2020 From: gr at ham.ie (Graham Hayes) Date: Tue, 25 Feb 2020 09:05:30 +0000 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <5987e0cc-14b4-4c79-ba81-715a23858e65@ham.ie> On 25/02/2020 07:58, Jean-Philippe Evrard wrote: > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. > > Regards, > JP > > I should be attending Vancouver, so sign me up! - Graham From rico.lin.guanyu at gmail.com Tue Feb 25 09:08:51 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 25 Feb 2020 17:08:51 +0800 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: On Tue, Feb 25, 2020 at 4:05 PM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? I'm working on finding budget before I can confirm with you and I can only get the answer one month before the event. > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. I think 2 day sounds ideal > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. The same old question, will it possible if we adding chairs if we need it? > > Regards, > JP > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Tue Feb 25 09:15:49 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Tue, 25 Feb 2020 10:15:49 +0100 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <7a8663ed81967370548cd961027d00fc99bf5b9a.camel@evrard.me> On Tue, 2020-02-25 at 08:58 +0100, Jean-Philippe Evrard wrote: > Hello, > > I am filling the survey for the room booking during the OpenDev > summit > in Vancouver. Do you already know if you will attend, this way we > have > a first guesstimate of the attendance? Allow me to clarify my ideas here... (Sorry for any confusion created by my poor english wording this morning). I am not asking here for a committment to go, but instead I am asking to know who of you (anyone, not only tc members!) intend to join us or tc-members who already know they won't be able to join. That's what I meant by "a guesstimate" :) Sorry for the inconvenience... 
Regards, Jean-Philippe Evrard (evrardjp) From rico.lin.guanyu at gmail.com Tue Feb 25 09:34:23 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 25 Feb 2020 17:34:23 +0800 Subject: [heat] Presence at OpenDev Vancouver in June Message-ID: Hi all, As the deadline for OpenDev Vancouver team survey is near (before this weekend). We need to decide if we would like to reserved a room or not. And since I'm not that positively sure I will get the budget to travel this time. We need to make sure if anyone is going so we can have people to be the moderator. Note that I will not occupy a room if no others are able to be there. Or at least *might* be there. -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Feb 25 09:41:00 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 25 Feb 2020 17:41:00 +0800 Subject: [automation-sig][self-healing-sig][auto-scaling-sig] Presence at OpenDev Vancouver in June Message-ID: Hi all, As the deadline for OpenDev Vancouver team survey is near (before this weekend). We need to decide if we would like to reserve a room or not. And since I'm not that positively sure I will get the budget to travel this time. We need to make sure if anyone is going so we can have people to be the moderator. Note that I will not occupy a room if no others are able to be there. *might* going is totally fine as long as we can have people there as moderator -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From rico.lin.guanyu at gmail.com Tue Feb 25 09:42:00 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Tue, 25 Feb 2020 17:42:00 +0800 Subject: [Multi-Arch-SIG] Presence at OpenDev Vancouver in June Message-ID: Hi all, As the deadline for OpenDev Vancouver team survey is near (before this weekend). We need to decide if we would like to reserve a room or not. And since I'm not that positively sure I will get the budget to travel this time. We need to make sure if anyone is going so we can have people to be the moderator. Note that I will not occupy a room if no others are able to be there. *might* going is totally fine as long as we can have people there as moderator -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 25 10:02:30 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 11:02:30 +0100 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <1706963754.79322.1582614973577@mail.yahoo.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> <1706963754.79322.1582614973577@mail.yahoo.com> Message-ID: prakash RAMCHANDRAN wrote: > [...] > The interop code lines have gone down from 3836 lines to 3232 in train > to usuri. > > Looks contrary to growth, any comments? If you look at the difference, most of it is due to the removal of the volumes-v2 API which was deprecated in favor of volumes-v3. 2019.11 is the guidelines where that old version is finally no longer required. 
-- Thierry Carrez (ttx) From thierry at openstack.org Tue Feb 25 10:04:33 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 11:04:33 +0100 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: Jean-Philippe Evrard wrote: > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. I shall be around for most of it. -- Thierry From thierry at openstack.org Tue Feb 25 10:23:56 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 11:23:56 +0100 Subject: [all][tc][uc] Uniting the TC and the UC Message-ID: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> Hi all, When our current project governance was established in 2012, we defined two separate bodies. The Technical Committee represented developers / code contributors to the open source project(s), while the User Committee represented the operators running the resulting software, as well as users of the APIs. That setup served us well in those early days. The focus on the upstream side was strongly around *development* of code, we did not have that many users, and even less users directly involved in upstream development. A separate User Committee resulted in the formation of an engaged community of users, and ensured that our community in general (and our events in particular) took the needs of operators into account. Fast-forward to 2020: the upstream focus is more on maintenance. We now have a lot of users, and thanks to the efforts of the UC they are increasingly directly involved in the open source project development, with several operators leading upstream project teams directly. There is now limited UC-specific activity, and as such it is struggling to get volunteers to step up for it: nobody nominated themselves for the last round of election. Keeping two separate bodies to represent them maintains the illusion that devs and ops are different breeds, and sometimes discourages operators from running for the TC and more directly influence the shape of the software. If anything, we need more ops representation at the TC, and seeing people like mnaser being elected at (and chairing) the TC was a great experience. I discussed the situation with the current UC members, and we decided it's time to consider having a single "community" and a single body to represent it. That body would tackle the traditional TC tasks (open source project governance and stewardship) but also the UC tasks (user survey, ambassador program...). That body would be elected by contributors, in a large sense that includes the AUC definition. I feel like that would help remove the artificial barriers discouraging users to get more directly involved with software development and maintenance. 
There are multiple ways to achieve that, in increasing order of complexity and bylaws changes needed: 1- No bylaws change As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. 2- Minimal bylaws change Same, but all mentions of the UC would be removed from the bylaws (affecting sections 4.12, 4.14, and Appendix 10). This would require a simple vote by the Board of Directors, and would avoid some confusion and having to formally designate a subset of the TC to respect the letter of the bylaws. 3- Maximal bylaws change Create a new body ("steering committee" for example) replacing the TC and the UC. This would require changing sections 4.1, 4.12, 4.13, 4.14, 4.20, 7.1.b, 7.4, 9.1, 9.2.d, and Appendixes 4 and 10. Section 4.13, 7.1 and 7.4 being heavily protected in the bylaws, they require special votes by each class of Foundation members, probably a multi-year effort. A lot of documentation would also need to be changed in case the TC is renamed. I personally don't think the benefit of the change offsets the large cost of option (3). Given the current vacancy in the UC makeup I would recommend we pursue option 1 immediately (starting at the next TC election in April), and propose minimal bylaws change to the board in parallel (option 2). Thoughts ? -- Thierry Carrez From prash.ing.pucsd at gmail.com Tue Feb 25 10:26:56 2020 From: prash.ing.pucsd at gmail.com (prashant) Date: Tue, 25 Feb 2020 15:56:56 +0530 Subject: [openstack-helm][Horizon] After Using a customize image of horizon container get fail. Message-ID: Hi All, We have build the horizon image via kolla and trying to deploy using openstack-helm chart. while doing it could able to pull the image but after start the container it get crashed. below logs we are getting: ------------------------------ + exec apache2 -DFOREGROUND AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 172.21.237.188. Set the 'ServerName' directive globally to suppress this message no listening sockets available, shutting down AH00015: Unable to open logs ----------------------------------- is any change need to be done for customize image for related to port? Thanks, Prashant -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 25 10:53:16 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 11:53:16 +0100 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: References: Message-ID: <98a8ffb8-e2ba-33eb-eae9-ad30db843c5c@openstack.org> Kendall Nelson wrote: > There is some debate about where the content of the actual docs should > live. This grew out of the discussion about how to setup the includes so > that the correct info shows up where we want it. 'Correct' in that last > sentence being where the debate is. There are two main schools of thought: > >    1. All the content of the docs should live in the top level > CONTRIBUTING.rst and the sphinx glue should live in > doc/source/contributor/contributing.rst. A patch has already been merged > for this approach[1]. 
There is also a patch to update the goal to > match[2]. This approach keeps all the info in one place so that if > things change in the future, its easier to keep things straight. All the > content is also more easily discoverable when looking at a repo in > GitHub (or similar) or checking out the code because it is at the top > most level of the repo and not hidden in the docs sub directory. > >     2.  The new patch[3] says that the content should live in > /doc/source/contributor/contributing.rst and a skeleton with only the > most important version should live in the top level CONTRIBUTING.rst. > This approach argues that people don't want to read a wall of text when > viewing the code on GitHub (or similar) or checking it out and looking > at the top level CONTRIBUTING.rst and as such only the important details > should be kept in that file. These important details being that we don't > accept patches in github and where to report bugs (both of which are > included in the larger format of the content). > > So what do people think? Which approach do they prefer? I personally prefer a single page approach (school 1). That said, I think we need to be very careful to keep this page to a minimum, to avoid the "wall of text" effect. In particular, I think: - "Communication" / "Contacting the Core team" could be collapsed into a single section - "Task tracking" / "Reporting a bug" could be collapsed into a single section - "Project team lead duties" sounds a bit overkill for a first-contact doc, can probably be documented elsewhere. - Sections could be reordered in order of likely involvement: How to talk with the team, So you want to report a bug, So you want to propose a change, So you want to propose a new feature. > I am anxious to get this settled ASAP so that projects have time to > complete the goal in time. Agree it would be good to come up with a consensus on this ASAP. Maybe the TC can settle it at next week meeting. -- Thierry Carrez (ttx) From sshnaidm at redhat.com Tue Feb 25 12:52:38 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Tue, 25 Feb 2020 14:52:38 +0200 Subject: [ansible-sig][openstack-ansible][tripleo] Making Openstack Ansible modules meeting biweekly In-Reply-To: References: Message-ID: Hi, all I think it's worth to make our meeting biweekly in this point, as we don't have many urgent issues to discuss. So today's meeting is canceled and the next one will be held in 03 March. Of course nothing blocks us to discuss anything at any time on #openstack-ansible-sig channel. Feel free to start talks there if you have any questions or topics to chat about. The topics in progress are: 1. Getting us a session on PTG 2. Pushing of first collection to Ansible Galaxy 3. Continue working on CI for modules and tests improvement. 4. Reviewing incoming code from contributors. Thanks On Mon, Feb 10, 2020 at 6:47 PM Sagi Shnaidman wrote: > Hi, all > > according to our poll about meeting time[1] the winner is: Tuesday 15.00 - > 16.00 UTC (3.00 PM - 4.00 PM UTC) > Please be aware that we meet in different IRC channel - > #openstack-ansible-sig > Thanks for voting and waiting for you tomorrow in #openstack-ansible-sig > at 15.00 UTC > > [1] https://xoyondo.com/dp/ITMGRZSvaZaONcz > -- > Best regards > Sagi Shnaidman > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nate.johnston at redhat.com Tue Feb 25 13:23:49 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Tue, 25 Feb 2020 08:23:49 -0500 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <20200225132349.zictd6japyr5ngzp@firewall> On Tue, Feb 25, 2020 at 08:58:56AM +0100, Jean-Philippe Evrard wrote: > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? I fully expect to be there, and will be splitting my time between TC and neutron activities. Thanks, Nate > Can you answer it here by Thursday, please? > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. > > Regards, > JP > > From bcafarel at redhat.com Tue Feb 25 14:01:25 2020 From: bcafarel at redhat.com (Bernard Cafarelli) Date: Tue, 25 Feb 2020 15:01:25 +0100 Subject: [ptl][release][stable][EM] Extended Maintenance - Rocky In-Reply-To: <1468c6c8-8f2b-b27f-1ba8-5bc6af9ce145@gmx.com> References: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> <79F22CA6-2830-4F21-BCBD-FE2B1C96E249@redhat.com> <1468c6c8-8f2b-b27f-1ba8-5bc6af9ce145@gmx.com> Message-ID: On Mon, 24 Feb 2020 at 22:02, Sean McGinnis wrote: > On 2/24/20 2:56 PM, Slawek Kaplonski wrote: > > Hi, > > > > In Neutron we still have some patches to merge to stable/rocky branch > before we will cut last release and go to EM. Will it be ok if we will do > it before end of this week? > > That should be fine. As long as these are something in the works and > just waiting for them to make it through the review and gating process, > we should be able to wait a few days so they are included in a final > release. > Indeed, we are down to 2 backports [0], both ready to get in, just waiting for a gate fix [2] > > Sean > > [0] https://review.opendev.org/705400 and https://review.opendev.org/705199 [1] https://bugs.launchpad.net/neutron/+bug/1864471 -- Bernard Cafarelli -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Tue Feb 25 14:35:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 Feb 2020 08:35:28 -0600 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <1707cc63017.102c19281117824.4592113278612517363@ghanshyammann.com> ---- On Tue, 25 Feb 2020 01:58:56 -0600 Jean-Philippe Evrard wrote ---- > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. I will join. But seems 2 days might be a little long but depends on topics we want to discuss. 
Along with TC room, I need to be at QA, Nova and a few more projects at runtime so hoping for no conflict in schedule. -gmann > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. > > Regards, > JP > > >
From jungleboyj at gmail.com Tue Feb 25 14:38:38 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Tue, 25 Feb 2020 08:38:38 -0600 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <2efe7d4d-be90-3f2f-f8d1-f413113adbe8@gmail.com> On 2/25/2020 1:58 AM, Jean-Philippe Evrard wrote: > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. > > Regards, > JP > JP, I plan to be there so please count me in. I like the round table format with people around us if we can make that work. Thanks! Jay
From mnaser at vexxhost.com Tue Feb 25 14:57:12 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 25 Feb 2020 15:57:12 +0100 Subject: [openstack-ansible] ptg room Message-ID: Hi everyone, I'd like to ask who is planning to be at the OpenDev event in Vancouver to gauge the amount of interest around having a room for OpenStack-Ansible at the PTG. Thanks, Mohammed -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com
From gmann at ghanshyammann.com Tue Feb 25 15:00:42 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 Feb 2020 09:00:42 -0600 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <1706963754.79322.1582614973577@mail.yahoo.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> <1706963754.79322.1582614973577@mail.yahoo.com> Message-ID: <1707cdd49b2.ddd44e5d119585.1167564760858241265@ghanshyammann.com> ---- On Tue, 25 Feb 2020 01:16:13 -0600 prakash RAMCHANDRAN wrote ---- > Mark, > Glad you pointed to the right code. Reviewed and stand corrected. My misunderstanding was I considered 2019.01 as first release and 2019.02 as second release of the year. However based on your comments and reference I understand that it is year.month of release, thus 2019.11 includes 'ussuri' and previous 3 releases as listed pointed by you. > "os_trademark_approval": { > "target_approval": "2019.11", > "replaces": "2019.06", > "releases": ["rocky", "stein", "train", "ussuri"], > "status": "approved" > } > }, > > > That clears that I should have asked for 2019.11. > Few more questions on Tempest tests. > I read somewhere that we have about 1500 Tempest tests overall. Is that correct? Yeah, it might be a little more or less but around 1500 in-tree tests in Tempest.
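[Editor's note: for anyone who wants to sanity-check that figure, a rough sketch assuming a Tempest git checkout with stestr installed, run from the repository root; Tempest plugins installed in the same environment would add to these counts.]

    stestr list > all-tests.txt                    # one test id per line
    wc -l all-tests.txt                            # total, roughly 1500 in-tree
    grep -c 'tempest.api.compute' all-tests.txt    # rough per-service breakdown
    grep -c 'tempest.api.volume' all-tests.txt
    grep -c 'tempest.scenario' all-tests.txt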
> The interop code lines have gone down from 3836 lines to 3232 from train to ussuri. > Looks contrary to growth, any comments? > Question then is 60 compute and 40 storage lines I see in test cases, do we have stats for Tempest tests on what's the distribution of 1500 tests across Platform, compute, storage etc. Where and how can I get that information as a documented report. Those should be counted from the interop guidelines, where you have the mapping of capabilities to test cases. Most of the capabilities have one test verifying them; a few have more than one test. For example, the "compute-flavors-list" capability is verified by two tests[2]. This way you can count and identify the exact number of tests per compute, storage etc. If you would like to know about the Tempest test categorization, you can find it from the directory structure. We have structured the tests into service-wise directories; for example, all compute tests are under tempest/api/compute [2]. > Based on above should we expect decrease or increase for say 2020.05 Vancouver release? > How does one certify a kubernetes cluster running openstack modules, one module per docker container in a kubernetes cluster using tempest, say like in Airship Control Plane on a k8s worker node, which is an OpenStack over kubernetes cluster. > Is this possible and if so what test we need to modify to test and certify a Containerized OpenStack in Airship as OpenStack Platform? It should be verified the same way via Tempest. Tempest does not rely on how OpenStack is deployed; it interacts via the public interface (where interop needs only user-facing APIs, excluding admin APIs) of each service, which should be accessible on a k8s cluster also. Tests are not required to be modified in this case. NOTE: we can always extend Tempest (writing new tests for the current 6 services) for tests required by interop capabilities, either new ones or improved verification of existing ones. I remember the discussion on covering API microversions. We keep adding new tests to cover new microversions where we need integration tests, but we can add API tests also if interop requires those. > Can we even certify it for say 2019.11? > This should open up exciting possibilities if practical to extend the OpenStack Powered platform to Airship. > Like to hear from anyone who has insight to educate us on that. > Thanks Prakash > Sent from Yahoo Mail on Android > On Mon, Feb 24, 2020 at 6:09 AM, Mark Voelker wrote: Hi Prakash, > > I am curious to find out if any Distribution or Products based on OpenStack Train or Ussuri are seeking the latest certifications based on 2019.02. > Hm, there actually isn’t a 2019.02 guideline--were you perhaps referring to 2019.06 or 2019.11? 2019.06 does cover Train but not Ussuri [1], 2019.11 covers both [2]. As an FYI, the OpenStack Marketplace does list which guideline a particular product was most recently tested against (refer to https://www.openstack.org/marketplace/distros/ for example, and look for the green “TESTED” checkmark and accompanying guideline version), though this obviously doesn’t tell you what testing might be currently in flight. > [1] https://opendev.org/openstack/interop/src/branch/master/2019.06.json#L75 [2] https://opendev.org/openstack/interop/src/branch/master/2019.11.json#L75 > At Your Service, > Mark T. Voelker > > > On Feb 22, 2020, at 5:04 AM, prakash RAMCHANDRAN wrote: > Hi all, > I am curious to find out if any Distribution or Products based on OpenStack Train or Ussuri are seeking the latest certifications based on 2019.02.
> Similarly does any Hardware Driver of Software application seeking OpenStack compatibility Logo? > Finally does anyone think that Open Infra Distro like Airship or StarlingX should promote Open Infra Airship or Open Infra StarlingX powered as a new way to promote eco system surrounding them similar to OpenStack compatible drivers and software. > Will then Argo, customize,Metal3.io or Ironic be qualified as Open Infra Airship compatible? > If so how tempest can help in testing the above comments? > Refer to this market place below as how Distos and Products leverage OpenStack logos and branding programs. > https://www.openstack.org/marketplace/distros/ > Discussions and feedback are welcome. A healthy debate as how k8s modules used in Open Infra can be certified will be a good start. > ThanksPrakash > > Sent from Yahoo Mail on Android [1] https://opendev.org/openstack/interop/src/commit/8f2e82b7db54cfff9315e5647bd2ba3dd6aacaad/2019.11.json#L260-L281 [2] https://opendev.org/openstack/tempest/src/branch/master/tempest/api/compute -gmann > From amy at demarco.com Tue Feb 25 15:13:06 2020 From: amy at demarco.com (Amy) Date: Tue, 25 Feb 2020 09:13:06 -0600 Subject: [openstack-ansible] ptg room In-Reply-To: References: Message-ID: I plan to be there. Amy (spotz) > On Feb 25, 2020, at 8:59 AM, Mohammed Naser wrote: > > Hi everyone, > > I'd like to ask who is planning to be at the OpenDev event in > Vancouver to gauge the amount if interest around having a room for > OpenStack-Ansible at the PTG. > > Thanks, > Mohammed > > -- > Mohammed Naser — vexxhost > ----------------------------------------------------- > D. 514-316-8872 > D. 800-910-1726 ext. 200 > E. mnaser at vexxhost.com > W. https://vexxhost.com > From mnaser at vexxhost.com Tue Feb 25 15:19:48 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 25 Feb 2020 16:19:48 +0100 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <2efe7d4d-be90-3f2f-f8d1-f413113adbe8@gmail.com> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> <2efe7d4d-be90-3f2f-f8d1-f413113adbe8@gmail.com> Message-ID: I'll be around. On Tue, Feb 25, 2020 at 3:42 PM Jay Bryant wrote: > > > On 2/25/2020 1:58 AM, Jean-Philippe Evrard wrote: > > Hello, > > > > I am filling the survey for the room booking during the OpenDev summit > > in Vancouver. Do you already know if you will attend, this way we have > > a first guesstimate of the attendance? > > > > Can you answer it here by Thursday, please? > > > > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > > last time we finished the marathon in two days. > > > > For the room format, I like the roundtable (was wondering to add extra > > seats next to the roundtable for people joining), however it's only > > around 10 persons per table, which won't be enough for us all if > > everyone attends. > > > > Regards, > > JP > > > JP, > > I plan to be there so please count me. > > I like the round table format with people around us if we can make that > work. > > Thanks! > Jay > > > From mnaser at vexxhost.com Tue Feb 25 15:20:25 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Tue, 25 Feb 2020 16:20:25 +0100 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> Message-ID: On Tue, Feb 25, 2020 at 11:27 AM Thierry Carrez wrote: > > Hi all, > > When our current project governance was established in 2012, we defined > two separate bodies. 
The Technical Committee represented developers / > code contributors to the open source project(s), while the User > Committee represented the operators running the resulting software, as > well as users of the APIs. > > That setup served us well in those early days. The focus on the upstream > side was strongly around *development* of code, we did not have that > many users, and even less users directly involved in upstream > development. A separate User Committee resulted in the formation of an > engaged community of users, and ensured that our community in general > (and our events in particular) took the needs of operators into account. > > Fast-forward to 2020: the upstream focus is more on maintenance. We now > have a lot of users, and thanks to the efforts of the UC they are > increasingly directly involved in the open source project development, > with several operators leading upstream project teams directly. There is > now limited UC-specific activity, and as such it is struggling to get > volunteers to step up for it: nobody nominated themselves for the last > round of election. > > Keeping two separate bodies to represent them maintains the illusion > that devs and ops are different breeds, and sometimes discourages > operators from running for the TC and more directly influence the shape > of the software. If anything, we need more ops representation at the TC, > and seeing people like mnaser being elected at (and chairing) the TC was > a great experience. I discussed the situation with the current UC > members, and we decided it's time to consider having a single > "community" and a single body to represent it. > > That body would tackle the traditional TC tasks (open source project > governance and stewardship) but also the UC tasks (user survey, > ambassador program...). That body would be elected by contributors, in a > large sense that includes the AUC definition. I feel like that would > help remove the artificial barriers discouraging users to get more > directly involved with software development and maintenance. > > There are multiple ways to achieve that, in increasing order of > complexity and bylaws changes needed: > > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach > would be to merge the TC and UC without changing the bylaws at all. The > single body (called TC) would incorporate the AUC criteria by adding all > AUC members as extra-ATC. It would tackle all aspects of our community. > To respect the letter of the bylaws, the TC would formally designate 5 > of its members to be the 'UC' and those would select a 'UC chair'. But > all tasks would be handled together. > > 2- Minimal bylaws change > Same, but all mentions of the UC would be removed from the bylaws > (affecting sections 4.12, 4.14, and Appendix 10). This would require a > simple vote by the Board of Directors, and would avoid some confusion > and having to formally designate a subset of the TC to respect the > letter of the bylaws. > > 3- Maximal bylaws change > Create a new body ("steering committee" for example) replacing the TC > and the UC. This would require changing sections 4.1, 4.12, 4.13, 4.14, > 4.20, 7.1.b, 7.4, 9.1, 9.2.d, and Appendixes 4 and 10. Section 4.13, 7.1 > and 7.4 being heavily protected in the bylaws, they require special > votes by each class of Foundation members, probably a multi-year effort. > A lot of documentation would also need to be changed in case the TC is > renamed. 
> > I personally don't think the benefit of the change offsets the large > cost of option (3). Given the current vacancy in the UC makeup I would > recommend we pursue option 1 immediately (starting at the next TC > election in April), and propose minimal bylaws change to the board in > parallel (option 2). I personally agree with this option as that seems like the best possible option moving forwards. > Thoughts ? > > -- > Thierry Carrez > From gmann at ghanshyammann.com Tue Feb 25 15:21:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 Feb 2020 09:21:47 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> Message-ID: <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> ---- On Tue, 25 Feb 2020 04:23:56 -0600 Thierry Carrez wrote ---- > Hi all, > > When our current project governance was established in 2012, we defined > two separate bodies. The Technical Committee represented developers / > code contributors to the open source project(s), while the User > Committee represented the operators running the resulting software, as > well as users of the APIs. > > That setup served us well in those early days. The focus on the upstream > side was strongly around *development* of code, we did not have that > many users, and even less users directly involved in upstream > development. A separate User Committee resulted in the formation of an > engaged community of users, and ensured that our community in general > (and our events in particular) took the needs of operators into account. > > Fast-forward to 2020: the upstream focus is more on maintenance. We now > have a lot of users, and thanks to the efforts of the UC they are > increasingly directly involved in the open source project development, > with several operators leading upstream project teams directly. There is > now limited UC-specific activity, and as such it is struggling to get > volunteers to step up for it: nobody nominated themselves for the last > round of election. > > Keeping two separate bodies to represent them maintains the illusion > that devs and ops are different breeds, and sometimes discourages > operators from running for the TC and more directly influence the shape > of the software. If anything, we need more ops representation at the TC, > and seeing people like mnaser being elected at (and chairing) the TC was > a great experience. I discussed the situation with the current UC > members, and we decided it's time to consider having a single > "community" and a single body to represent it. > > That body would tackle the traditional TC tasks (open source project > governance and stewardship) but also the UC tasks (user survey, > ambassador program...). That body would be elected by contributors, in a > large sense that includes the AUC definition. I feel like that would > help remove the artificial barriers discouraging users to get more > directly involved with software development and maintenance. > > There are multiple ways to achieve that, in increasing order of > complexity and bylaws changes needed: > > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach > would be to merge the TC and UC without changing the bylaws at all. The > single body (called TC) would incorporate the AUC criteria by adding all > AUC members as extra-ATC. It would tackle all aspects of our community. 
> To respect the letter of the bylaws, the TC would formally designate 5 > of its members to be the 'UC' and those would select a 'UC chair'. But > all tasks would be handled together. Thanks a lot, ttx for starting this thread. option 1 looks more feasible way but few question/feedback: - How we will execute the "designate 5 members from TC and select UC chair"? If by volunteer call from TC? I think this can lead to the current situation where very few members are interested to serve as UC. I am afraid that we will get 5 volunteers and a chair. If by force? I do not think this is or should be an option :). But if then how we make sure the TC members are ok/good to handle the UC tasks on the non-technical side for example ambassador program. - Will there be any change in TC elections, votes weightage and nomination? by this change of a new subteam under TC? - I think along with the merging proposal, we should distribute the current tasks handled by UC among TC, Ops group, foundation etc. For example, the ambassador program, local user group interaction or any other managerial tasks should be excluded from TC scope. If we would like to merge all then I think we should rename TC to Steering Committee or something which is option 3. - What about merging UC into Ops team, they are more close to users/operators and active in terms of the meetup and providing feedback etc. -gmann > > 2- Minimal bylaws change > Same, but all mentions of the UC would be removed from the bylaws > (affecting sections 4.12, 4.14, and Appendix 10). This would require a > simple vote by the Board of Directors, and would avoid some confusion > and having to formally designate a subset of the TC to respect the > letter of the bylaws. > > 3- Maximal bylaws change > Create a new body ("steering committee" for example) replacing the TC > and the UC. This would require changing sections 4.1, 4.12, 4.13, 4.14, > 4.20, 7.1.b, 7.4, 9.1, 9.2.d, and Appendixes 4 and 10. Section 4.13, 7.1 > and 7.4 being heavily protected in the bylaws, they require special > votes by each class of Foundation members, probably a multi-year effort. > A lot of documentation would also need to be changed in case the TC is > renamed. > > I personally don't think the benefit of the change offsets the large > cost of option (3). Given the current vacancy in the UC makeup I would > recommend we pursue option 1 immediately (starting at the next TC > election in April), and propose minimal bylaws change to the board in > parallel (option 2). > > Thoughts ? > > -- > Thierry Carrez > > From amoralej at redhat.com Tue Feb 25 15:32:02 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 25 Feb 2020 16:32:02 +0100 Subject: [tripleo][RDO] Version of Ansible in RDO CentOS8 repository Message-ID: Hi all, During CentOS 8 dependencies preparation we've built ansible 2.9 in RDO dependencies repo which was released on Oct 2019, While testing TripleO with CentOS8 it has been discovered that the latest release of ceph-ansible does not support ansible 2.9 but only 2.8, so I'm opening discussion about the best way to move on in CentOS 8: - Make ceph-ansible 4.0 to work with ansible 2.8 *and* 2.9 so that the same releases can be used in CentOS7 with Stein and Train and CentOS8 Train and Ussuri. - Maintain separated ceph-ansible releases and builds for centos7/ansible 2.8 and centos8/ansible 2.9 able to deploy Nautilus. - Move ansible back to 2.8 in CentOS 8 Ussuri repository. 
I wonder if TripleO or other projects using ansible from RDO repositories has any requirement or need to move to ansible 2.9 in Ussuri cycle or can stay in 2.8 until next release, any thoughts? Best regards, Alfredo -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Tue Feb 25 16:00:14 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 25 Feb 2020 10:00:14 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> Message-ID: Option 1 is definitely the fastest though I do wish we could have a more unified name for the group. It's not just a UC issue of not having enough people to run, the last TC election I believe did not have an actual vote after nominations. Either we have a unified group that oversees the direction of the community or keep them separate as they are now. OPS Meetup reports to the UC in a sense and is not a totally different group. Amy (spotz) On Tue, Feb 25, 2020 at 9:23 AM Ghanshyam Mann wrote: > ---- On Tue, 25 Feb 2020 04:23:56 -0600 Thierry Carrez < > thierry at openstack.org> wrote ---- > > Hi all, > > > > When our current project governance was established in 2012, we defined > > two separate bodies. The Technical Committee represented developers / > > code contributors to the open source project(s), while the User > > Committee represented the operators running the resulting software, as > > well as users of the APIs. > > > > That setup served us well in those early days. The focus on the > upstream > > side was strongly around *development* of code, we did not have that > > many users, and even less users directly involved in upstream > > development. A separate User Committee resulted in the formation of an > > engaged community of users, and ensured that our community in general > > (and our events in particular) took the needs of operators into account. > > > > Fast-forward to 2020: the upstream focus is more on maintenance. We now > > have a lot of users, and thanks to the efforts of the UC they are > > increasingly directly involved in the open source project development, > > with several operators leading upstream project teams directly. There > is > > now limited UC-specific activity, and as such it is struggling to get > > volunteers to step up for it: nobody nominated themselves for the last > > round of election. > > > > Keeping two separate bodies to represent them maintains the illusion > > that devs and ops are different breeds, and sometimes discourages > > operators from running for the TC and more directly influence the shape > > of the software. If anything, we need more ops representation at the > TC, > > and seeing people like mnaser being elected at (and chairing) the TC > was > > a great experience. I discussed the situation with the current UC > > members, and we decided it's time to consider having a single > > "community" and a single body to represent it. > > > > That body would tackle the traditional TC tasks (open source project > > governance and stewardship) but also the UC tasks (user survey, > > ambassador program...). That body would be elected by contributors, in > a > > large sense that includes the AUC definition. 
I feel like that would > > help remove the artificial barriers discouraging users to get more > > directly involved with software development and maintenance. > > > > There are multiple ways to achieve that, in increasing order of > > complexity and bylaws changes needed: > > > > 1- No bylaws change > > As bylaws changes take a lot of time and energy, the simplest approach > > would be to merge the TC and UC without changing the bylaws at all. The > > single body (called TC) would incorporate the AUC criteria by adding > all > > AUC members as extra-ATC. It would tackle all aspects of our community. > > To respect the letter of the bylaws, the TC would formally designate 5 > > of its members to be the 'UC' and those would select a 'UC chair'. But > > all tasks would be handled together. > > Thanks a lot, ttx for starting this thread. > > option 1 looks more feasible way but few question/feedback: > - How we will execute the "designate 5 members from TC and select UC > chair"? > > If by volunteer call from TC? > I think this can lead to the current situation where very few members are > interested to > serve as UC. I am afraid that we will get 5 volunteers and a chair. > > If by force? > I do not think this is or should be an option :). But if then how we make > sure the TC > members are ok/good to handle the UC tasks on the non-technical side for > example > ambassador program. > > - Will there be any change in TC elections, votes weightage and > nomination? by this change > of a new subteam under TC? > > - I think along with the merging proposal, we should distribute the > current tasks handled by UC > among TC, Ops group, foundation etc. For example, the ambassador program, > local user group > interaction or any other managerial tasks should be excluded from TC > scope. > If we would like to merge all then I think we should rename TC to Steering > Committee or > something which is option 3. > > - What about merging UC into Ops team, they are more close to > users/operators and active > in terms of the meetup and providing feedback etc. > > -gmann > > > > > 2- Minimal bylaws change > > Same, but all mentions of the UC would be removed from the bylaws > > (affecting sections 4.12, 4.14, and Appendix 10). This would require a > > simple vote by the Board of Directors, and would avoid some confusion > > and having to formally designate a subset of the TC to respect the > > letter of the bylaws. > > > > 3- Maximal bylaws change > > Create a new body ("steering committee" for example) replacing the TC > > and the UC. This would require changing sections 4.1, 4.12, 4.13, 4.14, > > 4.20, 7.1.b, 7.4, 9.1, 9.2.d, and Appendixes 4 and 10. Section 4.13, > 7.1 > > and 7.4 being heavily protected in the bylaws, they require special > > votes by each class of Foundation members, probably a multi-year > effort. > > A lot of documentation would also need to be changed in case the TC is > > renamed. > > > > I personally don't think the benefit of the change offsets the large > > cost of option (3). Given the current vacancy in the UC makeup I would > > recommend we pursue option 1 immediately (starting at the next TC > > election in April), and propose minimal bylaws change to the board in > > parallel (option 2). > > > > Thoughts ? > > > > -- > > Thierry Carrez > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Tim.Bell at cern.ch Tue Feb 25 16:09:06 2020 From: Tim.Bell at cern.ch (Tim Bell) Date: Tue, 25 Feb 2020 16:09:06 +0000 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> Message-ID: <81992DB5-CD01-4865-BAD6-6A8010A51ED5@cern.ch> Are there some stats on the active electorate sizes ? I have the impression that the voting AUCs were a smaller number of people than the contributors so a single election may result in less total representation compared to today’s UC & TC. Tim On 25 Feb 2020, at 17:00, Amy Marrich > wrote: Option 1 is definitely the fastest though I do wish we could have a more unified name for the group. It's not just a UC issue of not having enough people to run, the last TC election I believe did not have an actual vote after nominations. Either we have a unified group that oversees the direction of the community or keep them separate as they are now. OPS Meetup reports to the UC in a sense and is not a totally different group. Amy (spotz) On Tue, Feb 25, 2020 at 9:23 AM Ghanshyam Mann > wrote: ---- On Tue, 25 Feb 2020 04:23:56 -0600 Thierry Carrez > wrote ---- > Hi all, > > When our current project governance was established in 2012, we defined > two separate bodies. The Technical Committee represented developers / > code contributors to the open source project(s), while the User > Committee represented the operators running the resulting software, as > well as users of the APIs. > > That setup served us well in those early days. The focus on the upstream > side was strongly around *development* of code, we did not have that > many users, and even less users directly involved in upstream > development. A separate User Committee resulted in the formation of an > engaged community of users, and ensured that our community in general > (and our events in particular) took the needs of operators into account. > > Fast-forward to 2020: the upstream focus is more on maintenance. We now > have a lot of users, and thanks to the efforts of the UC they are > increasingly directly involved in the open source project development, > with several operators leading upstream project teams directly. There is > now limited UC-specific activity, and as such it is struggling to get > volunteers to step up for it: nobody nominated themselves for the last > round of election. > > Keeping two separate bodies to represent them maintains the illusion > that devs and ops are different breeds, and sometimes discourages > operators from running for the TC and more directly influence the shape > of the software. If anything, we need more ops representation at the TC, > and seeing people like mnaser being elected at (and chairing) the TC was > a great experience. I discussed the situation with the current UC > members, and we decided it's time to consider having a single > "community" and a single body to represent it. > > That body would tackle the traditional TC tasks (open source project > governance and stewardship) but also the UC tasks (user survey, > ambassador program...). That body would be elected by contributors, in a > large sense that includes the AUC definition. I feel like that would > help remove the artificial barriers discouraging users to get more > directly involved with software development and maintenance. 
> > There are multiple ways to achieve that, in increasing order of > complexity and bylaws changes needed: > > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach > would be to merge the TC and UC without changing the bylaws at all. The > single body (called TC) would incorporate the AUC criteria by adding all > AUC members as extra-ATC. It would tackle all aspects of our community. > To respect the letter of the bylaws, the TC would formally designate 5 > of its members to be the 'UC' and those would select a 'UC chair'. But > all tasks would be handled together. Thanks a lot, ttx for starting this thread. option 1 looks more feasible way but few question/feedback: - How we will execute the "designate 5 members from TC and select UC chair"? If by volunteer call from TC? I think this can lead to the current situation where very few members are interested to serve as UC. I am afraid that we will get 5 volunteers and a chair. If by force? I do not think this is or should be an option :). But if then how we make sure the TC members are ok/good to handle the UC tasks on the non-technical side for example ambassador program. - Will there be any change in TC elections, votes weightage and nomination? by this change of a new subteam under TC? - I think along with the merging proposal, we should distribute the current tasks handled by UC among TC, Ops group, foundation etc. For example, the ambassador program, local user group interaction or any other managerial tasks should be excluded from TC scope. If we would like to merge all then I think we should rename TC to Steering Committee or something which is option 3. - What about merging UC into Ops team, they are more close to users/operators and active in terms of the meetup and providing feedback etc. -gmann > > 2- Minimal bylaws change > Same, but all mentions of the UC would be removed from the bylaws > (affecting sections 4.12, 4.14, and Appendix 10). This would require a > simple vote by the Board of Directors, and would avoid some confusion > and having to formally designate a subset of the TC to respect the > letter of the bylaws. > > 3- Maximal bylaws change > Create a new body ("steering committee" for example) replacing the TC > and the UC. This would require changing sections 4.1, 4.12, 4.13, 4.14, > 4.20, 7.1.b, 7.4, 9.1, 9.2.d, and Appendixes 4 and 10. Section 4.13, 7.1 > and 7.4 being heavily protected in the bylaws, they require special > votes by each class of Foundation members, probably a multi-year effort. > A lot of documentation would also need to be changed in case the TC is > renamed. > > I personally don't think the benefit of the change offsets the large > cost of option (3). Given the current vacancy in the UC makeup I would > recommend we pursue option 1 immediately (starting at the next TC > election in April), and propose minimal bylaws change to the board in > parallel (option 2). > > Thoughts ? > > -- > Thierry Carrez > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Tue Feb 25 16:38:24 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 25 Feb 2020 16:38:24 +0000 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <20200225163823.hvfdogognhtzoxim@yuggoth.org> On 2020-02-25 08:58:56 +0100 (+0100), Jean-Philippe Evrard wrote: > I am filling the survey for the room booking during the OpenDev > summit in Vancouver. Do you already know if you will attend, this > way we have a first guesstimate of the attendance? [...] I'm planning to be at event, spread thin as always, but would like the opportunity to sit in on some TC discussions (depending on what other schedule conflicts I have). I don't currently occupy a seat on the TC, of course, but you did clarify you wanted to know who was interested regardless of committee membership. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amoralej at redhat.com Tue Feb 25 17:05:49 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 25 Feb 2020 18:05:49 +0100 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Mon, Feb 24, 2020 at 8:08 PM Wesley Hayutin wrote: > > > On Mon, Feb 24, 2020 at 8:55 AM Mark Goddard wrote: > >> On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso >> wrote: >> > >> > >> > >> > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: >> >> >> >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: >> >> > >> >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek >> >> > wrote: >> >> > > >> >> > > I know it was for masakari. >> >> > > Gaëtan had to grab crmsh from opensuse: >> >> > > >> http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ >> >> > > >> >> > > -yoctozepto >> >> > >> >> > Thanks Wes for getting this discussion going. I've been looking at >> >> > CentOS 8 today and trying to assess where we are. I created an >> >> > Etherpad to track status: >> >> > https://etherpad.openstack.org/p/kolla-centos8 >> >> >> > >> > uwsgi and etcd are now available in rdo dependencies repo. Let me know >> if you find some issue with it. >> >> I've been working on the backport of kolla CentOS 8 patches to the >> stable/train branch. It looks like these packages which were added to >> master are not present in Train. >> >> > I'll help check in on that for you Mark. > Thank you!! > > I've just synced all deps in centos8 master to train, could you check again? > > >> > >> >> >> >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error >> >> code when installing packages. It often happens on the rabbitmq and >> >> grafana images. There is a prompt about importing GPG keys prior to >> >> the error. >> >> >> >> Example: >> https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log >> >> >> >> Related bug report? https://github.com/containers/libpod/issues/4431 >> >> >> >> Anyone familiar with it? >> >> >> > >> > Didn't know about this issue. 
>> > >> > BTW, there is rabbitmq-server in RDO dependencies repo if you are >> interested in using it from there instead of rabbit repo. >> > >> >> > >> >> > > >> >> > > pon., 27 sty 2020 o 10:13 Marcin Juszkiewicz >> >> > > napisał(a): >> >> > > > >> >> > > > W dniu 27.01.2020 o 09:48, Alfredo Moralejo Alonso pisze: >> >> > > > > How is crmsh used in these images?, ha packages included in >> >> > > > > HighAvailability repo in CentOS includes pcs and some crm_* >> commands in pcs >> >> > > > > and pacemaker-cli packages. IMO, tt'd be good to switch to >> those commands >> >> > > > > to manage the cluster. >> >> > > > >> >> > > > No idea. Gaëtan Trellu may know - he created those images. >> >> > > > >> >> > > >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 25 17:20:55 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 18:20:55 +0100 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> Message-ID: <6e3e97f7-ea2e-01df-777b-3105feee4862@openstack.org> Ghanshyam Mann wrote: > [...] > option 1 looks more feasible way but few question/feedback: > - How we will execute the "designate 5 members from TC and select UC chair"? > > If by volunteer call from TC? > I think this can lead to the current situation where very few members are interested to > serve as UC. I am afraid that we will get 5 volunteers and a chair. > > If by force? > I do not think this is or should be an option :). But if then how we make sure the TC > members are ok/good to handle the UC tasks on the non-technical side for example > ambassador program. The idea here is to satisfy the letter of the bylaws, not to build a subteam. The bylaws say the UC is a 5-member group, elected following a method they decide on, by an electorate that they define. So one way to not change the bylaws (while having a single election and a single group) is to say that the UC electorate is the elected TC+UC members, have them select 5 people, and address all questions with the whole group. I suspect we'd choose the people with the most operations experience, but it actually would not matter. -- Thierry Carrez (ttx) From thierry at openstack.org Tue Feb 25 17:23:22 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 18:23:22 +0100 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> Message-ID: <4b09d8a2-d34f-8886-75c9-869a0872d8cb@openstack.org> Amy Marrich wrote: > Option 1 is definitely the fastest though I do wish we could have a more > unified name for the group. I agree that would have been optimal, to create some rupture and clearly drive the point that Devs+Ops are welcome to the new body. Unfortunately, there is no way to do that without touching protected sections of the bylaws, at which point the change is just too costly to entertain, compared to the gain... 
-- Thierry Carrez (ttx) From johnsomor at gmail.com Tue Feb 25 17:33:24 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 25 Feb 2020 09:33:24 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: <98a8ffb8-e2ba-33eb-eae9-ad30db843c5c@openstack.org> References: <98a8ffb8-e2ba-33eb-eae9-ad30db843c5c@openstack.org> Message-ID: So, maybe a hybrid. For example, have a standard "Contributing.rst", with fixed template/content, that can link off to these other guides? (I feel like this might be the original idea and I just missed the concept, lol) Maybe we need two templates. one for "contributing.rst" in the root (boilerplate-ish) and one template for inside the documentation tree ("how-to-contribute.rst ???). This seems similar to what we have today, but with a more formal template for the "how-to-contribute" landing page. To some degree there is a strange overlap here with the existing "contributor" section of the documentation that has the gory coding details, etc. I still lean towards having the "slightly more verbose" version in the documentation tree as then we can use the sphinx magic for glossary linking, internal links, etc. Michael On Tue, Feb 25, 2020 at 2:56 AM Thierry Carrez wrote: > > Kendall Nelson wrote: > > There is some debate about where the content of the actual docs should > > live. This grew out of the discussion about how to setup the includes so > > that the correct info shows up where we want it. 'Correct' in that last > > sentence being where the debate is. There are two main schools of thought: > > > > 1. All the content of the docs should live in the top level > > CONTRIBUTING.rst and the sphinx glue should live in > > doc/source/contributor/contributing.rst. A patch has already been merged > > for this approach[1]. There is also a patch to update the goal to > > match[2]. This approach keeps all the info in one place so that if > > things change in the future, its easier to keep things straight. All the > > content is also more easily discoverable when looking at a repo in > > GitHub (or similar) or checking out the code because it is at the top > > most level of the repo and not hidden in the docs sub directory. > > > > 2. The new patch[3] says that the content should live in > > /doc/source/contributor/contributing.rst and a skeleton with only the > > most important version should live in the top level CONTRIBUTING.rst. > > This approach argues that people don't want to read a wall of text when > > viewing the code on GitHub (or similar) or checking it out and looking > > at the top level CONTRIBUTING.rst and as such only the important details > > should be kept in that file. These important details being that we don't > > accept patches in github and where to report bugs (both of which are > > included in the larger format of the content). > > > > So what do people think? Which approach do they prefer? > > I personally prefer a single page approach (school 1). That said, I > think we need to be very careful to keep this page to a minimum, to > avoid the "wall of text" effect. > > In particular, I think: > > - "Communication" / "Contacting the Core team" could be collapsed into a > single section > > - "Task tracking" / "Reporting a bug" could be collapsed into a single > section > > - "Project team lead duties" sounds a bit overkill for a first-contact > doc, can probably be documented elsewhere. 
> > - Sections could be reordered in order of likely involvement: How to > talk with the team, So you want to report a bug, So you want to propose > a change, So you want to propose a new feature. > > > I am anxious to get this settled ASAP so that projects have time to > > complete the goal in time. > > Agree it would be good to come up with a consensus on this ASAP. Maybe > the TC can settle it at next week meeting. > > -- > Thierry Carrez (ttx) > From thierry at openstack.org Tue Feb 25 17:36:06 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 25 Feb 2020 18:36:06 +0100 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <81992DB5-CD01-4865-BAD6-6A8010A51ED5@cern.ch> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> <81992DB5-CD01-4865-BAD6-6A8010A51ED5@cern.ch> Message-ID: <8083f873-cc98-8897-e031-0c36a5e549cd@openstack.org> Tim Bell wrote: > Are there some stats on the active electorate sizes ? > > I have the impression that the voting AUCs were a smaller number of > people than the contributors so a single election may result in less > total representation compared to today’s UC & TC. I don't have the exact numbers around, but yes, there are many more ATCs (voters in the current TC election) than AUCs (voters in the current UC election). So it's a valid concern that people with more of an operator background would have trouble getting elected if the electorate contains more people with more of a developer background. But I think it's a fallacy. There is plenty of overlap. Most engaged operators are already ATCs, and more and more contributors have operational experience. Data points show that when people with more operational background have nominated themselves for the TC in recent elections, they got elected. 10 of the 13 TC seats are currently occupied by people with some decent amount of operational experience. So yes, we clearly need to communicate that people with operational/usage experience are wanted in that new body. We need to communicate that there is a change, and a single body will now steward all aspects of the open source projects and not just the upstream aspects. But I'm not obsessing on the fact that a single election would somehow suppress operator voices... The problem recently has more been to find enough people willing to step up and spend extra time stewarding OpenStack, than to actually get elected. -- Thierry Carrez (ttx) From kennelson11 at gmail.com Tue Feb 25 17:47:21 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 25 Feb 2020 09:47:21 -0800 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <20200225163823.hvfdogognhtzoxim@yuggoth.org> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> <20200225163823.hvfdogognhtzoxim@yuggoth.org> Message-ID: I'll be there, but also spread thin, like Jeremy. -Kendall (diablo_rojo) On Tue, Feb 25, 2020 at 8:39 AM Jeremy Stanley wrote: > On 2020-02-25 08:58:56 +0100 (+0100), Jean-Philippe Evrard wrote: > > I am filling the survey for the room booking during the OpenDev > > summit in Vancouver. Do you already know if you will attend, this > > way we have a first guesstimate of the attendance? > [...] > > I'm planning to be at event, spread thin as always, but would like > the opportunity to sit in on some TC discussions (depending on what > other schedule conflicts I have). 
I don't currently occupy a seat on > the TC, of course, but you did clarify you wanted to know who was > interested regardless of committee membership. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Feb 25 18:05:41 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 25 Feb 2020 12:05:41 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <2527DE00-CF9A-46DA-8728-E1FDE49E5F9F@openstack.org> References: <8083f873-cc98-8897-e031-0c36a5e549cd@openstack.org> <2527DE00-CF9A-46DA-8728-E1FDE49E5F9F@openstack.org> Message-ID: <5E5561F5.5050104@openstack.org> > > Thanks a lot, ttx for starting this thread. > > option 1 looks more feasible way but few question/feedback: > - How we will execute the "designate 5 members from TC and select UC chair"? > > If by volunteer call from TC? > I think this can lead to the current situation where very few members are interested to > serve as UC. I am afraid that we will get 5 volunteers and a chair. > > If by force? > I do not think this is or should be an option :). But if then how we make sure the TC > members are ok/good to handle the UC tasks on the non-technical side for example > ambassador program. We have dedicated Foundation Staff that offer support for the Ambassador Program as well as User Survey. I don't have concerns about this falling between the cracks during the transition and it's something we can work with the new UC seats under the TC on. Cheers, Jimmy > > - Will there be any change in TC elections, votes weightage and nomination? by this change > of a new subteam under TC? > > - I think along with the merging proposal, we should distribute the current tasks handled by UC > among TC, Ops group, foundation etc. For example, the ambassador program, local user group > interaction or any other managerial tasks should be excluded from TC scope. > If we would like to merge all then I think we should rename TC to Steering Committee or > something which is option 3. > > - What about merging UC into Ops team, they are more close to users/operators and active > in terms of the meetup and providing feedback etc. > > -gmann > > >> Begin forwarded message: >> >> *From: *Thierry Carrez > > >> *Subject: **Re: [all][tc][uc] Uniting the TC and the UC* >> *Date: *February 25, 2020 at 11:36:06 AM CST >> *To: *openstack-discuss at lists.openstack.org >> >> >> Tim Bell wrote: >>> Are there some stats on the active electorate sizes ? >>> I have the impression that the voting AUCs were a smaller number of >>> people than the contributors so a single election may result in less >>> total representation compared to today’s UC & TC. >> >> I don't have the exact numbers around, but yes, there are many more >> ATCs (voters in the current TC election) than AUCs (voters in the >> current UC election). >> >> So it's a valid concern that people with more of an operator >> background would have trouble getting elected if the electorate >> contains more people with more of a developer background. >> >> But I think it's a fallacy. There is plenty of overlap. Most engaged >> operators are already ATCs, and more and more contributors have >> operational experience. Data points show that when people with more >> operational background have nominated themselves for the TC in recent >> elections, they got elected. 10 of the 13 TC seats are currently >> occupied by people with some decent amount of operational experience. 
>> >> So yes, we clearly need to communicate that people with >> operational/usage experience are wanted in that new body. We need to >> communicate that there is a change, and a single body will now >> steward all aspects of the open source projects and not just the >> upstream aspects. But I'm not obsessing on the fact that a single >> election would somehow suppress operator voices... >> >> The problem recently has more been to find enough people willing to >> step up and spend extra time stewarding OpenStack, than to actually >> get elected. >> >> -- >> Thierry Carrez (ttx) >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 25 18:07:48 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 25 Feb 2020 18:07:48 +0000 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <1707cf09739.11dcf95b7121099.3302299691239796640@ghanshyammann.com> Message-ID: <20200225180748.pfyslysjjp3reglh@yuggoth.org> On 2020-02-25 10:00:14 -0600 (-0600), Amy Marrich wrote: [...] > It's not just a UC issue of not having enough people to run, the > last TC election I believe did not have an actual vote after > nominations. [...] That's a fair point, but there's still a wide gulf between getting just enough (very excellent, by the way) candidates by the scheduled deadline to fill all open seats, and getting none whatsoever. I also knew other folks who were willing to nominate themselves if it got down to the wire and we didn't have enough, but they were confident in the ability of those who had already volunteered so did not feel it necessary. The Technical Committee also took this as a sign that it should work to gradually reduce its size over the course of the next year (thankfully the number of TC seats isn't baked into the OSF bylaws, unlike for the UC). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jimmy at openstack.org Tue Feb 25 18:14:52 2020 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 25 Feb 2020 12:14:52 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <5E5561F5.5050104@openstack.org> References: <8083f873-cc98-8897-e031-0c36a5e549cd@openstack.org> <2527DE00-CF9A-46DA-8728-E1FDE49E5F9F@openstack.org> <5E5561F5.5050104@openstack.org> Message-ID: <5E55641C.40301@openstack.org> Also worth noting 10 of the current TC members are AUCs, including the recent former TC Chair. > Jimmy McArthur > February 25, 2020 at 12:05 PM >> >> Thanks a lot, ttx for starting this thread. >> >> option 1 looks more feasible way but few question/feedback: >> - How we will execute the "designate 5 members from TC and select UC chair"? >> >> If by volunteer call from TC? >> I think this can lead to the current situation where very few members are interested to >> serve as UC. I am afraid that we will get 5 volunteers and a chair. >> >> If by force? >> I do not think this is or should be an option :). But if then how we make sure the TC >> members are ok/good to handle the UC tasks on the non-technical side for example >> ambassador program. > We have dedicated Foundation Staff that offer support for the > Ambassador Program as well as User Survey. I don't have concerns > about this falling between the cracks during the transition and it's > something we can work with the new UC seats under the TC on. 
> > Cheers, > Jimmy >> - Will there be any change in TC elections, votes weightage and nomination? by this change >> of a new subteam under TC? >> >> - I think along with the merging proposal, we should distribute the current tasks handled by UC >> among TC, Ops group, foundation etc. For example, the ambassador program, local user group >> interaction or any other managerial tasks should be excluded from TC scope. >> If we would like to merge all then I think we should rename TC to Steering Committee or >> something which is option 3. >> >> - What about merging UC into Ops team, they are more close to users/operators and active >> in terms of the meetup and providing feedback etc. >> >> -gmann >> >> >>> Begin forwarded message: >>> >>> *From: *Thierry Carrez >> > >>> *Subject: **Re: [all][tc][uc] Uniting the TC and the UC* >>> *Date: *February 25, 2020 at 11:36:06 AM CST >>> *To: *openstack-discuss at lists.openstack.org >>> >>> >>> Tim Bell wrote: >>>> Are there some stats on the active electorate sizes ? >>>> I have the impression that the voting AUCs were a smaller number of >>>> people than the contributors so a single election may result in >>>> less total representation compared to today’s UC & TC. >>> >>> I don't have the exact numbers around, but yes, there are many more >>> ATCs (voters in the current TC election) than AUCs (voters in the >>> current UC election). >>> >>> So it's a valid concern that people with more of an operator >>> background would have trouble getting elected if the electorate >>> contains more people with more of a developer background. >>> >>> But I think it's a fallacy. There is plenty of overlap. Most engaged >>> operators are already ATCs, and more and more contributors have >>> operational experience. Data points show that when people with more >>> operational background have nominated themselves for the TC in >>> recent elections, they got elected. 10 of the 13 TC seats are >>> currently occupied by people with some decent amount of operational >>> experience. >>> >>> So yes, we clearly need to communicate that people with >>> operational/usage experience are wanted in that new body. We need to >>> communicate that there is a change, and a single body will now >>> steward all aspects of the open source projects and not just the >>> upstream aspects. But I'm not obsessing on the fact that a single >>> election would somehow suppress operator voices... >>> >>> The problem recently has more been to find enough people willing to >>> step up and spend extra time stewarding OpenStack, than to actually >>> get elected. >>> >>> -- >>> Thierry Carrez (ttx) >>> >> > > Allison Price > February 25, 2020 at 11:57 AM > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at jimrollenhagen.com Tue Feb 25 18:16:04 2020 From: jim at jimrollenhagen.com (Jim Rollenhagen) Date: Tue, 25 Feb 2020 13:16:04 -0500 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: I sadly will not make it to Vancouver :( // jim On Tue, Feb 25, 2020 at 3:01 AM Jean-Philippe Evrard < jean-philippe at evrard.me> wrote: > Hello, > > I am filling the survey for the room booking during the OpenDev summit > in Vancouver. Do you already know if you will attend, this way we have > a first guesstimate of the attendance? > > Can you answer it here by Thursday, please? 
> > FYI: I thought of booking for min 1.5 days, maximum time 2 days, as > last time we finished the marathon in two days. > > For the room format, I like the roundtable (was wondering to add extra > seats next to the roundtable for people joining), however it's only > around 10 persons per table, which won't be enough for us all if > everyone attends. > > Regards, > JP > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Feb 25 18:27:13 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 25 Feb 2020 19:27:13 +0100 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> Message-ID: I'd just like to add that I see those 3 points as steps. Indeed doing 1 looks best atm and can be followed by 2 and then 3. They sound incremental. -yoctozepto From Arkady.Kanevsky at dell.com Tue Feb 25 18:33:16 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Tue, 25 Feb 2020 18:33:16 +0000 Subject: [tc] Presence at OpenDev Vancouver in June In-Reply-To: References: <8997c9c8ecc503faafb4f33405072d0ea7a3d296.camel@evrard.me> Message-ID: <76828a01d6104138804d6b622dbee646@AUSX13MPS308.AMER.DELL.COM> Not sure what are the benefits on combining UC and TC. I think the real issue is that as community shrunk from several years back, we need to adjust what UC , TC and overall community can deliver. Adjusting the scope. But UC and TC are engaged in governing different activities. And while two Cs are part of larger openstack community as the whole, and many folks work in both U and T communities of openstack they do have different goals and represent different aspects of overall openstack community. I think reducing C sizes, and deliverables makes sense, why they should be combined still eludes me. Do agree that hardwiring UC size into bylaws was a mistake. Thanks, Arkady From: Jim Rollenhagen Sent: Tuesday, February 25, 2020 12:16 PM To: openstack-discuss at lists.openstack.org Subject: Re: [tc] Presence at OpenDev Vancouver in June [EXTERNAL EMAIL] I sadly will not make it to Vancouver :( // jim On Tue, Feb 25, 2020 at 3:01 AM Jean-Philippe Evrard > wrote: Hello, I am filling the survey for the room booking during the OpenDev summit in Vancouver. Do you already know if you will attend, this way we have a first guesstimate of the attendance? Can you answer it here by Thursday, please? FYI: I thought of booking for min 1.5 days, maximum time 2 days, as last time we finished the marathon in two days. For the room format, I like the roundtable (was wondering to add extra seats next to the roundtable for people joining), however it's only around 10 persons per table, which won't be enough for us all if everyone attends. Regards, JP -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Tue Feb 25 21:04:47 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 25 Feb 2020 13:04:47 -0800 Subject: [keystone][middleware] why can not middle-ware support redis? In-Reply-To: <289D1C6884E043999D843F93BED4332E@guoyongPC> References: <289D1C6884E043999D843F93BED4332E@guoyongPC> Message-ID: <90960d13-dec6-46d1-bbf8-2fa390599ffb@www.fastmail.com> Hi there, I'm sorry no one responded to this yet. See below. On Thu, Feb 13, 2020, at 00:20, guoyongxhzhf at 163.com wrote: > > In my environment, there is a redis. 
I want the keystone client in nova > to cache token and use redis as a cache server. > But after reading keystone middle-ware code, now keystone middle-ware > just support swift and memcached server. > Can I just modify keystone middleware code to use dogpile.cache.redis > directly? I'm not totally familiar with the history behind it, but it seems like keystonemiddleware makes assumptions about the particular caching backend in use in order to take advantage of things like encryption or cache pools. If you want to contribute support for using an arbitrary caching backend through oslo.cache (which would make dogpile.cache.redis available as a backend option) we could probably look at accepting it, but I don't think it would be easy to implement. On the other hand, it's pretty easy to set up a memcached server. I would probably recommend just going forward with memcached. Is there a reason you need to use redis instead? Colleen From colleen at gazlene.net Tue Feb 25 22:02:24 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 25 Feb 2020 14:02:24 -0800 Subject: =?UTF-8?Q?Re:_[keystone]_[keystonemiddleware]_[neutron]_[keystone=5Fauth?= =?UTF-8?Q?token]_auth=5Furl_not_available_via_oslo=5Fconfig?= In-Reply-To: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> References: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> Message-ID: On Mon, Feb 24, 2020, at 08:17, Ben Nemec wrote: > > > On 2/24/20 9:08 AM, Jeremy Freudberg wrote: > > not a keystone person, but I can offer you this: > > https://opendev.org/openstack/sahara/src/commit/75df1e93872a3a6b761d0eb89ca87de0b2b3620f/sahara/utils/openstack/keystone.py#L32 > > > > It's a nasty workaround for getting config values from > > keystone_authtoken which are supposed to private for > > keystonemiddleware only. It's probably a bad idea. > > Yeah, config opts should generally not be referenced by other projects. > The oslo.config deprecation mechanism doesn't handle the case where an > opt gets renamed but is still being referred to in the code by its old > name. I realize that's not what happened here, but in general it's a > good reason not to do this. If a given config value needs to be exposed > to consumers of a library it should be explicitly provided via an API. > > I realize that's not what happened here, but it demonstrates the > fragility of referring to another project's config opts directly. It's > also possible that a project could change when its opts get registered, > which may be what's happening here. If this plugin code is running > before keystoneauth has registered its opts that might explain why it's > not being found. That may also explain why it's working in some other > environments - if the timing of when the opts are registered versus when > the plugin code gets called is different it might cause that kind of > varying behavior with otherwise identical code/configuration. > > I have vague memories of this having come up before, but I can't > remember exactly what the recommendation was. Hopefully someone from > Keystone can chime in. Services that need to connect to keystone with their own session outside of keystonemiddleware can use the keystoneauth loading module to register config options rather than reusing the keystone_authtoken section. For example, this is what nova does: https://opendev.org/openstack/nova/src/branch/master/nova/conf/glance.py This doesn't help unbreak OP's broken Queens site. 
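For anyone curious what that pattern looks like in code, here is a minimal sketch using keystoneauth1's loading helpers -- the [my_service] option group below is purely illustrative, the real group name is whatever the service chooses:

    from keystoneauth1 import loading as ks_loading
    from oslo_config import cfg

    CONF = cfg.CONF
    GROUP = 'my_service'  # illustrative service-specific group, not [keystone_authtoken]

    # Registers auth_section/auth_type under GROUP; the selected auth
    # plugin's own options (auth_url, username, password, ...) get
    # registered when the plugin is loaded below.
    ks_loading.register_auth_conf_options(CONF, GROUP)
    # Registers session options (cafile, timeout, ...) under GROUP.
    ks_loading.register_session_conf_options(CONF, GROUP)

    def get_session():
        auth = ks_loading.load_auth_from_conf_options(CONF, GROUP)
        return ks_loading.load_session_from_conf_options(CONF, GROUP, auth=auth)

That way the service owns its own config section and never has to peek at options that keystonemiddleware registers for itself.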
Perhaps the neutron or calico contributors can help diagnose what's different about that site in order to figure out what's missing that's causing keystonemiddleware not to load the keystoneauth config opts. Colleen > > > > > > > On Fri, Feb 21, 2020 at 9:43 AM Justin Cattle wrote: > >> > >> Just to add, it also doesn't seem to be registering the password option from keystone_authtoken either. > >> > >> So, makes me think the auth plugin isn't loading , or not the right one at least ?? > >> > >> > >> Cheers, > >> Just > >> > >> > >> On Thu, 20 Feb 2020 at 20:55, Justin Cattle wrote: > >>> > >>> Hi, > >>> > >>> > >>> I'm reaching out for help with a strange issue I've found. Running openstack queens, on ubuntu xenial. > >>> > >>> We have a bunch of different sites with the same set-up, recently upgraded from mitaka to queens. However, on this one site, after the upgrade, we cannot start neutron-server. The reason is, that the ml2 plugin throws an error because it can't find auth_url from the keystone_authtoken section of neutron.conf. However, it is there in the file. > >>> > >>> The ml2 plugin is calico, it fails with this error: > >>> > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in function %s: TypeError: expected string or buffer > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most recent call last): > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, in wrapped > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return fn(*args, **kwargs) > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", line 347, in _post_fork_init > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico auth_url=re.sub(r'/v3/?$', '', auth_url) + > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico File "/usr/lib/python2.7/re.py", line 155, in sub > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico return _compile(pattern, flags).sub(repl, string, count) > >>> 2020-02-20 20:14:22.495 2964911 ERROR networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: expected string or buffer > >>> > >>> > >>> When you look at the code, this is because neither auth_url or is found in cfg.CONF.keystone_authtoken. The config defintely exists. > >>> > >>> I have copied the neutron.conf config from a working site, same error. I have copied the entire /etc/neutron directory from a working site, same error. > >>> > >>> I have check with strace, and /etc/neutron/neutron.conf is the only neutron.conf being parsed. 
> >>> > >>> Here is the keystone_authtoken part of the config: > >>> > >>> [keystone_authtoken] > >>> auth_uri=https://api-srv-cloud.host.domain:5000 > >>> region_name=openstack > >>> memcached_servers=1.2.3.4:11211 > >>> auth_type=password > >>> auth_url=https://api-srv-cloud.host.domain:5000 > >>> username=neutron > >>> password=xxxxxxxxxxxxxxxxxxxxxxxxx > >>> user_domain_name=Default > >>> project_name=services > >>> project_domain_name=Default > >>> > >>> > >>> I'm struggling to understand how the auth_url config is really registered in via oslo_config. > >>> I found an excellent exchagne on the ML here: > >>> > >>> https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware > >>> > >>> This seems to indicate auth_url is only registered if a particular auth plugin requires it. But I can't find the plugin code that does it, so I'm not sure how/where to debug it properly. > >>> > >>> If anyone has any ideas, I would really appreciate some input or pointers. > >>> > >>> Thanks! > >>> > >>> > >>> Cheers, > >>> Just > >> > >> > >> Notice: > >> This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. > >> > >> If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. > >> > >> References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. > > > > From zbitter at redhat.com Wed Feb 26 01:39:36 2020 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 25 Feb 2020 20:39:36 -0500 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> Message-ID: <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> On 25/02/20 5:23 am, Thierry Carrez wrote: > 1- No bylaws change > As bylaws changes take a lot of time and energy, the simplest approach > would be to merge the TC and UC without changing the bylaws at all. The > single body (called TC) would incorporate the AUC criteria by adding all > AUC members as extra-ATC. It would tackle all aspects of our community. > To respect the letter of the bylaws, the TC would formally designate 5 > of its members to be the 'UC' and those would select a 'UC chair'. But > all tasks would be handled together. I think there's an even simpler approach. Any member of the foundation can nominate themselves as a UC member. In the past we've just kept extending the deadline until someone steps up. So the *simplest* thing to do is: * Run the election process as usual. If any users want to step up they are welcome to, and on past performance very likely to be acclaimed without an election. * If the deadline passes, the TC commits to finding enough warm bodies who are foundation members to fill any remaining seats, including from among its own members if necessary. * The additional volunteers are acclaimed, and all members of the UC elect a chair. 
* We accept that to the extent that the UC has duties to perform and ambassadors to help, this will largely remain un-done. This eliminates the issue of actual users needing to submit themselves to an election of all ATCs + AUCs in order to get a seat (which tbh seems questionable under the bylaws, since not all ATCs are AUCs). - ZB From amy at demarco.com Wed Feb 26 01:54:34 2020 From: amy at demarco.com (Amy Marrich) Date: Tue, 25 Feb 2020 19:54:34 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> References: <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> Message-ID: <31AABDF8-256B-4860-9A8D-E0BD1D0562C7@demarco.com> That’s not a bad plan Zane, but are you still thinking separate or the unified group? Amy > On Feb 25, 2020, at 7:42 PM, Zane Bitter wrote: > > On 25/02/20 5:23 am, Thierry Carrez wrote: >> 1- No bylaws change >> As bylaws changes take a lot of time and energy, the simplest approach would be to merge the TC and UC without changing the bylaws at all. The single body (called TC) would incorporate the AUC criteria by adding all AUC members as extra-ATC. It would tackle all aspects of our community. To respect the letter of the bylaws, the TC would formally designate 5 of its members to be the 'UC' and those would select a 'UC chair'. But all tasks would be handled together. > > I think there's an even simpler approach. Any member of the foundation can nominate themselves as a UC member. In the past we've just kept extending the deadline until someone steps up. So the *simplest* thing to do is: > > * Run the election process as usual. If any users want to step up they are welcome to, and on past performance very likely to be acclaimed without an election. > * If the deadline passes, the TC commits to finding enough warm bodies who are foundation members to fill any remaining seats, including from among its own members if necessary. > * The additional volunteers are acclaimed, and all members of the UC elect a chair. > * We accept that to the extent that the UC has duties to perform and ambassadors to help, this will largely remain un-done. > > This eliminates the issue of actual users needing to submit themselves to an election of all ATCs + AUCs in order to get a seat (which tbh seems questionable under the bylaws, since not all ATCs are AUCs). > > - ZB > > From zbitter at redhat.com Wed Feb 26 01:56:16 2020 From: zbitter at redhat.com (Zane Bitter) Date: Tue, 25 Feb 2020 20:56:16 -0500 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: References: <98a8ffb8-e2ba-33eb-eae9-ad30db843c5c@openstack.org> Message-ID: <7e3a1be9-7191-d18c-034c-d576095c6728@redhat.com> On 25/02/20 12:33 pm, Michael Johnson wrote: > So, maybe a hybrid. > > For example, have a standard "Contributing.rst", with fixed > template/content, that can link off to these other guides? (I feel > like this might be the original idea and I just missed the concept, > lol) > > Maybe we need two templates. one for "contributing.rst" in the root > (boilerplate-ish) and one template for inside the documentation tree > ("how-to-contribute.rst ???). This seems similar to what we have > today, but with a more formal template for the "how-to-contribute" > landing page. 
That sounds very similar to what's actually proposed for option 2: https://review.opendev.org/708672 > To some degree there is a strange overlap here with the existing > "contributor" section of the documentation that has the gory coding > details, etc. > > I still lean towards having the "slightly more verbose" version in the > documentation tree as then we can use the sphinx magic for glossary > linking, internal links, etc. Agreed. It should also be noted that some of the people reading CONTRIBUTING.rst will just be doing so in their local checkout/tarball using cat/less. Making them read full on rst formatting, complete with comments and stuff is probably not the best experience. The current document (e.g. https://opendev.org/openstack/nova/raw/branch/master/CONTRIBUTING.rst) is actually very good at that and AFAIK is actually pretty consistent across all of our repos. I also just don't think that the full on contribution template is the right information for the audience that will read CONTRIBUTING.rst. If someone is looking at that file, it's because they either have a local checkout or a tarball from somewhere and they want to know how to make their first contribution, or because they're already trying to open a PR on GitHub for their first contribution. They need to know where the canonical repo and the bug tracker is and how to submit a bug or patch; they don't need to know what the PTL's duties are at that moment. cheers, Zane. > > Michael > > On Tue, Feb 25, 2020 at 2:56 AM Thierry Carrez wrote: >> >> Kendall Nelson wrote: >>> There is some debate about where the content of the actual docs should >>> live. This grew out of the discussion about how to setup the includes so >>> that the correct info shows up where we want it. 'Correct' in that last >>> sentence being where the debate is. There are two main schools of thought: >>> >>> 1. All the content of the docs should live in the top level >>> CONTRIBUTING.rst and the sphinx glue should live in >>> doc/source/contributor/contributing.rst. A patch has already been merged >>> for this approach[1]. There is also a patch to update the goal to >>> match[2]. This approach keeps all the info in one place so that if >>> things change in the future, its easier to keep things straight. All the >>> content is also more easily discoverable when looking at a repo in >>> GitHub (or similar) or checking out the code because it is at the top >>> most level of the repo and not hidden in the docs sub directory. >>> >>> 2. The new patch[3] says that the content should live in >>> /doc/source/contributor/contributing.rst and a skeleton with only the >>> most important version should live in the top level CONTRIBUTING.rst. >>> This approach argues that people don't want to read a wall of text when >>> viewing the code on GitHub (or similar) or checking it out and looking >>> at the top level CONTRIBUTING.rst and as such only the important details >>> should be kept in that file. These important details being that we don't >>> accept patches in github and where to report bugs (both of which are >>> included in the larger format of the content). >>> >>> So what do people think? Which approach do they prefer? >> >> I personally prefer a single page approach (school 1). That said, I >> think we need to be very careful to keep this page to a minimum, to >> avoid the "wall of text" effect. 
>> >> In particular, I think: >> >> - "Communication" / "Contacting the Core team" could be collapsed into a >> single section >> >> - "Task tracking" / "Reporting a bug" could be collapsed into a single >> section >> >> - "Project team lead duties" sounds a bit overkill for a first-contact >> doc, can probably be documented elsewhere. >> >> - Sections could be reordered in order of likely involvement: How to >> talk with the team, So you want to report a bug, So you want to propose >> a change, So you want to propose a new feature. >> >>> I am anxious to get this settled ASAP so that projects have time to >>> complete the goal in time. >> >> Agree it would be good to come up with a consensus on this ASAP. Maybe >> the TC can settle it at next week meeting. >> >> -- >> Thierry Carrez (ttx) >> > From gmann at ghanshyammann.com Wed Feb 26 02:29:59 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 25 Feb 2020 20:29:59 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> Message-ID: <1707f545be6.111aaf7f2137329.7862267052998714924@ghanshyammann.com> ---- On Tue, 25 Feb 2020 19:39:36 -0600 Zane Bitter wrote ---- > On 25/02/20 5:23 am, Thierry Carrez wrote: > > 1- No bylaws change > > As bylaws changes take a lot of time and energy, the simplest approach > > would be to merge the TC and UC without changing the bylaws at all. The > > single body (called TC) would incorporate the AUC criteria by adding all > > AUC members as extra-ATC. It would tackle all aspects of our community. > > To respect the letter of the bylaws, the TC would formally designate 5 > > of its members to be the 'UC' and those would select a 'UC chair'. But > > all tasks would be handled together. > > I think there's an even simpler approach. Any member of the foundation > can nominate themselves as a UC member. In the past we've just kept > extending the deadline until someone steps up. So the *simplest* thing > to do is: > > * Run the election process as usual. If any users want to step up they > are welcome to, and on past performance very likely to be acclaimed > without an election. > * If the deadline passes, the TC commits to finding enough warm bodies > who are foundation members to fill any remaining seats, including from > among its own members if necessary. TC or BoDs ? If TC then we still need Bylaw change to remove the number of UC and mention, TC owns to make UC team even with one or more members. Because the number of 5 UC in Bylaw is something that has to be fixed. But again the main question is how TC will find the members as nobody even from TC are not running for UC. -gmann > * The additional volunteers are acclaimed, and all members of the UC > elect a chair. > * We accept that to the extent that the UC has duties to perform and > ambassadors to help, this will largely remain un-done. > > This eliminates the issue of actual users needing to submit themselves > to an election of all ATCs + AUCs in order to get a seat (which tbh > seems questionable under the bylaws, since not all ATCs are AUCs). 
> > - ZB > > > From fungi at yuggoth.org Wed Feb 26 02:43:45 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Feb 2020 02:43:45 +0000 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> Message-ID: <20200226024345.fkqrpue5cbri6w3q@yuggoth.org> On 2020-02-25 20:39:36 -0500 (-0500), Zane Bitter wrote: [...] > This eliminates the issue of actual users needing to submit > themselves to an election of all ATCs + AUCs in order to get a > seat (which tbh seems questionable under the bylaws, since not all > ATCs are AUCs). The OSF bylaws don't define what an AUC is at all, they leave it up to the UC to define their electorate. They do, however, (via the less-protected TC Member Policy appendix) define ATCs but, also declare that the TC can add anyone they like on top of that base definition. This means that the UC can declare their electorate is TC members, or the combined set of current ATCs and AUCs, or whatever, and that the TC can declare all current AUCs are also ATCs if they so choose. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rui.zang at yandex.com Wed Feb 26 02:57:47 2020 From: rui.zang at yandex.com (rui zang) Date: Wed, 26 Feb 2020 10:57:47 +0800 Subject: Cinder / Nova - Select cinder backend based on nova availability zone? In-Reply-To: References: <4384601582597455@iva8-89610aea0561.qloud-c.yandex.net> Message-ID: <15319131582685867@iva2-5f9649d2845f.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Wed Feb 26 10:23:05 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 26 Feb 2020 11:23:05 +0100 Subject: [tc] February meeting Agenda Message-ID: Hello everyone, Our next meeting is happening next week Thursday (the 5th), and the agenda is, as usual, on the wiki! Here is a primer of the agenda for this month: Follow up on past action items Report on tc/uc merge (ttx): See http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012806.html Ensuring the analysis of the survey was updated (jungleboyj) Report on telemetry after Rico's convo with the PTL (ricolin) Report on stable branch policy work: mnaser to push an update to distill what happened on the ML in terms of stable branch policy (mnaser) Report on the community goals for U and V, py2 drop (gmann) Report on release naming (mugsie) Dropping side projects - draft guidelines are available for review https://review.opendev.org/#/c/707421/ See you there! Regards, Jean-Philippe Evrard (evrardjp) From stig.openstack at telfer.org Wed Feb 26 10:46:18 2020 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 26 Feb 2020 10:46:18 +0000 Subject: [scientific-sig] IRC Meeting today - 1100 UTC Message-ID: Hi All - We have a Scientific SIG meeting on IRC at 1100 UTC in channel #openstack-meeting. Everyone is welcome. Today’s agenda is here: https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_February_26th_2020 Today we’d like to cover SIG participation in a number of upcoming events. The best SIG events are when we get together face-to-face. Cheers, Stig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From balazs.gibizer at est.tech Wed Feb 26 12:06:27 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Wed, 26 Feb 2020 12:06:27 +0000 Subject: [nova][neutron] nova scheduling support for routed networks Message-ID: <1582718783.12170.6@est.tech> Hi, There are multiple efforts to add scheduler support to the neutron routed network feature. Matt's effort to have it as a scheduler pre-filter [1] Adam's (rm_work) effort to have this as a a new scheduler hint and a new scheduler filter [2] We had a good discussion on #openstack-nova [3] about the possible way forward with Sean and Adam. Here is a summary and my proposed way forward. Currently booting a server in a routed network setup works iff the neutron port is pre-created with ip_allocation=deferred before the server create. Nova then schedule the server without consider anything about the possible multiple network segments of the port. Neutron will do the segment assignment when the port is bond to the compute host. The problem is that once a port bound to a segment it cannot be bound to another segment as that would change the IP of the port. So if a server is migrated nova should select a destination host where the _same segment_ is available. Therefore we need a way to influence the scheduling based on the assigned segment id of the port. Both [1] and [2] does that based on the fact that neutron creates nova host aggregates for each network segment and those aggregates are mirrored to placement as placement aggregates. For me the pre-filter approach[1] seems better as * it avoids introducing a new scheduler hint * it is not affected by the allocation candidate limit configuration that effect scheduler filters * it allows us to iterate towards an approach where neutron provides the required aggregates of each port via the ports' resource_request attribute The downside of the pre-filter approach is that right now it needs to query the segments for each network / port the server has before the each scheduling. I think we limit such performance impact by making this pre-filter turned off by default via configuration. So a deployment without routed networks does not need to pay this cost. Later this cost can be nullified by extending neutron to specify the needed aggregates in the port's resource_request. I'm now planning to take [1] over from Matt and finish it up by adding functional test and documentation to it. I would also like to raise attention to the tempest patch that adds basic CI coverage for the currently working server create scenario [4]. Cheers, gibi [1] https://review.opendev.org/#/c/656885 [2] https://review.opendev.org/#/c/709280 [3] http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-02-26.log.html#t2020-02-26T10:18:18 [4] https://review.opendev.org/#/c/665155 From jean-philippe at evrard.me Wed Feb 26 12:51:58 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 26 Feb 2020 13:51:58 +0100 Subject: [all][crazy][ideas] Call for "ideas" inside the openstack/ideas repo Message-ID: <4e19fcae558a8cd4b2b3fb463369afd219e96816.camel@evrard.me> Hello everyone, I would like to inform you that the "ideas" repos is now created, and waiting for all your fresh (or non fresh) ideas to change OpenStack! You will be able to find rendered ideas in https://governance.openstack.org/ideas/ . Currently it's lacking a little bit of content... 
I would love to see this changed soon, to be honest :) Here is a reminder of what the ideas repo is: - It's a place where anyone can propose an idea for changing OpenStack as a whole; - It's a git repository, which allows the versioning (and authoring) of ideas; - It's a way to refer to mailing lists, to know full context, but without having to browse through the history of the multiple mailing lists to find what you are looking for; - It's a repo where everyone can propose ideas to, and there will be no judgement on the ideas for merging them (the only thing asked is that what's written in an idea reflects what has been said on the mailing lists). Thank you for your attention! Your TC chair, Jean-Philippe Evrard (evrardjp) From thierry at openstack.org Wed Feb 26 13:53:53 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 26 Feb 2020 14:53:53 +0100 Subject: OpenStack with vCenter In-Reply-To: References: Message-ID: Kevin Puusild wrote: > I'm currently trying to setup DevStack with vCenter (Not ESXi), i found > a perfect documentation for this task: > https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide > > But the problem here is that when i start *stack.sh *with localrc file > shown in documentation the installing process fails. > > Is this documentation out-dated? Is there some up to date documentation? This wiki page was last modified in 2015, so it is very likely that the information in it is out of date. Current devstack documentation lives at: https://docs.openstack.org/devstack/latest/index.html In case you still encounter issues, please post a more detailed error message: people with OpenStack on vCenter experience might be able to help you. -- Thierry Carrez (ttx) From mark at stackhpc.com Wed Feb 26 14:07:28 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 26 Feb 2020 14:07:28 +0000 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Tue, 25 Feb 2020 at 17:06, Alfredo Moralejo Alonso wrote: > > > > On Mon, Feb 24, 2020 at 8:08 PM Wesley Hayutin wrote: >> >> >> >> On Mon, Feb 24, 2020 at 8:55 AM Mark Goddard wrote: >>> >>> On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso >>> wrote: >>> > >>> > >>> > >>> > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: >>> >> >>> >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: >>> >> > >>> >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek >>> >> > wrote: >>> >> > > >>> >> > > I know it was for masakari. >>> >> > > Gaëtan had to grab crmsh from opensuse: >>> >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ >>> >> > > >>> >> > > -yoctozepto >>> >> > >>> >> > Thanks Wes for getting this discussion going. I've been looking at >>> >> > CentOS 8 today and trying to assess where we are. I created an >>> >> > Etherpad to track status: >>> >> > https://etherpad.openstack.org/p/kolla-centos8 >>> >> >>> > >>> > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. >>> >>> I've been working on the backport of kolla CentOS 8 patches to the >>> stable/train branch. It looks like these packages which were added to >>> master are not present in Train. >>> >> >> I'll help check in on that for you Mark. >> Thank you!! >> > > > I've just synced all deps in centos8 master to train, could you check again? Looking good now, thanks. 
> >> >> >>> >>> > >>> >> >>> >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error >>> >> code when installing packages. It often happens on the rabbitmq and >>> >> grafana images. There is a prompt about importing GPG keys prior to >>> >> the error. >>> >> >>> >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log >>> >> >>> >> Related bug report? https://github.com/containers/libpod/issues/4431 >>> >> >>> >> Anyone familiar with it? >>> >> >>> > >>> > Didn't know about this issue. >>> > >>> > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. >>> > >>> >> > >>> >> > > >>> >> > > pon., 27 sty 2020 o 10:13 Marcin Juszkiewicz >>> >> > > napisał(a): >>> >> > > > >>> >> > > > W dniu 27.01.2020 o 09:48, Alfredo Moralejo Alonso pisze: >>> >> > > > > How is crmsh used in these images?, ha packages included in >>> >> > > > > HighAvailability repo in CentOS includes pcs and some crm_* commands in pcs >>> >> > > > > and pacemaker-cli packages. IMO, tt'd be good to switch to those commands >>> >> > > > > to manage the cluster. >>> >> > > > >>> >> > > > No idea. Gaëtan Trellu may know - he created those images. >>> >> > > > >>> >> > > >>> >> >>> From grant at civo.com Wed Feb 26 14:46:56 2020 From: grant at civo.com (Grant Morley) Date: Wed, 26 Feb 2020 14:46:56 +0000 Subject: Restarting neutron-l3-agent service Message-ID: Hi all, We are currently seeing an issue where neutron isn't setting up ports correctly on routers. We think we have narrowed it down to the "neutron-l3-agent" on the neutron nodes that is causing the issue. We are wanting to restart the service but was wondering if doing so it would cause any issues with currently running instances? We have 2 neutron nodes in HA mode ( one active and one standby ).  Currently the issue is only affecting newly built instances, so we don't want to restart the service in hours if it is going to cause issues with instances that are currently unaffected. We are running on OpenStack Queens and have built the platform using OSA. Many thanks for any help. Regards, From amoralej at redhat.com Wed Feb 26 14:57:58 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Wed, 26 Feb 2020 15:57:58 +0100 Subject: [tripleo][RDO] Version of Ansible in RDO CentOS8 repository In-Reply-To: References: Message-ID: On Tue, Feb 25, 2020 at 4:32 PM Alfredo Moralejo Alonso wrote: > Hi all, > > During CentOS 8 dependencies preparation we've built ansible 2.9 in RDO > dependencies repo which was released on Oct 2019, > > While testing TripleO with CentOS8 it has been discovered that the latest > release of ceph-ansible does not support ansible 2.9 but only 2.8, so I'm > opening discussion about the best way to move on in CentOS 8: > > - Make ceph-ansible 4.0 to work with ansible 2.8 *and* 2.9 so that the > same releases can be used in CentOS7 with Stein and Train and CentOS8 Train > and Ussuri. > - Maintain separated ceph-ansible releases and builds for centos7/ansible > 2.8 and centos8/ansible 2.9 able to deploy Nautilus. > - Move ansible back to 2.8 in CentOS 8 Ussuri repository. > > I wonder if TripleO or other projects using ansible from RDO repositories > has any requirement or need to move to ansible 2.9 in Ussuri cycle or can > stay in 2.8 until next release, any thoughts? 
> > I've proposed moving to 2.8.8 in CentOS8 ussuri https://review.rdoproject.org/r/#/c/25379/ Feedback is appreciated. > Best regards, > > Alfredo > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sachinladdha at gmail.com Wed Feb 26 04:14:03 2020 From: sachinladdha at gmail.com (Sachin Laddha) Date: Wed, 26 Feb 2020 09:44:03 +0530 Subject: [TaskFlow] running multiple engines on a shared thread Message-ID: Hi, We are using taskflow to execute workflows. Each workflow is executed by a separate thread (using engine.run() method). This is limiting our capability to execute maximum number of workflows that can run in parallel. It is limited by the number of threads there in the thread pool. Most of the time, the workflow tasks are run by agents which could take some time to complete. Each engine is alive and runs on a dedicated thread. Is there any way to reuse or run multiple engines on one thread. The individual tasks of these engines can run in parallel. I came across iter_run method of the engine class. But not sure if that can be used for this purpose. Any help is highly appreciated. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mvoelker at vmware.com Wed Feb 26 14:31:58 2020 From: mvoelker at vmware.com (Mark Voelker) Date: Wed, 26 Feb 2020 14:31:58 +0000 Subject: [Tempest] OpenSack Powered * vs OpeStack Compatible In-Reply-To: <1707cdd49b2.ddd44e5d119585.1167564760858241265@ghanshyammann.com> References: <2130957943.152281.1582365846726.ref@mail.yahoo.com> <2130957943.152281.1582365846726@mail.yahoo.com> <775D37EA-DC9B-4B80-A002-51C89A3A9E62@vmware.com> <1706963754.79322.1582614973577@mail.yahoo.com> <1707cdd49b2.ddd44e5d119585.1167564760858241265@ghanshyammann.com> Message-ID: <934A54A9-2E2D-4086-94FA-198E4421031E@vmware.com> On Feb 25, 2020, at 10:00 AM, Ghanshyam Mann > wrote: ---- On Tue, 25 Feb 2020 01:16:13 -0600 prakash RAMCHANDRAN > wrote ---- Mark, Glad you pointed to right code. Reviewed and stand corrected. My mis-underdtanding was I considered 20919.01 as first release and 2019.02 as second release of the year. However based on your comments and reference I understand that it is year.month of release , thus 2019.11 includes 'usuri' and previous 3 releases as listed pointed by you. "os_trademark_approval": { "target_approval": "2019.11", "replaces": "2019.06", "releases": ["rocky", "stein", "train", "ussuri"], "status": "approved" } }, That clears that I should have asked for 2019.11. Few more questions on Tempedt tests. I read some where that we have about 1500 Tempest tests overall. Is that correct? Yeah, it might be little more or less but around 1500 in-tree tests in Tempest. The interop code lines have gone down from 3836 lines to 3232 in train to usuri. Looks contrary to growth, any comments? As Thierry pointed out, a chunk of this was due to the removal of the volumes-v2 API from the required list. A few other tests have been removed over time for various other reasons (changes to or removal of tests in Tempest, etc). 
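If you want to see exactly which tests came and went, the history is easy to replay locally (illustrative commands; 2019.11.json is just one of the guideline files kept in the repo):

    git clone https://opendev.org/openstack/interop
    cd interop
    # commit summaries that touched the 2019.11 guideline
    git log --oneline -- 2019.11.json
    # full diffs, showing individual tests being added or removed
    git log -p -- 2019.11.json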
Since the test lists are kept in git, you can actually walk through the complete history yourself to see why some tests were added or removed if you’d like: https://opendev.org/openstack/interop/commits/branch/master It’s also important to note that just because we’ve added more tests over time to Tempest, more projects over time to OpenStack, or more API’s to existing projects, that doesn’t mean there will be a corresponding increase in the number of tests required for the trademark programs. The OpenStack Powered program is more a trailing indicator of adoption than an attempt to force commercial products to support any and all capabilities. Each API is considered for inclusion in the program against a set of criteria detailed here: https://opendev.org/openstack/interop/src/branch/master/doc/source/process/CoreCriteria.rst So, for example: if a project introduced a new API in Usuri, it’s highly unlikely that it would appear in the very next Guideline because it would fail to meet several criteria. It would be unlikely to be “widely deployed” since many deployments would still be using Stein or Train (also covered in the same Guideline). It might not yet be supported in many third-party SDK’s or tools, so it might not yet meet the “used by tools” criteria. It might be only supported by one or two plugins/drivers in it’s first release. Etc, etc, etc. Over time that API might gain adoption, meet sufficient criteria, and be added to the required list--or it might not. If you’re curious about the history of all this and the process, you might have a look at this slightly old but still mostly relevant deck: https://www.slideshare.net/markvoelker/interopwg-intro-vertical-programs-jan-2017 Question then is 60 compute and 40 storage lines I see in test cases, do we have stats for Tempest tests what's the distribution of 1500 tests across Platform, compute, storage etc. Where and how can I get that. information as documented report. Those should be counted from interop guidelines where you have mapping of capabilities with test cases. Few or most of the capabilities have one test to verifying it or a few more than one test. For example, "compute-flavors-list" capability is verified by two tests[2]. This way you can count and identify the exact number of tests per compute, storage etc. If you would like to know about the Tempest test categorization, you can find it from the directory structure. We have structured the tests service wise directory, for example, all compute tests under tempest/api/compute [2]. Based on above should we expect decrease or increase for say 2020.05 Vancouver release? Anecdotally, the Guidelines haven’t been changing very much in recent times as the “core” capabilities that have met with a lot of adoption seem fairly well established (though there have been a few new ones added and a few removed). I wouldn’t expect dramatic changes. How does one certify a kubernetes cluster running openstak modules, one module per docker container in a kubrrnetes cluster using tempest say like in Airship Control Plane on k8s worker node, which is a OpenStack over kubernetes cluster. Is this possible and if so what test we need to modify to test and certify a Containerized OpenStack in Airship as OpenStack Platform? I should be verified same way via Tempest. Tempest does not reply on how OpenStack is deployed it interacts via the public interface (where interop needs only user-facing API excluding admin API) of each service which should be accessible on k8s cluster also. 
Tests are not required to modify in this case. NOTE: we can always extend (writing new tests for current 6 services tests) Tempest for tests required for interop capabilities either that are new or improved in the verification of existing ones. I remember about the discussion on covering the API microversion. We keep adding new tests to cover the new microverison where we need integration tests but we can add API tests also if interop requires those. Right on. In fact, the capabilities in the interop programs are intended to be usable independent of how a cloud is deployed (including whether it’s containerized or on bare metal, whether it uses KVM/OVS/Ceph or vSphere/NSX/vSAN, whether it’s highly available or a single-box appliance, etc). Can we even certify if for say 2019.11? The OpenStack Foundation’s interoperability programs allow you to use either of two most recently approved Guidelines for your testing (which as of right now are 2019.11 and 2019.06). Once the Board of Directors approves the next guideline, 2019.06 will rotate out. I should note though: the Foundation’s interoperability programs are primarily targeted at commercial products (distributions, public clouds, appliances, managed services, etc). There’s no real reason an open source product couldn’t use these same tests of course! At Your Service, Mark T. Voelker This should open up exciting possibilities if practical to extend OpeStack powered platform to Airship. Like to hear anyone who has insight to educate us on that. ThanksPrakash Sent from Yahoo Mail on Android On Mon, Feb 24, 2020 at 6:09 AM, Mark Voelker> wrote: Hi Prakash, I am curious to find out if any Distribution or Products based on Openstack Train or Usuri are seeking the latest certifications based on 2019.02. Hm, there actually isn’t a 2019.02 guideline--were you perhaps referring to 2019.06 or 2019.11? 2019.06 does cover Train but not Usuri [1], 2019.11 covers both [2]. As an FYI, the OpenStack Marketplace does list which guideline a particular product was most recently tested against (refer to https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.openstack.org%2Fmarketplace%2Fdistros%2F&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561657063&sdata=K9zbzRvFzs3cxVnCocij2Sjy43MNyejDKuRN7ivQkMk%3D&reserved=0 for example, and look for the green “TESTED” checkmark and accompanying guideline version), though this obviously doesn’t tell you what testing might be currently in flight. [1] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopendev.org%2Fopenstack%2Finterop%2Fsrc%2Fbranch%2Fmaster%2F2019.06.json%23L75%5B2&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561657063&sdata=Vcy8iL%2F8L93m0%2B3PIwikD5vN%2BaREJigrncz7cjd3Zyc%3D&reserved=0] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopendev.org%2Fopenstack%2Finterop%2Fsrc%2Fbranch%2Fmaster%2F2019.11.json%23L75&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561667051&sdata=pTrl8xMypZJAPFu2vzQwFFOWnJu51k44oxn2PrrQI2Y%3D&reserved=0 At Your Service, Mark T. Voelker On Feb 22, 2020, at 5:04 AM, prakash RAMCHANDRAN > wrote: Hi all, I am curious to find out if any Distribution or Products based on Openstack Train or Usuri are seeking the latest certifications based on 2019.02. 
Similarly does any Hardware Driver of Software application seeking OpenStack compatibility Logo? Finally does anyone think that Open Infra Distro like Airship or StarlingX should promote Open Infra Airship or Open Infra StarlingX powered as a new way to promote eco system surrounding them similar to OpenStack compatible drivers and software. Will then Argo, customize,Metal3.io or Ironic be qualified as Open Infra Airship compatible? If so how tempest can help in testing the above comments? Refer to this market place below as how Distos and Products leverage OpenStack logos and branding programs. https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fwww.openstack.org%2Fmarketplace%2Fdistros%2F&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561667051&sdata=rABzR1tWXUHinSG4jpe46ayuELsini4fbKcJhsxrN9g%3D&reserved=0 Discussions and feedback are welcome. A healthy debate as how k8s modules used in Open Infra can be certified will be a good start. ThanksPrakash Sent from Yahoo Mail on Android [1] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopendev.org%2Fopenstack%2Finterop%2Fsrc%2Fcommit%2F8f2e82b7db54cfff9315e5647bd2ba3dd6aacaad%2F2019.11.json%23L260-L281&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561667051&sdata=wncg32ql1Gq%2B%2F7fZPUDdPzSv2Fbay2zCAXPT%2F9%2BQIo4%3D&reserved=0 [2] https://nam04.safelinks.protection.outlook.com/?url=https%3A%2F%2Fopendev.org%2Fopenstack%2Ftempest%2Fsrc%2Fbranch%2Fmaster%2Ftempest%2Fapi%2Fcompute&data=02%7C01%7Cmvoelker%40vmware.com%7C5567a591df5a45322f6808d7ba037f79%7Cb39138ca3cee4b4aa4d6cd83d9dd62f0%7C0%7C0%7C637182396561667051&sdata=mgDV%2Bng445lIrj3CjMwdTeifyGMdXyogQhnVdXk4FS0%3D&reserved=0 -gmann -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 26 15:45:40 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 26 Feb 2020 09:45:40 -0600 Subject: Retiring x/devstack-plugin-bdd Message-ID: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> Hello all, This is to announce that I plan on going through the process to retire the x/devstack-plugin-bdd repo. https://opendev.org/x/devstack-plugin-bdd/ This project was to allow using the Cinder block device driver in Devstack. Support for the block device driver was removed from Cinder many releases ago and there is no expectation that this devstack plugin could be used in any meaningful way. Sean From frode.nordahl at canonical.com Wed Feb 26 16:12:31 2020 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Wed, 26 Feb 2020 17:12:31 +0100 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: +1 from me On Thu, Feb 13, 2020 at 6:09 PM James Page wrote: > > Hi Team > > I'd like to proposed Peter Matulis for membership of the charms-core team. > > Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. > > I think he would make a valuable addition to the charms-core review team! 
> > Cheers > > James > -- Frode Nordahl From zbitter at redhat.com Wed Feb 26 16:21:45 2020 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 26 Feb 2020 11:21:45 -0500 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <1707f545be6.111aaf7f2137329.7862267052998714924@ghanshyammann.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> <1707f545be6.111aaf7f2137329.7862267052998714924@ghanshyammann.com> Message-ID: <290f018c-79e3-cf16-edba-8d95285f408f@redhat.com> On 25/02/20 9:29 pm, Ghanshyam Mann wrote: > ---- On Tue, 25 Feb 2020 19:39:36 -0600 Zane Bitter wrote ---- > > On 25/02/20 5:23 am, Thierry Carrez wrote: > > > 1- No bylaws change > > > As bylaws changes take a lot of time and energy, the simplest approach > > > would be to merge the TC and UC without changing the bylaws at all. The > > > single body (called TC) would incorporate the AUC criteria by adding all > > > AUC members as extra-ATC. It would tackle all aspects of our community. > > > To respect the letter of the bylaws, the TC would formally designate 5 > > > of its members to be the 'UC' and those would select a 'UC chair'. But > > > all tasks would be handled together. > > > > I think there's an even simpler approach. Any member of the foundation > > can nominate themselves as a UC member. In the past we've just kept > > extending the deadline until someone steps up. So the *simplest* thing > > to do is: > > > > * Run the election process as usual. If any users want to step up they > > are welcome to, and on past performance very likely to be acclaimed > > without an election. > > * If the deadline passes, the TC commits to finding enough warm bodies > > who are foundation members to fill any remaining seats, including from > > among its own members if necessary. > > TC or BoDs ? The TC, but there's no formal role here. TC members just commit amongst themselves to doing whatever it takes to make sure the requisite number of volunteers appear, up to and including volunteering themselves if necessary. > If TC then we still need Bylaw change to remove the number > of UC and mention, TC owns to make UC team even with one or more members. > Because the number of 5 UC in Bylaw is something that has to be fixed. No, what I'm saying is that the TC will ensure the UC always has exactly 5 members so the bylaws can remain unchanged. > But again the main question is how TC will find the members as nobody even from TC > are not running for UC. Easy, like this: Dear , Congratulations, you just volunteered to be a UC member! But don't worry, because there are no duties other than showing up for roll call twice a year. love, the TC > > -gmann > > > * The additional volunteers are acclaimed, and all members of the UC > > elect a chair. > > * We accept that to the extent that the UC has duties to perform and > > ambassadors to help, this will largely remain un-done. > > > > This eliminates the issue of actual users needing to submit themselves > > to an election of all ATCs + AUCs in order to get a seat (which tbh > > seems questionable under the bylaws, since not all ATCs are AUCs). > > > > - ZB > > > > > > > From james.page at canonical.com Wed Feb 26 16:28:27 2020 From: james.page at canonical.com (James Page) Date: Wed, 26 Feb 2020 16:28:27 +0000 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: Welcome to the team Peter! 
On Wed, Feb 26, 2020 at 4:12 PM Frode Nordahl wrote: > +1 from me > > On Thu, Feb 13, 2020 at 6:09 PM James Page > wrote: > > > > Hi Team > > > > I'd like to proposed Peter Matulis for membership of the charms-core > team. > > > > Although he's not be focussed on developing the codebase since he > started contributing to the OpenStack Charms he's made a number of > significant contributions to our documentation as well as regularly > providing feedback on updates to README's and the charm deployment guide. > > > > I think he would make a valuable addition to the charms-core review team! > > > > Cheers > > > > James > > > > > -- > Frode Nordahl > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Feb 26 16:45:08 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 26 Feb 2020 17:45:08 +0100 Subject: [largescale-sig] Meeting summary and next actions Message-ID: <6f9ee169-b17f-25a5-53e8-676247709a62@openstack.org> Hi everyone, The Large Scale SIG held a meeting today. You can catch up with the summary and logs of the meeting at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-02-26-09.00.html amorin reported on progress documenting better configuration defaults for large scale, as can be seen on the etherpad[1]. He will set up a wiki page under our SIG page[2] to continue solidifying that content, as well as proposing a patch to Nova doc to point to that page. [1] https://etherpad.openstack.org/p/large-scale-sig-documentation [2] https://wiki.openstack.org/wiki/Large_Scale_SIG The story we collected in the scaling stories etherpad[3] proves itself very valuable. We should encourage collecting small bits of information and anecdotes of scaling up, it does not have to be as thorough of a report as the one already collected. [3] https://etherpad.openstack.org/p/scaling-stories masahito reported on progress on the oslo.metrics spec[4]. He plans to post a new patchset of that spec very soon. [4] https://review.opendev.org/#/c/704733/ In other topics, oneswig suggested that we also tackle bare metal cluster scaling. This should be considered a part of the "scaling within one cluster" goal, just for specific types of clusters. It would be great to have a session around bare metal cluster scaling at the upcoming OpenDev event[5] (under the "Large scale usage of open infrastructure" track). belmiro and masahito volunteered to be part of the programming committee for that track. the idea would be to make sure it aligns with the goals of this SIG, and that we recruit new members there. For example, we could have a specific session on scaling-stories, where people can informally talk about scaling anecdotes, and then we'd take notes and follow up with them to submit something more detailed. [5] https://www.openstack.org/events/opendev-ptg-2020/ The next meeting will happen on March 11, at 9:00 UTC on #openstack-meeting. 
Cheers, -- Thierry Carrez (ttx) From gmann at ghanshyammann.com Wed Feb 26 17:03:22 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 Feb 2020 11:03:22 -0600 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> Message-ID: <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> ---- On Wed, 26 Feb 2020 09:45:40 -0600 Sean McGinnis wrote ---- > Hello all, > > This is to announce that I plan on going through the process to retire > the x/devstack-plugin-bdd repo. > > https://opendev.org/x/devstack-plugin-bdd/ It is not under openstack governance so retire process not really applicable to this. I think this can be left as it is under 'x' namespace or remove the code and update README with "it's no longer available" . But leaving as it is also does not harm as there are lot of repo under 'x' namespace. -gmann > > This project was to allow using the Cinder block device driver in > Devstack. Support for the block device driver was removed from Cinder > many releases ago and there is no expectation that this devstack plugin > could be used in any meaningful way. > > Sean > > > From fungi at yuggoth.org Wed Feb 26 17:11:14 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Feb 2020 17:11:14 +0000 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> Message-ID: <20200226171113.eydamrjt2cj3ucah@yuggoth.org> On 2020-02-26 11:03:22 -0600 (-0600), Ghanshyam Mann wrote: [...] > It is not under openstack governance so retire process not really > applicable to this. [...] The process described at https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project is still recommended, you can just ignore "Step 5: Remove Repository from the Governance Repository" in such cases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Wed Feb 26 17:13:41 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 Feb 2020 11:13:41 -0600 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <290f018c-79e3-cf16-edba-8d95285f408f@redhat.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> <1707f545be6.111aaf7f2137329.7862267052998714924@ghanshyammann.com> <290f018c-79e3-cf16-edba-8d95285f408f@redhat.com> Message-ID: <170827d65a0.e682b226176781.1443934150246120257@ghanshyammann.com> ---- On Wed, 26 Feb 2020 10:21:45 -0600 Zane Bitter wrote ---- > On 25/02/20 9:29 pm, Ghanshyam Mann wrote: > > ---- On Tue, 25 Feb 2020 19:39:36 -0600 Zane Bitter wrote ---- > > > On 25/02/20 5:23 am, Thierry Carrez wrote: > > > > 1- No bylaws change > > > > As bylaws changes take a lot of time and energy, the simplest approach > > > > would be to merge the TC and UC without changing the bylaws at all. The > > > > single body (called TC) would incorporate the AUC criteria by adding all > > > > AUC members as extra-ATC. It would tackle all aspects of our community. > > > > To respect the letter of the bylaws, the TC would formally designate 5 > > > > of its members to be the 'UC' and those would select a 'UC chair'. 
But > > > > all tasks would be handled together. > > > > > > I think there's an even simpler approach. Any member of the foundation > > > can nominate themselves as a UC member. In the past we've just kept > > > extending the deadline until someone steps up. So the *simplest* thing > > > to do is: > > > > > > * Run the election process as usual. If any users want to step up they > > > are welcome to, and on past performance very likely to be acclaimed > > > without an election. > > > * If the deadline passes, the TC commits to finding enough warm bodies > > > who are foundation members to fill any remaining seats, including from > > > among its own members if necessary. > > > > TC or BoDs ? > > The TC, but there's no formal role here. TC members just commit amongst > themselves to doing whatever it takes to make sure the requisite number > of volunteers appear, up to and including volunteering themselves if > necessary. > > > If TC then we still need Bylaw change to remove the number > > of UC and mention, TC owns to make UC team even with one or more members. > > Because the number of 5 UC in Bylaw is something that has to be fixed. > > No, what I'm saying is that the TC will ensure the UC always has exactly > 5 members so the bylaws can remain unchanged. > > > But again the main question is how TC will find the members as nobody even from TC > > are not running for UC. > > Easy, like this: > > Dear , > Congratulations, you just volunteered to be a UC member! But don't > worry, because there are no duties other than showing up for roll call > twice a year. :) 'no duties'. Then it makes sense to update the Bylaw to remove UC reference from it and have single governance as TC. Because keeping a team with no duties is negative impression and I will say either we have team working on its mission/duties as a separate team or merged into TC with relevant duties or if those duties are ok not to be done then it is clear that nobody depends on those duties or someone else is doing it indirectly so closing the team is the right approach. -gmann > > love, > the TC > > > > > -gmann > > > > > * The additional volunteers are acclaimed, and all members of the UC > > > elect a chair. > > > * We accept that to the extent that the UC has duties to perform and > > > ambassadors to help, this will largely remain un-done. > > > > > > This eliminates the issue of actual users needing to submit themselves > > > to an election of all ATCs + AUCs in order to get a seat (which tbh > > > seems questionable under the bylaws, since not all ATCs are AUCs). > > > > > > - ZB > > > > > > > > > > > > > > From gmann at ghanshyammann.com Wed Feb 26 17:23:53 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 Feb 2020 11:23:53 -0600 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <20200226171113.eydamrjt2cj3ucah@yuggoth.org> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> <20200226171113.eydamrjt2cj3ucah@yuggoth.org> Message-ID: <1708286bda9.d0b5a14a177167.787593462879049969@ghanshyammann.com> ---- On Wed, 26 Feb 2020 11:11:14 -0600 Jeremy Stanley wrote ---- > On 2020-02-26 11:03:22 -0600 (-0600), Ghanshyam Mann wrote: > [...] > > It is not under openstack governance so retire process not really > > applicable to this. > [...] 
> > The process described at > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > is still recommended, you can just ignore "Step 5: Remove Repository > from the Governance Repository" in such cases. Even step1 is not required as requirement repo does not own x/* repos. It is left with cleaning up the infra job setup. -gmann > -- > Jeremy Stanley > From aj at suse.com Wed Feb 26 17:29:39 2020 From: aj at suse.com (Andreas Jaeger) Date: Wed, 26 Feb 2020 18:29:39 +0100 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> Message-ID: <4df53be0-95d0-cc6f-37b4-5a04f3ac7121@suse.com> On 26/02/2020 18.03, Ghanshyam Mann wrote: > [...] > But leaving as it is also does not harm as there are lot of repo > under 'x' namespace. There are - but they are used elsewhere. When we do cross-repo changes, those dead repos hurt. So, I applaud to the retirement of this repo, Andreas -- Andreas Jaeger aj at suse.com Twitter: jaegerandi SUSE Software Solutions Germany GmbH, Maxfeldstr. 5, D 90409 Nürnberg (HRB 36809, AG Nürnberg) GF: Felix Imendörffer GPG fingerprint = EF18 1673 38C4 A372 86B1 E699 5294 24A3 FF91 2ACB From hberaud at redhat.com Wed Feb 26 17:31:15 2020 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 26 Feb 2020 18:31:15 +0100 Subject: [all][crazy][ideas] Call for "ideas" inside the openstack/ideas repo In-Reply-To: <4e19fcae558a8cd4b2b3fb463369afd219e96816.camel@evrard.me> References: <4e19fcae558a8cd4b2b3fb463369afd219e96816.camel@evrard.me> Message-ID: Interesting things, thanks for the sharing. Le mer. 26 févr. 2020 à 13:55, Jean-Philippe Evrard a écrit : > Hello everyone, > > I would like to inform you that the "ideas" repos is now created, and > waiting for all your fresh (or non fresh) ideas to change OpenStack! > > You will be able to find rendered ideas in > https://governance.openstack.org/ideas/ . Currently it's lacking a > little bit of content... I would love to see this changed soon, to be > honest :) > > Here is a reminder of what the ideas repo is: > - It's a place where anyone can propose an idea for changing OpenStack > as a whole; > - It's a git repository, which allows the versioning (and authoring) of > ideas; > - It's a way to refer to mailing lists, to know full context, but > without having to browse through the history of the multiple mailing > lists to find what you are looking for; > - It's a repo where everyone can propose ideas to, and there will be no > judgement on the ideas for merging them (the only thing asked is that > what's written in an idea reflects what has been said on the mailing > lists). > > Thank you for your attention! 
> > Your TC chair, Jean-Philippe Evrard (evrardjp) > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 26 17:33:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 26 Feb 2020 11:33:04 -0600 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> Message-ID: <8df6290b-c1bb-7e57-d20d-05a3a12c1505@gmx.com> On 2/26/20 11:03 AM, Ghanshyam Mann wrote: > ---- On Wed, 26 Feb 2020 09:45:40 -0600 Sean McGinnis wrote ---- > > Hello all, > > > > This is to announce that I plan on going through the process to retire > > the x/devstack-plugin-bdd repo. > > > > https://opendev.org/x/devstack-plugin-bdd/ > > It is not under openstack governance so retire process not really > applicable to this. > > I think this can be left as it is under 'x' namespace or remove the code > and update README with "it's no longer available" . > > But leaving as it is also does not harm as there are lot of repo > under 'x' namespace. > > -gmann It is not under governance, but the retire process still applies to clean up old repos. From gmann at ghanshyammann.com Wed Feb 26 17:47:10 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 26 Feb 2020 11:47:10 -0600 Subject: [tc][uc][all] Starting community-wide goals ideas for V series In-Reply-To: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> References: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> Message-ID: <170829c0eb9.118671b1a178111.1346517061236524495@ghanshyammann.com> ---- On Wed, 05 Feb 2020 12:39:17 -0600 Ghanshyam Mann wrote ---- > Hello everyone, > > We are in R14 week of Ussuri cycle which means It's time to start the > discussions about community-wide goals ideas for the V series. > > Community-wide goals are important in term of solving and improving a technical > area across OpenStack as a whole. It has lot more benefits to be considered from > users as well from a developers perspective. See [1] for more details about > community-wide goals and process. > > We have the Zuulv3 migration goal already accepted and pre-selected for v cycle. > If you are interested in proposing a goal, please write down the idea on this etherpad[2] > - https://etherpad.openstack.org/p/YVR-v-series-goals > > Accordingly, we will start the separate ML discussion over each goal idea. > > Also, you can refer to the backlogs of community-wide goals from this[3] and ussuri > cycle goals[4]. 
Updates: We have zuulv3 migration goal selected for V cycle[1] and waiting for more goals proposal. For V cycle, we need one more goal to select, please do not wait or hesitate to propose something you think we should solve in OpenStack as a whole community. You can write your idea on this etherpad - https://etherpad.openstack.org/p/YVR-v-series-goals NOTE: As per the new process[2], we can have as many goals proposed and we pick two goals per cycle. Do not limit yourself to propose your goal ideas which can be selected for V or future cyle. [1] https://governance.openstack.org/tc/goals/selected/victoria/index.html [2] https://governance.openstack.org/tc/goals/#process-details -gmann & njohnston > > NOTE: TC has defined the goal process schedule[5] to streamlined the process and > ready with goals for projects to plan/implement at the start of the cycle. I am > hoping to start that schedule for W cycle goals. > > [1] https://governance.openstack.org/tc/goals/index.html > [2] https://etherpad.openstack.org/p/YVR-v-series-goals > [3] https://etherpad.openstack.org/p/community-goals > [4] https://etherpad.openstack.org/p/PVG-u-series-goals > [5] https://governance.openstack.org/tc/goals/#goal-selection-schedule > > -gmann > From peter.matulis at canonical.com Wed Feb 26 18:00:42 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Wed, 26 Feb 2020 13:00:42 -0500 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: Thank you for placing your trust in me. On Wed, Feb 26, 2020 at 11:30 AM James Page wrote: > > Welcome to the team Peter! > > On Wed, Feb 26, 2020 at 4:12 PM Frode Nordahl wrote: >> >> +1 from me >> >> On Thu, Feb 13, 2020 at 6:09 PM James Page wrote: >> > >> > Hi Team >> > >> > I'd like to proposed Peter Matulis for membership of the charms-core team. >> > >> > Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. >> > >> > I think he would make a valuable addition to the charms-core review team! >> > >> > Cheers >> > >> > James >> > >> >> >> -- >> Frode Nordahl From fungi at yuggoth.org Wed Feb 26 18:01:47 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Feb 2020 18:01:47 +0000 Subject: Retiring x/devstack-plugin-bdd In-Reply-To: <1708286bda9.d0b5a14a177167.787593462879049969@ghanshyammann.com> References: <7fe13585-7684-080e-bf42-15126f9c965b@gmx.com> <1708273f504.fe6883e8176415.2528845782080050428@ghanshyammann.com> <20200226171113.eydamrjt2cj3ucah@yuggoth.org> <1708286bda9.d0b5a14a177167.787593462879049969@ghanshyammann.com> Message-ID: <20200226180147.pka53tyfimshktts@yuggoth.org> On 2020-02-26 11:23:53 -0600 (-0600), Ghanshyam Mann wrote: > ---- On Wed, 26 Feb 2020 11:11:14 -0600 Jeremy Stanley wrote ---- > > On 2020-02-26 11:03:22 -0600 (-0600), Ghanshyam Mann wrote: > > [...] > > > It is not under openstack governance so retire process not really > > > applicable to this. > > [...] > > > > The process described at > > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > > is still recommended, you can just ignore "Step 5: Remove Repository > > from the Governance Repository" in such cases. > > Even step1 is not required as requirement repo does not own x/* repos. 
Not exactly true, there used to be quite a few non-OpenStack projects making use of openstack/requirements synchronization. Since we reworked OpenStack's requirements coordination to no longer propose requirements.txt changes to projects, that step is probably not really applicable to anyone any longer, not even OpenStack projects. It just hasn't been updated to reflect our current reality. > It is left with cleaning up the infra job setup. And in my opinion, the more important parts, replacing the repository content with a README explaining that it's no longer being developed, and setting its Gerrit ACL to read-only so that new change proposals will be rejected instead of going into a black hole without the proposer ever realizing that nobody's going to look at it. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Wed Feb 26 18:04:13 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Feb 2020 18:04:13 +0000 Subject: [all][tc][uc] Uniting the TC and the UC In-Reply-To: <170827d65a0.e682b226176781.1443934150246120257@ghanshyammann.com> References: <1813f849-7428-9c4e-750d-eb9f1f3e8532@openstack.org> <61d0cf46-810b-212d-f825-e1011e7ef6b8@redhat.com> <1707f545be6.111aaf7f2137329.7862267052998714924@ghanshyammann.com> <290f018c-79e3-cf16-edba-8d95285f408f@redhat.com> <170827d65a0.e682b226176781.1443934150246120257@ghanshyammann.com> Message-ID: <20200226180413.m6hkotwwlinx7veo@yuggoth.org> On 2020-02-26 11:13:41 -0600 (-0600), Ghanshyam Mann wrote: [...] > :) 'no duties'. Then it makes sense to update the Bylaw to remove > UC reference from it and have single governance as TC. > > Because keeping a team with no duties is negative impression and I > will say either we have team working on its mission/duties as a > separate team or merged into TC with relevant duties or if those > duties are ok not to be done then it is clear that nobody depends > on those duties or someone else is doing it indirectly so closing > the team is the right approach. Even if it makes sense for the long term, that still leaves the question as to what to do in the meantime, for the years it will take to get those changes to the bylaws enacted. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amuller at redhat.com Wed Feb 26 18:25:05 2020 From: amuller at redhat.com (Assaf Muller) Date: Wed, 26 Feb 2020 13:25:05 -0500 Subject: Restarting neutron-l3-agent service In-Reply-To: References: Message-ID: On Wed, Feb 26, 2020 at 9:50 AM Grant Morley wrote: > > Hi all, > > We are currently seeing an issue where neutron isn't setting up ports > correctly on routers. We think we have narrowed it down to the > "neutron-l3-agent" on the neutron nodes that is causing the issue. > > We are wanting to restart the service but was wondering if doing so it > would cause any issues with currently running instances? > > We have 2 neutron nodes in HA mode ( one active and one standby ). > Currently the issue is only affecting newly built instances, so we don't > want to restart the service in hours if it is going to cause issues with > instances that are currently unaffected. > > We are running on OpenStack Queens and have built the platform using OSA. > > Many thanks for any help. 
The L3 agent is designed to restart in a way that doesn't impact running workloads. > > Regards, > > > From melwittt at gmail.com Wed Feb 26 18:56:51 2020 From: melwittt at gmail.com (melanie witt) Date: Wed, 26 Feb 2020 10:56:51 -0800 Subject: [nova][neutron] nova scheduling support for routed networks In-Reply-To: <1582718783.12170.6@est.tech> References: <1582718783.12170.6@est.tech> Message-ID: <268dcf9d-eff4-e165-5c3e-013f04ede1ac@gmail.com> On 2/26/20 04:06, Balázs Gibizer wrote: > Hi, > > There are multiple efforts to add scheduler support to the neutron > routed network feature. > > Matt's effort to have it as a scheduler pre-filter [1] > Adam's (rm_work) effort to have this as a a new scheduler hint and a > new scheduler filter [2] > > We had a good discussion on #openstack-nova [3] about the possible way > forward with Sean and Adam. Here is a summary and my proposed way > forward. > > Currently booting a server in a routed network setup works iff the > neutron port is pre-created with ip_allocation=deferred before the > server create. Nova then schedule the server without consider anything > about the possible multiple network segments of the port. Neutron will > do the segment assignment when the port is bond to the compute host. > > The problem is that once a port bound to a segment it cannot be bound > to another segment as that would change the IP of the port. So if a > server is migrated nova should select a destination host where the > _same segment_ is available. Therefore we need a way to influence the > scheduling based on the assigned segment id of the port. Both [1] and > [2] does that based on the fact that neutron creates nova host > aggregates for each network segment and those aggregates are mirrored > to placement as placement aggregates. > > For me the pre-filter approach[1] seems better as > * it avoids introducing a new scheduler hint > * it is not affected by the allocation candidate limit configuration > that effect scheduler filters > * it allows us to iterate towards an approach where neutron provides > the required aggregates of each port via the ports' resource_request > attribute > > The downside of the pre-filter approach is that right now it needs to > query the segments for each network / port the server has before the > each scheduling. I think we limit such performance impact by making > this pre-filter turned off by default via configuration. So a > deployment without routed networks does not need to pay this cost. > Later this cost can be nullified by extending neutron to specify the > needed aggregates in the port's resource_request. > > I'm now planning to take [1] over from Matt and finish it up by adding > functional test and documentation to it. > > I would also like to raise attention to the tempest patch that adds > basic CI coverage for the currently working server create scenario [4]. FWIW, this all sounds like the best approach to me, given the considerations you explained. Thanks for writing it up. Cheers, -melanie > [1] https://review.opendev.org/#/c/656885 > [2] https://review.opendev.org/#/c/709280 > [3] > http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2020-02-26.log.html#t2020-02-26T10:18:18 > [4] https://review.opendev.org/#/c/665155 > > > From alawson at aqorn.com Wed Feb 26 19:47:53 2020 From: alawson at aqorn.com (Adam Peacock) Date: Wed, 26 Feb 2020 11:47:53 -0800 Subject: [neutron] Shared tenant network allow duplicate IP's? 
Message-ID: 
Hey folks,

So I caught wind from a friend/colleague that allowing duplicate IP's in each tenant was now only achievable by creating a separate tenant network+subnet and assigning them separately to each individual tenant. This doesn't seem right to me and it doesn't scale.

*Looking for this:*

- tenant-network-id = abc (shared)
  - tenant1
    - vm1: 10.0.0.10
  - tenant2
    - vm1: 10.0.0.10

Am I missing something and this setup is no longer supported?
> I hope I'm wrong but I can't find documentation that speaks to this > specifically so would appreciate a link if anyone has it handy. > > Thanks! > > //adam > > *Adam Peacock* > > Principal Architect > Office: +1-916-794-5706 > That has never been supported. It is not feasible to have two VMs on the same network+subnet that have the same IP, even if they are owned by different tenants. That isn't a Neutron limitation, that's a limitation of IP-over-Ethernet that applies to all networks. Think of the non-virtualized equivalent, if you had a physical network subnet with two computers using the same IP address there would be a conflict, even if one computer was owned by Alice and the other computer was owned by Bob. There is no way to make that work in a virtualized cloud environment unless the two tenants are using different network subnets. -- Dan Sneddon | Senior Principal Software Engineer dsneddon at redhat.com | redhat.com/cloud dsneddon:irc | @dxs:twitter -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Feb 26 21:47:54 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 26 Feb 2020 13:47:54 -0800 Subject: [TaskFlow] running multiple engines on a shared thread In-Reply-To: References: Message-ID: Hi Sachin, I'm not 100% sure I understand your need, but I will attempt to answer and you can correct me if I am off base. Taskflow engines (you can create as many of these as you want) use an executor defined at flow load time. Here is a snippet from the Octavia code: self.executor = concurrent.futures.ThreadPoolExecutor( max_workers=CONF.task_flow.max_workers) eng = tf_engines.load( flow, engine=CONF.task_flow.engine, executor=self.executor, never_resolve=CONF.task_flow.disable_revert, **kwargs) The parts you are likely interested in are: 1. The executor. In this case we are using a concurrent.futures.ThreadPoolExecutor. We then set the max_workers setting to the number of threads we want in our taskflow engine thread pool. 2. During flow load, we define the engine to be 'parallel' (note: 'serial' is the default). This means that unordered flows will run in parallel as opposed to serially. 3. As noted in the documentation[1], You can share an executor between taskflow engines to share the thread pool. Finally, you want to use "unordered" flows or sub-flows to execute tasks concurrently. [1] https://docs.openstack.org/taskflow/latest/user/engines.html#parallel Michael On Wed, Feb 26, 2020 at 7:19 AM Sachin Laddha wrote: > > Hi, > > We are using taskflow to execute workflows. Each workflow is executed by a separate thread (using engine.run() method). This is limiting our capability to execute maximum number of workflows that can run in parallel. It is limited by the number of threads there in the thread pool. > > Most of the time, the workflow tasks are run by agents which could take some time to complete. Each engine is alive and runs on a dedicated thread. > > Is there any way to reuse or run multiple engines on one thread. The individual tasks of these engines can run in parallel. > > I came across iter_run method of the engine class. But not sure if that can be used for this purpose. > > Any help is highly appreciated. From fungi at yuggoth.org Wed Feb 26 22:07:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 26 Feb 2020 22:07:53 +0000 Subject: [neutron] Shared tenant network allow duplicate IP's? 
In-Reply-To: References: Message-ID: <20200226220753.4l6tefouuhky3mud@yuggoth.org> On 2020-02-26 13:26:43 -0800 (-0800), Dan Sneddon wrote: [...] > That has never been supported. It is not feasible to have two VMs on the > same network+subnet that have the same IP, even if they are owned by > different tenants. That isn't a Neutron limitation, that's a limitation of > IP-over-Ethernet that applies to all networks. > > Think of the non-virtualized equivalent, if you had a physical network > subnet with two computers using the same IP address there would be a > conflict, even if one computer was owned by Alice and the other computer > was owned by Bob. There is no way to make that work in a virtualized cloud > environment unless the two tenants are using different network subnets. It's probably useful to level-set on terminology, since not all these same words are used to mean the same things in different contexts. From Neutron's perspective "network" is your OSI layer 2 broadcast domain, and "subnet" is your OSI layer 3 addressing. Obviously to reuse the same layer 3 (IP) addresses on different systems you need them to reside on separate layer 2 (Ethernet) networks and have independent routing, most likely with some layer 3 address translation in place if they are ever expected to communicate with one another. As Dan points out, though, this has nothing to do with multi-tenancy and everything to do with the fundamentals rules of network engineering. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From najoy at cisco.com Wed Feb 26 22:11:25 2020 From: najoy at cisco.com (Naveen Joy (najoy)) Date: Wed, 26 Feb 2020 22:11:25 +0000 Subject: Networking-vpp 20.01 for VPP 20.01 is now available Message-ID: <3ADC2BFF-5306-4DFC-B301-4750CA7CAEDC@cisco.com> Hello All, We'd like to invite you to try out Networking-vpp 20.01. As many of you may already know, VPP is a fast user space forwarder based on the DPDK toolkit. VPP uses vector packet processing algorithms to minimize the CPU time spent on each packet to maximize throughput. Networking-vpp is a ML2 mechanism driver that controls VPP on your control and compute hosts to provide fast L2 forwarding under Neutron. This latest version of Networking-vpp is updated to work with VPP 20.01. In this release, we've made the below changes: - We've dropped support for Python 2.7 and updated the code to work with Python 3.6 or later. - We've updated the code to be compatible with VPP 20.01 API changes. - We've added VPP API versioning support using 2 data files. They are API whitelist file and API CRC manifest file. At startup, the vpp-agent will check to see if the API signature is compatible with the installed VPP. Also, at runtime, only the whitelisted API calls will be allowed. - We've fixed an issue with the eventlet that caused problems when binding tap interfaces into Linux bridge - We've been doing the usual round of bug fixes, clean-ups and updates. The code will work with VPP 20.01 and the OpenStack Stein release. The README [1] explains how you can try out VPP with Networking-vpp using devstack: the devstack plugin will deploy the mechanism driver and VPP and should give you a working system with a minimum of hassle. We will be continuing our development for VPP's 20.05 release. We welcome anyone who would like to come help us. 
-- Jerome, Ian & Naveen [1] https://opendev.org/x/networking-vpp/src/branch/master/README.rst -------------- next part -------------- An HTML attachment was scrubbed... URL: From amotoki at gmail.com Wed Feb 26 22:59:18 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 27 Feb 2020 07:59:18 +0900 Subject: [release][tc][horizon] xstatic repositories marked as deprecated Message-ID: Hi, I noticed some xstatic repositories [1] under the horizon team are marked as deprecated. They are marked as deprecated in [2] as they were never released, but the actual situation looks different. They are used in horizon requirements and the contents of these repositories are same as those in the corresponding PyPI modules. I guess they were released to PyPI based on the repositories but the horizon team just forgot to push the corresponding tags. I would like to drop 'deprecated' mark from the repositories and move them to the proper situation. Does it make sense? If so, what is the right solution to do this? In my understanding, we need to add tags corresponding to the deliverables and to add deliverable files in the releases repo. What I am not sure is whether we can push a tag to a repository without pushing deliverables to PyPI? Or do we need to create another release to tag them in their repositories? [1] Such xstatic repositories are: - xstatic-bootstrap-datepicker - xstatic-hogan - xstatic-jquery-migrate - xstatic-jquery.quicksearch - xstatic-jquery.tablesorter - xstatic-rickshaw - xstatic-spin [2] https://review.opendev.org/#/c/629889/ Thanks, Akihiro Motoki (irc: amotoki) From alawson at aqorn.com Thu Feb 27 03:35:53 2020 From: alawson at aqorn.com (Adam Peacock) Date: Wed, 26 Feb 2020 19:35:53 -0800 Subject: [neutron] Shared tenant network allow duplicate IP's? In-Reply-To: <20200226220753.4l6tefouuhky3mud@yuggoth.org> References: <20200226220753.4l6tefouuhky3mud@yuggoth.org> Message-ID: What I'm referring to here are two separate tenants in the same region - each with their own unique Layer 2 broadcast domain but sharing the same subnet definition - with DHCP requiring the use of namespaces and ... the other element escapes me. But subnets don't necessarily presume Layer 3. Routing/switching between subnets yes, the use of a subnet definition is not. This used to be supported as far back as the Icehouse release, just not clear when the support for this configuration was changed or removed. //adam /a/dam, *Adam Peacock* Principal Architect Office: +1-916-794-5706 On Wed, Feb 26, 2020 at 2:13 PM Jeremy Stanley wrote: > On 2020-02-26 13:26:43 -0800 (-0800), Dan Sneddon wrote: > [...] > > That has never been supported. It is not feasible to have two VMs on the > > same network+subnet that have the same IP, even if they are owned by > > different tenants. That isn't a Neutron limitation, that's a limitation > of > > IP-over-Ethernet that applies to all networks. > > > > Think of the non-virtualized equivalent, if you had a physical network > > subnet with two computers using the same IP address there would be a > > conflict, even if one computer was owned by Alice and the other computer > > was owned by Bob. There is no way to make that work in a virtualized > cloud > > environment unless the two tenants are using different network subnets. > > It's probably useful to level-set on terminology, since not all > these same words are used to mean the same things in different > contexts. 
From Neutron's perspective "network" is your OSI layer 2 > broadcast domain, and "subnet" is your OSI layer 3 addressing. > Obviously to reuse the same layer 3 (IP) addresses on different > systems you need them to reside on separate layer 2 (Ethernet) > networks and have independent routing, most likely with some layer 3 > address translation in place if they are ever expected to > communicate with one another. > > As Dan points out, though, this has nothing to do with multi-tenancy > and everything to do with the fundamentals rules of network > engineering. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Feb 27 05:09:02 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 27 Feb 2020 06:09:02 +0100 Subject: [release][tc][horizon] xstatic repositories marked as deprecated In-Reply-To: References: Message-ID: Hi Akihiro, If you only want to create tags in these repositories, and don't need tarball and PyPI releases, you could follow the approach used for kayobe-config repositories with the no-artifact-build-job flag: https://review.opendev.org/#/c/700174/2/deliverables/train/kayobe.yaml Best wishes, Pierre On Thu, 27 Feb 2020 at 00:06, Akihiro Motoki wrote: > > Hi, > > I noticed some xstatic repositories [1] under the horizon team are > marked as deprecated. > > They are marked as deprecated in [2] as they were never released, but > the actual situation looks different. > They are used in horizon requirements and the contents of these > repositories are same as those in the corresponding PyPI modules. > I guess they were released to PyPI based on the repositories but the > horizon team just forgot to push the corresponding tags. > > I would like to drop 'deprecated' mark from the repositories and move > them to the proper situation. > Does it make sense? > > If so, what is the right solution to do this? > In my understanding, we need to add tags corresponding to the deliverables and > to add deliverable files in the releases repo. What I am not sure is > whether we can > push a tag to a repository without pushing deliverables to PyPI? > Or do we need to create another release to tag them in their repositories? > > [1] Such xstatic repositories are: > - xstatic-bootstrap-datepicker > - xstatic-hogan > - xstatic-jquery-migrate > - xstatic-jquery.quicksearch > - xstatic-jquery.tablesorter > - xstatic-rickshaw > - xstatic-spin > [2] https://review.opendev.org/#/c/629889/ > > Thanks, > Akihiro Motoki (irc: amotoki) > From tony.pearce at cinglevue.com Thu Feb 27 08:19:14 2020 From: tony.pearce at cinglevue.com (Tony Pearce) Date: Thu, 27 Feb 2020 16:19:14 +0800 Subject: Cinder / Nova - Select cinder backend based on nova availability zone? In-Reply-To: <15319131582685867@iva2-5f9649d2845f.qloud-c.yandex.net> References: <4384601582597455@iva8-89610aea0561.qloud-c.yandex.net> <15319131582685867@iva2-5f9649d2845f.qloud-c.yandex.net> Message-ID: Hi Rui, Many thanks for sending over that information to me. It looks like the missing piece of the puzzle was nova.conf `cross_az_attach`. I also reached out to the IRC groups and want to say thank you for the help in the #openstack-cinder. The Mirantis blog was very helpful (and also mentions my exact use-case) but also those docs.openstack links mention some bugs with the `cross_az_attach`. I've applied the config, and tests show that it works great and the desired outcome is achieved. 
But I'm also going to look into and try the other suggestions from the mirantis blog page. Thanks again for your reply on this :) Regards, *Tony Pearce* On Wed, 26 Feb 2020 at 10:57, rui zang wrote: > Hi Tony, > > I do not have an environment at hand to verify. So I was replying based on > what I understood. > However I dig a little further and found out there is indeed some > alignment between nova AZ and cinder AZ. > > > https://docs.openstack.org/nova/latest/admin/availability-zones.html#resource-affinity > More specifically > https://docs.openstack.org/nova/latest/configuration/config.html#cinder.cross_az_attach > > And also this helped me a lot in understanding the whole picture : > https://www.mirantis.com/blog/the-first-and-final-word-on-openstack-availability-zones/ > > Hope it helps. > > Thanks, > Zang, Rui > > > 25.02.2020, 16:56, "Tony Pearce" : > > Hi Rui, thank you for the reply. I did find that link previously but I was > unable to get it working. Because I am unable to get it working, I tried to > use the `cinder_img_volume_type` in the glance image metadata to select a > backend (which is also not working - bug: > https://bugs.launchpad.net/cinder/+bug/1864616) > > > When I try and use the RESKEY:availability_zones in the volume type it > does not work for me and what is happening is that the volume type of > `__DEFAULT__` is being selected. > > I peformed these steps to test, are these steps correct? > > 1. create the 2 cinder backends > 2. create 2 volume types (one for each backend). > 3. add the RESKEY:availability_zones extra specs to the volume type, like: > RESKEY:availability_zones="host1" for the first volume type > and > RESKEY:availability_zones="host2" for the 2nd volume type > 4. create 2 host aggrigates with AZ "host1" and "host2" for each host1 / > host2 and assign a host to host1 > 5. try to launch an instance and choose the availability zone "host1" from > the dropdown > > 6. result = instance goes error state > volume goes into in-use > volume shows "host" as storage at backend and the "backend" is incorrectly > chosen > In addition to that, the "volume type" selected (which I expected to be > host1) is showing as __DEFAUT__ > > > > At this point I am not sure if I am doing something incorrect and this is > the reason why it is not working, or, I have correct config and this is a > bug experience. > > > > I'd be grateful if you or anyone else that reads this could confirm. > > > > Thanks and regards > > > > *Tony Pearce* | > *Senior Network Engineer / Infrastructure LeadCinglevue International > * > > > > Email: tony.pearce at cinglevue.com > Web: http://www.cinglevue.com > > > > *Australia* > 1 Walsh Loop, Joondalup, WA 6027 Australia. > > > > Direct: +61 8 6202 0036 | Main: +61 8 6202 0024 > > Note: This email and all attachments are the sole property of Cinglevue > International Pty Ltd. (or any of its subsidiary entities), and the > information contained herein must be considered confidential, unless > specified otherwise. If you are not the intended recipient, you must not > use or forward the information contained in these documents. If you have > received this message in error, please delete the email and notify the > sender. > > > > > > > > On Tue, 25 Feb 2020 at 10:24, rui zang wrote: > > I don't think there is auto-alignment between nova az and cinder az. > Probably you may want to look at this > https://docs.openstack.org/cinder/rocky/admin/blockstorage-availability-zone-type.html > Cinder AZ can be associated with volume types. 
When you create a VM, you > can specify the VM nova az (eg. host1), volume type (cinder az, eg, also > host1), thus by aligning nova az and cinder az manually, maybe your goal > can be achieved. I believe there are also other similar configuration ways > to make it work like this. > > Thanks, > Zang, Rui > > 24.02.2020, 16:53, "Tony Pearce" : > > > > Apologies in advance if this seems trivial but I am looking for some > direction on this and I may have found a bug while testing also. > > > Some background - I have 2 physical hosts which I am testing with (host1, > host2) and I have 2 separate cinder backends (backend1, backend2). Backend1 > can only be utilised by host1. Same for backend2 - it can only be utilised > by host2. So they are paired together like: host1:backend1 host2:backend2 > > > So I wanted to select a Cinder storage back-end based on nova availability > zone and to do this when creating an instance through horizon (not creating > a volume directly). Also I wanted to avoid the use of metadata input on > each instance create or by using metadata from images (such as > cinder_img_volume_type) [2] . Because I can foresee a necessity to then > have a number of images which reference each AZ or backend individually. > > > > > - Is it possible to select a backend based on nova AZ? If so, could > anyone share any resources to me that could help me understand how to > achieve it? > > > > Because I failed at achieving the above, I then decided to use one way > which had worked for me in the past, which was to use the image metadata > "cinder_img_volume_type". However I find that this is not working. The > “default” volume type is selected (if cinder.conf has it) or if no default, > then `__DEFAULT__` is being selected. The link at [2] states that first, a > volume type is used based on the volume type selected and if not chosen/set > then the 2nd method is "cinder_img_volume_type" from the image metadata and > then the 3rd and final is the default from cinder.conf. > > > > I have tested with fresh deployment using Kayobe as well as RDO’s > packstack. > > Openstack version is *Train* > > > Steps to reproduce: > 1. Install packstack > > 2. Update cinder.conf with enabled_backends and the [backend] > > 3. Add the volume type to reference the backend (for reference, I call > this volume type `number-1`) > > 4. Upload an image and add metadata `cinder_img_volume_type` and the name > as mentioned in step 3: number-1 > > 5. Try and create an instance using horizon. Source = image and create new > volume > > 6. Result = volume type / backend as chosen in the image metadata is not > used and instance goes into error status. > > > After fresh-deploying the RDO Packstack, I enabled debug logs and tested > again. In the cinder-api.log I see “"volume_type": null,” and then the next > debug log immediately after logged as “Create volume request body:” has > “volume_type': None”. > > > I was searching for a list of the supported image metadata, in case it had > changed but the pages seem empty one rocky/stein/train [3] or not yet > updated. > > > Selecting backend based on nova AZ:: > > > I was searching how to achieve this and I came across this video on the > subject of AZs [1]. Although it seems only in the context of creating > volumes (not with creating instances with volume from an image, for > example). > > I have tried creating a host aggregate in nova, with AZ name `host1az`. > I've also created a backend in Cinder (cinder.conf) with > `backend_availability_zone = host1az`. 
But this does not appear to achieve > the desired result, either and the cinder api logs are showing > “"availability_zone": null” during the volume create part of the launch > instance from Horizon. > > I also tried setting RESKEY [3] in the volume type, but again similar > situation seen; although I dont think this option is the correct context > for what I am attempting. > > > Could anyone please nudge me in the right direction on this? Any pointers > appreciated at this point. Thanks in advance. > > References: > > [1] *https://www.youtube.com/watch?v=a5332_Ew9JA* > > > [2] *https://docs.openstack.org/cinder/latest/cli/cli-manage-volumes.html* > > > [3] > *https://docs.openstack.org/cinder/train/contributor/api/cinder.api.schemas.volume_image_metadata.html* > > > > [4] > *https://docs.openstack.org/cinder/rocky/admin/blockstorage-availability-zone-type.html* > > > > > Regards, > > *Tony Pearce* > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From arnaud.morin at gmail.com Thu Feb 27 10:06:21 2020 From: arnaud.morin at gmail.com (Arnaud Morin) Date: Thu, 27 Feb 2020 10:06:21 +0000 Subject: [ops] nova wsgi config In-Reply-To: <9cbaefe7-fcd0-8c5f-3c6a-b2cda278e01a@debian.org> References: <20191022101943.GG14827@sync> <659657f1-89ba-63b6-f2dc-6d8c42430d08@goirand.fr> <20191024090645.GH14827@sync> <9cbaefe7-fcd0-8c5f-3c6a-b2cda278e01a@debian.org> Message-ID: <20200227100621.GS27149@sync> Hey all, I am back to this nova processes/threads configuration. Here is what I have in my apache wsgi config: $ grep WSGIDaemonProcess /etc/apache2/sites-enabled/10-nova-api.conf WSGIDaemonProcess nova-api display-name=nova-api group=nova processes=1 python-home=/opt/openstack/nova python-path=/opt/openstack/nova/ threads=1 user=nova As you can see, I set 1 process, and 1 thread. But here is the result after restarting apache: $ ps -ef | grep nova nova 8535 8527 4 09:10 ? 00:02:15 nova-api -k start $ ps -o pid -o thcount -p8535 PID THCNT 8535 4 So, nova-api is running 4 threads! I am running nova stein. I checked in nova code but I am far from beeing an expert on the subjet, so does anyone can give me a clue on why nova is doing threading, while it should not? Thanks in advance, -- Arnaud Morin On 25.10.19 - 01:15, Thomas Goirand wrote: > On 10/24/19 11:06 AM, Arnaud Morin wrote: > > Hey Thomas, > > > > Thank you for your example. > > If I understand well, you are using 4 processes in the uwsgi config. > > I dont see any number of thread, does it mean the uwsgi is not spawning > > threads but only processes? ( so there is only 1 thread per process?) > > > > Thanks, > > Hi Arnaud, > > If you carefully read the notes for nova, they are saying that we should > leave the number of thread to 1, otherwise there may be some eventlet > reconnection to rabbit issues. > > It's however fine to increase the number of processes. > > Cheers, > > Thomas Goirand (zigo) > From eblock at nde.ag Thu Feb 27 10:55:28 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 27 Feb 2020 10:55:28 +0000 Subject: Train release notes: deprecated option show_multiple_locations missing? Message-ID: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> Hi, I'm currently evaluating our new OpenStack environment (Train), testing the usual scenarios. In the past I've had issues with nova and rbd backend, when the compute nodes used to download rbd images just to re-import them back to the ceph cluster instead of simply cloning an image. 
The cause was the show_multiple_locations option that was marked deprecated since Pike but in fact it's still necessary. The release notes of Pike, Queens, Rocky and Stein all mention that the workaround is still required: > The show_multiple_locations configuration option remains deprecated > in this release, but it has not been removed. > (It had been > scheduled for removal in the Pike release.) Please keep a watch on > the Glance release notes and the > glance-specs repository to stay > informed about developments on this issue. The Train release notes don't mention this option anymore but my tests show that it's still required to avoid unnecessary rbd exports/imports. Maybe the release notes for Train should be updated to cover this issue. In case I missed any further information that already addresses this I apologize. Best regards, Eugen From skaplons at redhat.com Thu Feb 27 11:02:25 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 27 Feb 2020 12:02:25 +0100 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: <7e3a1be9-7191-d18c-034c-d576095c6728@redhat.com> References: <98a8ffb8-e2ba-33eb-eae9-ad30db843c5c@openstack.org> <7e3a1be9-7191-d18c-034c-d576095c6728@redhat.com> Message-ID: Hi, > On 26 Feb 2020, at 02:56, Zane Bitter wrote: > > On 25/02/20 12:33 pm, Michael Johnson wrote: >> So, maybe a hybrid. >> For example, have a standard "Contributing.rst", with fixed >> template/content, that can link off to these other guides? (I feel >> like this might be the original idea and I just missed the concept, >> lol) >> Maybe we need two templates. one for "contributing.rst" in the root >> (boilerplate-ish) and one template for inside the documentation tree >> ("how-to-contribute.rst ???). This seems similar to what we have >> today, but with a more formal template for the "how-to-contribute" >> landing page. > > That sounds very similar to what's actually proposed for option 2: > https://review.opendev.org/708672 > >> To some degree there is a strange overlap here with the existing >> "contributor" section of the documentation that has the gory coding >> details, etc. >> I still lean towards having the "slightly more verbose" version in the >> documentation tree as then we can use the sphinx magic for glossary >> linking, internal links, etc. > > Agreed. It should also be noted that some of the people reading CONTRIBUTING.rst will just be doing so in their local checkout/tarball using cat/less. Making them read full on rst formatting, complete with comments and stuff is probably not the best experience. The current document (e.g. https://opendev.org/openstack/nova/raw/branch/master/CONTRIBUTING.rst) is actually very good at that and AFAIK is actually pretty consistent across all of our repos. > > I also just don't think that the full on contribution template is the right information for the audience that will read CONTRIBUTING.rst. If someone is looking at that file, it's because they either have a local checkout or a tarball from somewhere and they want to know how to make their first contribution, or because they're already trying to open a PR on GitHub for their first contribution. They need to know where the canonical repo and the bug tracker is and how to submit a bug or patch; they don't need to know what the PTL's duties are at that moment. I totally agree with that. IMHO we should keep top-level CONTRIBUTING.rst file short and link to more detailed guide which will be in contributor/contributing.rst. > > cheers, > Zane. 
> >> Michael >> On Tue, Feb 25, 2020 at 2:56 AM Thierry Carrez wrote: >>> >>> Kendall Nelson wrote: >>>> There is some debate about where the content of the actual docs should >>>> live. This grew out of the discussion about how to setup the includes so >>>> that the correct info shows up where we want it. 'Correct' in that last >>>> sentence being where the debate is. There are two main schools of thought: >>>> >>>> 1. All the content of the docs should live in the top level >>>> CONTRIBUTING.rst and the sphinx glue should live in >>>> doc/source/contributor/contributing.rst. A patch has already been merged >>>> for this approach[1]. There is also a patch to update the goal to >>>> match[2]. This approach keeps all the info in one place so that if >>>> things change in the future, its easier to keep things straight. All the >>>> content is also more easily discoverable when looking at a repo in >>>> GitHub (or similar) or checking out the code because it is at the top >>>> most level of the repo and not hidden in the docs sub directory. >>>> >>>> 2. The new patch[3] says that the content should live in >>>> /doc/source/contributor/contributing.rst and a skeleton with only the >>>> most important version should live in the top level CONTRIBUTING.rst. >>>> This approach argues that people don't want to read a wall of text when >>>> viewing the code on GitHub (or similar) or checking it out and looking >>>> at the top level CONTRIBUTING.rst and as such only the important details >>>> should be kept in that file. These important details being that we don't >>>> accept patches in github and where to report bugs (both of which are >>>> included in the larger format of the content). >>>> >>>> So what do people think? Which approach do they prefer? >>> >>> I personally prefer a single page approach (school 1). That said, I >>> think we need to be very careful to keep this page to a minimum, to >>> avoid the "wall of text" effect. >>> >>> In particular, I think: >>> >>> - "Communication" / "Contacting the Core team" could be collapsed into a >>> single section >>> >>> - "Task tracking" / "Reporting a bug" could be collapsed into a single >>> section >>> >>> - "Project team lead duties" sounds a bit overkill for a first-contact >>> doc, can probably be documented elsewhere. >>> >>> - Sections could be reordered in order of likely involvement: How to >>> talk with the team, So you want to report a bug, So you want to propose >>> a change, So you want to propose a new feature. >>> >>>> I am anxious to get this settled ASAP so that projects have time to >>>> complete the goal in time. >>> >>> Agree it would be good to come up with a consensus on this ASAP. Maybe >>> the TC can settle it at next week meeting. 
>>> >>> -- >>> Thierry Carrez (ttx) >>> > > — Slawek Kaplonski Senior software engineer Red Hat From skaplons at redhat.com Thu Feb 27 11:32:50 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 27 Feb 2020 12:32:50 +0100 Subject: [ptl][release][stable][EM] Extended Maintenance - Rocky In-Reply-To: References: <8c51036b-050a-3aa7-ef97-4f89c1ba7fe0@est.tech> <79F22CA6-2830-4F21-BCBD-FE2B1C96E249@redhat.com> <1468c6c8-8f2b-b27f-1ba8-5bc6af9ce145@gmx.com> Message-ID: <7D2BB472-E1C0-4BF7-930D-33722B57925C@redhat.com> Hi, > On 25 Feb 2020, at 15:01, Bernard Cafarelli wrote: > > On Mon, 24 Feb 2020 at 22:02, Sean McGinnis wrote: > On 2/24/20 2:56 PM, Slawek Kaplonski wrote: > > Hi, > > > > In Neutron we still have some patches to merge to stable/rocky branch before we will cut last release and go to EM. Will it be ok if we will do it before end of this week? > > That should be fine. As long as these are something in the works and > just waiting for them to make it through the review and gating process, > we should be able to wait a few days so they are included in a final > release. > Indeed, we are down to 2 backports [0], both ready to get in, just waiting for a gate fix [2] So our final release for some Rocky deliverables is at [1] and after that we can go to EM with [2]. Thx Bernard, Amotoki and the rest of the neutron stable team for help with that :) > > Sean > > > [0] https://review.opendev.org/705400 and https://review.opendev.org/705199 > [1] https://bugs.launchpad.net/neutron/+bug/1864471 > > -- > Bernard Cafarelli [1] https://review.opendev.org/#/c/710080/ [2] https://review.opendev.org/#/c/709901/ — Slawek Kaplonski Senior software engineer Red Hat From smooney at redhat.com Thu Feb 27 12:12:44 2020 From: smooney at redhat.com (Sean Mooney) Date: Thu, 27 Feb 2020 12:12:44 +0000 Subject: [neutron] Shared tenant network allow duplicate IP's? In-Reply-To: References: <20200226220753.4l6tefouuhky3mud@yuggoth.org> Message-ID: On Wed, 2020-02-26 at 19:35 -0800, Adam Peacock wrote: > What I'm referring to here are two separate tenants in the same region - > each with their own unique Layer 2 broadcast domain but sharing the same > subnet definition - with DHCP requiring the use of namespaces and ... the > other element escapes me. But subnets don't necessarily presume Layer 3. No, subnets are specifically how we model L3 address ranges; they don't assume IPv4 or IPv6, at least not any more. Also, a network in Neutron generally does model the L2 broadcast domain; however, if you are using an L3 network backend like Calico, a network models a set of network segments where each segment is a broadcast domain. > Routing/switching between subnets yes, the use of a subnet definition is > not. The use of subnet definitions has always been required if you are using Neutron as your IPAM service. You cannot create a port and set a fixed IP on the port without first adding the port to a network and a subnet, even in the provider routing case with Neutron. > > This used to be supported as far back as the Icehouse release, just not > clear when the support for this configuration was changed or removed. Can you provide the exact set of commands you used to create this configuration? I have worked on OpenStack since Havana, and even Quantum required you to define subnets. Is it possible you are thinking of nova-network?
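To make the original point concrete, here is a minimal, illustrative sequence (the project names are made up, and passing --project requires admin rights) showing that two projects can reuse the same address range as long as each range lives on its own network and subnet:

    openstack network create --project alice net-alice
    openstack subnet create --project alice --network net-alice \
        --subnet-range 192.0.2.0/24 subnet-alice

    openstack network create --project bob net-bob
    openstack subnet create --project bob --network net-bob \
        --subnet-range 192.0.2.0/24 subnet-bob

With the default allow_overlapping_ips setting, Neutron accepts the two overlapping 192.0.2.0/24 subnets because they sit on separate networks (separate broadcast domains); what it will not do is hand out the same fixed IP twice within a single network+subnet.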
By the way, there is a convenience feature, called "get me a network", for those that come from a nova-network world, to automate this. It was originally proposed in Mitaka https://specs.openstack.org/openstack/neutron-specs/specs/mitaka/get-me-a-network.html and implemented in Newton https://specs.openstack.org/openstack/nova-specs/specs/newton/implemented/get-me-a-network.html and the docs are here https://docs.openstack.org/neutron/train/admin/config-auto-allocation.html

Basically there are 3 steps. As an operator you do

    openstack network set public --default

to set up a default network which will be used to attach a Neutron router to. You then create a default subnet pool that will be used to allocate subnets to auto-created networks:

    openstack subnet pool create --share --default \
        --pool-prefix 192.0.2.0/24 --default-prefix-length 26 \
        shared-default

and then a new tenant just does

    openstack network auto allocated topology create --or-show

This will create a router and a network, allocate a subnet to the network, and then attach the router to the default network as the upstream network and to the newly created network/subnet.

> > //adam > > /a/dam, > > *Adam Peacock* > > Principal Architect > Office: +1-916-794-5706 > > > On Wed, Feb 26, 2020 at 2:13 PM Jeremy Stanley wrote: > > > On 2020-02-26 13:26:43 -0800 (-0800), Dan Sneddon wrote: > > [...] > > > That has never been supported. It is not feasible to have two VMs on the > > > same network+subnet that have the same IP, even if they are owned by > > > different tenants. That isn't a Neutron limitation, that's a limitation > > > > of > > > IP-over-Ethernet that applies to all networks. > > > > > > Think of the non-virtualized equivalent, if you had a physical network > > > subnet with two computers using the same IP address there would be a > > > conflict, even if one computer was owned by Alice and the other computer > > > was owned by Bob. There is no way to make that work in a virtualized > > > > cloud > > > environment unless the two tenants are using different network subnets. > > > > It's probably useful to level-set on terminology, since not all > > these same words are used to mean the same things in different > > contexts. From Neutron's perspective "network" is your OSI layer 2 > > broadcast domain, and "subnet" is your OSI layer 3 addressing. > > Obviously to reuse the same layer 3 (IP) addresses on different > > systems you need them to reside on separate layer 2 (Ethernet) > > networks and have independent routing, most likely with some layer 3 > > address translation in place if they are ever expected to > > communicate with one another. > > > > As Dan points out, though, this has nothing to do with multi-tenancy > > and everything to do with the fundamentals rules of network > > engineering. > > -- > > Jeremy Stanley > > From rosmaita.fossdev at gmail.com Thu Feb 27 14:01:46 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 27 Feb 2020 09:01:46 -0500 Subject: Train release notes: deprecated option show_multiple_locations missing? In-Reply-To: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> References: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> Message-ID: <54050f6b-0a3f-db47-3960-f8e599789fbc@gmail.com> On 2/27/20 5:55 AM, Eugen Block wrote: > Hi, > > I'm currently evaluating our new OpenStack environment (Train), testing > the usual scenarios.
In the past I've had issues with nova and rbd > backend, when the compute nodes used to download rbd images just to > re-import them back to the ceph cluster instead of simply cloning an image. > > The cause was the show_multiple_locations option that was marked > deprecated since Pike but in fact it's still necessary. The release > notes of Pike, Queens, Rocky and Stein all mention that the workaround > is still required: > >> The show_multiple_locations configuration option remains deprecated in >> this release, but it has not been removed. > (It had been scheduled >> for removal in the Pike release.) Please keep a watch on the Glance >> release notes and the > glance-specs repository to stay informed about >> developments on this issue. > > The Train release notes don't mention this option anymore but my tests > show that it's still required to avoid unnecessary rbd exports/imports. You are correct, the option has not been removed and the workaround is still required. > Maybe the release notes for Train should be updated to cover this issue. > In case I missed any further information that already addresses this I > apologize. No, you didn't miss anything. We didn't include anything in the Train release notes because there was nothing new to report, and we didn't want to clutter up the release notes with a non-announcement. In retrospect, though, it might have been a good idea to continue the series of non-announcements. > Best regards, > Eugen > > From thierry at openstack.org Thu Feb 27 14:04:11 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 27 Feb 2020 15:04:11 +0100 Subject: [release][tc][horizon] xstatic repositories marked as deprecated In-Reply-To: References: Message-ID: Pierre Riteau wrote: > If you only want to create tags in these repositories, and don't need > tarball and PyPI releases, you could follow the approach used for > kayobe-config repositories with the no-artifact-build-job flag: > https://review.opendev.org/#/c/700174/2/deliverables/train/kayobe.yaml If I understand correctly, this flag only avoids failing tests if no release job is defined in Zuul for the repository (which is one of the things the release test jobs check for). It would not prevent defined release jobs from running if you added a tag. The way we've been handling this in the past was to ignore past releases (since they are not signed by the release team), and push a new one through the releases repository. It should replace the unofficial one in PyPI and make sure all is in order. In parallel, remove the "deprecated" flag from governance. -- Thierry Carrez (ttx) From eblock at nde.ag Thu Feb 27 14:15:42 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 27 Feb 2020 14:15:42 +0000 Subject: Train release notes: deprecated option show_multiple_locations missing? In-Reply-To: <54050f6b-0a3f-db47-3960-f8e599789fbc@gmail.com> References: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> <54050f6b-0a3f-db47-3960-f8e599789fbc@gmail.com> Message-ID: <20200227141542.Horde.wy-T9a81KFRMq_KCRmhjlvS@webmail.nde.ag> Thank you for the confirmation, Brain. > No, you didn't miss anything. We didn't include anything in the > Train release notes because there was nothing new to report, and we > didn't want to clutter up the release notes with a non-announcement. > In retrospect, though, it might have been a good idea to continue > the series of non-announcements. 
I entirely understand the non-cluttering approach, and if you have stumbled across this before you'll probably find the solution quickly again. On the other hand the non-announcements would at least deliver some kind of consistency. ;-) Zitat von Brian Rosmaita : > On 2/27/20 5:55 AM, Eugen Block wrote: >> Hi, >> >> I'm currently evaluating our new OpenStack environment (Train), >> testing the usual scenarios. In the past I've had issues with nova >> and rbd backend, when the compute nodes used to download rbd images >> just to re-import them back to the ceph cluster instead of simply >> cloning an image. >> >> The cause was the show_multiple_locations option that was marked >> deprecated since Pike but in fact it's still necessary. The release >> notes of Pike, Queens, Rocky and Stein all mention that the >> workaround is still required: >> >>> The show_multiple_locations configuration option remains >>> deprecated in this release, but it has not been removed. > (It had >>> been scheduled for removal in the Pike release.) Please keep a >>> watch on the Glance release notes and the > glance-specs >>> repository to stay informed about developments on this issue. >> >> The Train release notes don't mention this option anymore but my >> tests show that it's still required to avoid unnecessary rbd >> exports/imports. > > You are correct, the option has not been removed and the workaround > is still required. > >> Maybe the release notes for Train should be updated to cover this >> issue. In case I missed any further information that already >> addresses this I apologize. > > No, you didn't miss anything. We didn't include anything in the > Train release notes because there was nothing new to report, and we > didn't want to clutter up the release notes with a non-announcement. > In retrospect, though, it might have been a good idea to continue > the series of non-announcements. > >> Best regards, >> Eugen >> >> From pierre at stackhpc.com Thu Feb 27 14:50:52 2020 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 27 Feb 2020 15:50:52 +0100 Subject: [release][tc][horizon] xstatic repositories marked as deprecated In-Reply-To: References: Message-ID: On Thu, 27 Feb 2020 at 15:11, Thierry Carrez wrote: > > Pierre Riteau wrote: > > If you only want to create tags in these repositories, and don't need > > tarball and PyPI releases, you could follow the approach used for > > kayobe-config repositories with the no-artifact-build-job flag: > > https://review.opendev.org/#/c/700174/2/deliverables/train/kayobe.yaml > If I understand correctly, this flag only avoids failing tests if no > release job is defined in Zuul for the repository (which is one of the > things the release test jobs check for). It would not prevent defined > release jobs from running if you added a tag. > > The way we've been handling this in the past was to ignore past releases > (since they are not signed by the release team), and push a new one > through the releases repository. It should replace the unofficial one in > PyPI and make sure all is in order. > > In parallel, remove the "deprecated" flag from governance. My bad, I misunderstood the problem. From now on I shall drink my morning coffee before replying to the list. 
From moguimar at redhat.com Thu Feb 27 15:13:47 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Thu, 27 Feb 2020 16:13:47 +0100 Subject: [oslo][cache] oslo.cache hardening Message-ID: Hi all, Whenever deploying a service inside a network, basic security concerns come to mind: Is the network trusted? Can we send data in plaintext? Is the service available only to those intended to use it? Can the service itself or others have access to the data? That is no exception for caching servers and a while ago, me and Lance Bragstad started a discussion about this topic. *Possible solutions* *Protecting data in transit using TLS* Requires a backend with TLS support. Since version 1.5.13, Memcached supports authentication and encryption via TLS. This feature requires: OpenSSL 1.1.0 or later; A Memcached client with TLS support; A Memcached server built using ./configure --enable-tls. Encrypting the traffic protects the data in transit from reading and tampering. The complexity impact is that each Memcached server will need a valid certificate. The performance impact is the TLS overhead itself. Performing client authentication protects the server from unauthorized read and write operations. The complexity impact is that each Memcached client will need a valid certificate. The performance impact is bigger due to the extra steps to authenticate both sides. This approach doesn't protect the data held in memory by Memcached in any other way. *Authentication using SASL* Requires a backend with SASL support. Since version 1.4.3, Memcached supports authentication via SASL. This feature requires: A Memcached client with SASL support; A Memcached server built using ./configure --enable-sasl. This approach protects the server from unauthorized read and write operations. The complexity and performance impact is according to SASL usage. This approach doesn't protect the data in transit or held in memory by Memcached in any other way. *Encrypting data before storing* Requires *NO* extra features in the backend. This approach consists of encrypting the data before sending it to the caching servers. The complexity impact is dealing with key sharing for the encryption/decryption process. The performance impact depends on the algorithms used for encryption. This approach protects the data both in transit and held in memory by caching servers, but the key sharing is more prone to setup errors than the TLS or the SASL approach. --- After considering the possible solutions, we decided to tackle the TLS path first. We did an initial analysis of oslo.cache backends that use Memcached together with Hervé Beraud here: https://etherpad.openstack.org/p/oslo-cache-tls-support-worksheet Since python-binary-memcached already has SASL support, we thought it to be a good first candidate to implement TLS support and last month I had it merged here: https://github.com/jaysonsantos/python-binary-memcached/pull/211 We are now looking for more people interested in the discussion and help to push changes forward. -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... 
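As a concrete illustration of the third option above (encrypting data before storing), independent of oslo.cache internals, here is a minimal, hypothetical sketch using python-memcached and the cryptography library's Fernet; it only shows the shape of the approach and deliberately ignores the key distribution problem called out above:

    # Illustrative only: values are ciphertext on the wire and inside memcached.
    import memcache
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # in practice this key must be shared and rotated safely
    fernet = Fernet(key)
    client = memcache.Client(['127.0.0.1:11211'])

    def cache_set(name, value, ttl=300):
        # memcached only ever sees the encrypted token
        client.set(name, fernet.encrypt(value.encode()), time=ttl)

    def cache_get(name):
        token = client.get(name)
        return fernet.decrypt(token).decode() if token is not None else None

    cache_set('example-key', 'cacheable payload')
    assert cache_get('example-key') == 'cacheable payload'

The trade-off is exactly the one listed above: the caching servers never see plaintext, but every service that reads or writes the cache must hold the same key.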
URL: From thierry at openstack.org Thu Feb 27 16:44:54 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 27 Feb 2020 17:44:54 +0100 Subject: [release][tc][horizon] xstatic repositories marked as deprecated In-Reply-To: References: Message-ID: Thierry Carrez wrote: > The way we've been handling this in the past was to ignore past releases > (since they are not signed by the release team), and push a new one > through the releases repository. It should replace the unofficial one in > PyPI and make sure all is in order. Clarification with a practical example: xstatic-hogan 2.0.0.2 is on PyPI, but has no tag in the openstack/xstatic-hogan repo, and no deliverable file in openstack/releases. Solution is to resync everything by proposing a 2.0.0.3 release that will have tag, be in openstack/releases and have a matching upload on PyPI. This is done by: - bumping BUILD at https://opendev.org/openstack/xstatic-hogan/src/branch/master/xstatic/pkg/hogan/__init__.py# - adding a deliverables/_independent/xstatic-hogan.yaml file in openstack/releases defining a tag for 2.0.0.3 - removing the "deprecated" line from https://opendev.org/openstack/governance/src/branch/master/reference/projects.yaml#L542 Repeat for every affected package :) -- Thierry Carrez (ttx) From sachinladdha at gmail.com Thu Feb 27 09:19:07 2020 From: sachinladdha at gmail.com (Sachin Laddha) Date: Thu, 27 Feb 2020 14:49:07 +0530 Subject: [TaskFlow] running multiple engines on a shared thread In-Reply-To: References: Message-ID: thanks Michael, I am aware that executor can be reused by different engines. My query was regarding if multiple engines can share same thread for running the engines(and not the tasks of those engines). I tried run_iter which can be used to run multiple engines but the tasks of individual engines are run one after another. Probably engine is holding on to the thread. This is limiting our ability run multiple workflows (i.e. engines) in parallel. My query is - is it possible to run multiple engines in parallel on same thread (using some asynchronous task execution) On Thu, Feb 27, 2020 at 3:18 AM Michael Johnson wrote: > Hi Sachin, > > I'm not 100% sure I understand your need, but I will attempt to answer > and you can correct me if I am off base. > > Taskflow engines (you can create as many of these as you want) use an > executor defined at flow load time. > > Here is a snippet from the Octavia code: > self.executor = concurrent.futures.ThreadPoolExecutor( > max_workers=CONF.task_flow.max_workers) > eng = tf_engines.load( > flow, > engine=CONF.task_flow.engine, > executor=self.executor, > never_resolve=CONF.task_flow.disable_revert, > **kwargs) > > The parts you are likely interested in are: > > 1. The executor. In this case we are using a > concurrent.futures.ThreadPoolExecutor. We then set the max_workers > setting to the number of threads we want in our taskflow engine thread > pool. > 2. During flow load, we define the engine to be 'parallel' (note: > 'serial' is the default). This means that unordered flows will run in > parallel as opposed to serially. > 3. As noted in the documentation[1], You can share an executor between > taskflow engines to share the thread pool. > > Finally, you want to use "unordered" flows or sub-flows to execute > tasks concurrently. > > [1] https://docs.openstack.org/taskflow/latest/user/engines.html#parallel > > Michael > > On Wed, Feb 26, 2020 at 7:19 AM Sachin Laddha > wrote: > > > > Hi, > > > > We are using taskflow to execute workflows. 
Each workflow is executed by > a separate thread (using engine.run() method). This is limiting our > capability to execute maximum number of workflows that can run in parallel. > It is limited by the number of threads there in the thread pool. > > > > Most of the time, the workflow tasks are run by agents which could take > some time to complete. Each engine is alive and runs on a dedicated thread. > > > > Is there any way to reuse or run multiple engines on one thread. The > individual tasks of these engines can run in parallel. > > > > I came across iter_run method of the engine class. But not sure if that > can be used for this purpose. > > > > Any help is highly appreciated. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Thu Feb 27 18:43:30 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 27 Feb 2020 11:43:30 -0700 Subject: [tripleo] Message-ID: Greetings, Pending a zuul restart I suggest not rechecking or workflowing +2 patches atm. Probably should lay off submitting patches as well [1]. You can watch the dashboard [2] or follow twitter [3] for the restart notice if you want to. Thanks [1] http://zuul.openstack.org/builds?result=RETRY_LIMIT [2] http://dashboard-ci.tripleo.org/d/cockpit/cockpit?orgId=1&fullscreen&panelId=2 [3] https://twitter.com/openstackinfra -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 27 19:15:04 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 27 Feb 2020 11:15:04 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #3 In-Reply-To: References: Message-ID: It seems like most people being vocal (thank you for your opinions btw :) ) are leaning towards approach number 2 so I guess if no one has any real objections we should get these patches merged[1][2][3] so that projects can make progress on the goal. I think we can still complete it in this cycle and am willing to help projects that are concerned about the deadline. -Kendall (diablo_rojo) [1] Cookie Cutter Change https://review.opendev.org/#/c/708672/ [2] Governance Goal Change https://review.opendev.org/#/c/709617/ [3] Governance Goal Change Typo Fix https://review.opendev.org/#/c/710332/1 On Mon, Feb 24, 2020 at 11:44 AM Kendall Nelson wrote: > Hello! > > There is some debate about where the content of the actual docs should > live. This grew out of the discussion about how to setup the includes so > that the correct info shows up where we want it. 'Correct' in that last > sentence being where the debate is. There are two main schools of thought: > > 1. All the content of the docs should live in the top level > CONTRIBUTING.rst and the sphinx glue should live in > doc/source/contributor/contributing.rst. A patch has already been merged > for this approach[1]. There is also a patch to update the goal to match[2]. > This approach keeps all the info in one place so that if things change in > the future, its easier to keep things straight. All the content is also > more easily discoverable when looking at a repo in GitHub (or similar) or > checking out the code because it is at the top most level of the repo and > not hidden in the docs sub directory. > > 2. The new patch[3] says that the content should live in > /doc/source/contributor/contributing.rst and a skeleton with only the most > important version should live in the top level CONTRIBUTING.rst. 
This > approach argues that people don't want to read a wall of text when viewing > the code on GitHub (or similar) or checking it out and looking at the top > level CONTRIBUTING.rst and as such only the important details should be > kept in that file. These important details being that we don't accept > patches in github and where to report bugs (both of which are included in > the larger format of the content). > > So what do people think? Which approach do they prefer? > > I am anxious to get this settled ASAP so that projects have time to > complete the goal in time. > > Previous updates if you missed them[4][5]. > > Please feel free to add other ideas or make corrections to my summaries of > the approaches if I missed things :) > > -Kendall (diablo_rojo) > > [1] Merged Template Patch (school of thought 1): > https://review.opendev.org/#/c/708511/ > [2] Goal Update Patch: https://review.opendev.org/#/c/707736/ > [3] Current Template Patch (school of thought 2): > https://review.opendev.org/#/c/708672/ > [4] Update #1: > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012364.html > [5] Update #2: > http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012570.html > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Feb 27 19:30:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 27 Feb 2020 19:30:59 +0000 Subject: [tripleo] In-Reply-To: References: Message-ID: <20200227193058.wwk5qwh3bjfk5f5w@yuggoth.org> On 2020-02-27 11:43:30 -0700 (-0700), Wesley Hayutin wrote: [...] > https://twitter.com/openstackinfra There's also a lower-level activity feed which encompasses those notices and alerts along with general status log entries, pushed via automation to this wiki article: https://wiki.openstack.org/wiki/Infrastructure_Status -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Thu Feb 27 20:37:48 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 27 Feb 2020 15:37:48 -0500 Subject: Train release notes: deprecated option show_multiple_locations missing? In-Reply-To: <20200227141542.Horde.wy-T9a81KFRMq_KCRmhjlvS@webmail.nde.ag> References: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> <54050f6b-0a3f-db47-3960-f8e599789fbc@gmail.com> <20200227141542.Horde.wy-T9a81KFRMq_KCRmhjlvS@webmail.nde.ag> Message-ID: <68c4b2e4-5ad3-8ef4-0363-5c7931561c2b@gmail.com> On 2/27/20 9:15 AM, Eugen Block wrote: > Thank you for the confirmation, Brain. > >> No, you didn't miss anything.  We didn't include anything in the Train >> release notes because there was nothing new to report, and we didn't >> want to clutter up the release notes with a non-announcement.  In >> retrospect, though, it might have been a good idea to continue the >> series of non-announcements. > > I entirely understand the non-cluttering approach, and if you have > stumbled across this before you'll probably find the solution quickly > again. On the other hand the non-announcements would at least deliver > some kind of consistency. ;-) We discussed this at today's Glance weekly meeting and decided that your suggestion to update the Train release notes is a good one. Sorry for any confusion this caused. 
> > Zitat von Brian Rosmaita : > >> On 2/27/20 5:55 AM, Eugen Block wrote: >>> Hi, >>> >>> I'm currently evaluating our new OpenStack environment (Train), >>> testing the usual scenarios. In the past I've had issues with nova >>> and rbd backend, when the compute nodes used to download rbd images >>> just to re-import them back to the ceph cluster instead of simply >>> cloning an image. >>> >>> The cause was the show_multiple_locations option that was marked >>> deprecated since Pike but in fact it's still necessary. The release >>> notes of Pike, Queens, Rocky and Stein all mention that the >>> workaround is still required: >>> >>>> The show_multiple_locations configuration option remains deprecated >>>> in this release, but it has not been removed. > (It had been >>>> scheduled for removal in the Pike release.) Please keep a watch on >>>> the Glance release notes and the > glance-specs repository to stay >>>> informed about developments on this issue. >>> >>> The Train release notes don't mention this option anymore but my >>> tests show that it's still required to avoid unnecessary rbd >>> exports/imports. >> >> You are correct, the option has not been removed and the workaround is >> still required. >> >>> Maybe the release notes for Train should be updated to cover this >>> issue. In case I missed any further information that already >>> addresses this I apologize. >> >> No, you didn't miss anything.  We didn't include anything in the Train >> release notes because there was nothing new to report, and we didn't >> want to clutter up the release notes with a non-announcement.  In >> retrospect, though, it might have been a good idea to continue the >> series of non-announcements. >> >>> Best regards, >>> Eugen >>> >>> > > > > From rosmaita.fossdev at gmail.com Thu Feb 27 21:46:09 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 27 Feb 2020 16:46:09 -0500 Subject: [cinder] ussuri virtual mid-cycle 16 march at 12:00 UTC Message-ID: The poll results are in. As with session one, there was only one time when everyone can meet (apologies to the "if need be" people, thank you for being flexible). Session Two of the Cinder Ussuri virtual mid-cycle will be held: DATE: Monday, 16 March 2020 TIME: 1200-1400 UTC LOCATION: https://bluejeans.com/3228528973 The meeting will be recorded. Please add topics to the planning etherpad: https://etherpad.openstack.org/p/cinder-ussuri-mid-cycle-planning cheers, brian From whayutin at redhat.com Thu Feb 27 22:18:38 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 27 Feb 2020 15:18:38 -0700 Subject: [tripleo] In-Reply-To: <20200227193058.wwk5qwh3bjfk5f5w@yuggoth.org> References: <20200227193058.wwk5qwh3bjfk5f5w@yuggoth.org> Message-ID: On Thu, Feb 27, 2020 at 12:31 PM Jeremy Stanley wrote: > On 2020-02-27 11:43:30 -0700 (-0700), Wesley Hayutin wrote: > [...] > > https://twitter.com/openstackinfra > > There's also a lower-level activity feed which encompasses those > notices and alerts along with general status log entries, pushed via > automation to this wiki article: > > https://wiki.openstack.org/wiki/Infrastructure_Status > > -- > Jeremy Stanley > All clear as you probably saw on irc. Thanks Jeremy :) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gouthampravi at gmail.com Thu Feb 27 23:09:59 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Thu, 27 Feb 2020 15:09:59 -0800 Subject: [manila] share group replication spike/questions In-Reply-To: References: Message-ID: On Tue, Feb 25, 2020 at 12:53 AM wrote: > Hi, guys, > > As we talked about the topic in a virtual PTG few months ago. > > https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (*Support > promoting several shares in group (DELL EMC: dingdong) * > > > > I’m trying to write a manila-spec for it. > Hi, thank you for working on this, and for submitting a specification [0]. We're targeting this for the Victoria release, correct? I like working on these major changes as soon as possible giving us enough air time for testing and hardening. > It’s my first experience to implement such feature in framework. > > I need to double check with you something, and hope you can give me some > guides like: > > 1. Where is the extra-spec defined for group/group type, > it’s in Manila repo, right? (like manila.db.sqlalchemy.models….) > Group type extra specs are added as storage capabilities first, you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support". [1] and [2] are reviews that added it. > 2. The command cli should be implemented for > ‘python-manilaclinet’ repo, right? (I have never touched this repo before) > Yes. python-manilaclient encompasses - a python SDK to version 2 of the manila API - two shell implementations: manila and openstack client (actively being developed) Group type extra-specs are passed transparently through the SDK and CLI, you may probably add some documentation or shell hint text (like [3] if needed). > 3. Where is the rest-api should be implemented? > The rest API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API. > 4. And more tips you have? like any other related project > should be changed? > For any new feature, we need these additional things besides working code: - A first party driver implementation where possible so we can test this feature in the upstream CI (if no first party driver can support this feature, you'll need to make the best approximation of this feature through the Dummy/Fake driver [6]) - The feature must be tested with adequate test cases in manila-tempest-plugin - Documentation must be added to the manila documentation [7] > Just list what I know, and more details questions will be raised when > implementing, I think. 
> > FYI > > Thanks, > > Ding Dong > Happy to answer any more questions, here or on your specification [0] Thanks, Goutham [0] https://review.opendev.org/#/c/710166/ [1] https://review.opendev.org/#/c/446044/ [2] https://review.opendev.org/#/c/447474/ [3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543 [4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html [5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html [6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py [7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Thu Feb 27 23:22:40 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Thu, 27 Feb 2020 23:22:40 +0000 Subject: WSREP: referenced FK check fail after switching to Galera Message-ID: I switched our dev and qa clusters to Galera, and they seem to work fine, but I see this error in the logs: Feb 27 11:24:08 us01odc-dev1-ctrl1 mysqld[361982]: 2020-02-27 11:24:08 484069 [ERROR] InnoDB: WSREP: referenced FK check fail: Lock wait index `PRIMARY` table `neutron`.`provisioningblocks` I googled around and people say that it is a harmless error, but it seems to be time-correlated with creation of VMs that aren't pingable. https://jira.mariadb.org/browse/MDEV-18562 Is this a harmless error? Do I need to look elsewhere for the reason why some VMs fail to return pings? -------------- next part -------------- An HTML attachment was scrubbed... URL: From trang.le at berkeley.edu Fri Feb 28 05:05:01 2020 From: trang.le at berkeley.edu (Trang Le) Date: Thu, 27 Feb 2020 21:05:01 -0800 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Hi Albert, Thank you so much! Sorry it took a while, I just finished with my midterm (the perks of still being in school...). Let me look at the contribution guidelines again and maybe work on some mockups and frameworks. Best, Trang On Wed, Feb 12, 2020 at 10:34 AM Albert Braden wrote: > Also, when I get stuck somewhere in the signup process, I’ve found help on > IRC, on the Freenode network, in #openstack-mentoring > > *From:* Kendall Nelson > *Sent:* Wednesday, February 12, 2020 10:23 AM > *To:* Albert Braden > *Cc:* Trang Le ; > openstack-discuss at lists.openstack.org > *Subject:* Re: [UX] Contributing to OpenStack's User Interface > > > > Also the contributor guide: https://docs.openstack.org/contributors/ > > > > > And if you have any questions about getting started let me or anyone else > in the First Contact SIG[1] know! > > > > -Kendall > > > > [1] https://wiki.openstack.org/wiki/First_Contact_SIG > > > > > On Tue, Feb 11, 2020 at 8:31 AM Albert Braden > wrote: > > Hi Trang, > > > > This document has been useful for me as I work on becoming an Openstack > contributor. > > > > https://docs.openstack.org/infra/manual/developers.html > > > > > *From:* Trang Le > *Sent:* Tuesday, February 11, 2020 2:25 AM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [UX] Contributing to OpenStack's User Interface > > > > Dear OpenStack Discussion Team, > > > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. 
I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > > > All the best, > > Trang > > > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Feb 28 06:59:14 2020 From: eblock at nde.ag (Eugen Block) Date: Fri, 28 Feb 2020 06:59:14 +0000 Subject: Train release notes: deprecated option show_multiple_locations missing? In-Reply-To: <68c4b2e4-5ad3-8ef4-0363-5c7931561c2b@gmail.com> References: <20200227105528.Horde.slrWM8mpKX0WCQGeeG9xQW6@webmail.nde.ag> <54050f6b-0a3f-db47-3960-f8e599789fbc@gmail.com> <20200227141542.Horde.wy-T9a81KFRMq_KCRmhjlvS@webmail.nde.ag> <68c4b2e4-5ad3-8ef4-0363-5c7931561c2b@gmail.com> Message-ID: <20200228065914.Horde.D3RJVuy_w3ZFXtpYPB8rDRs@webmail.nde.ag> Thanks for the update! Zitat von Brian Rosmaita : > On 2/27/20 9:15 AM, Eugen Block wrote: >> Thank you for the confirmation, Brain. >> >>> No, you didn't miss anything.  We didn't include anything in the >>> Train release notes because there was nothing new to report, and >>> we didn't want to clutter up the release notes with a >>> non-announcement.  In retrospect, though, it might have been a >>> good idea to continue the series of non-announcements. >> >> I entirely understand the non-cluttering approach, and if you have >> stumbled across this before you'll probably find the solution >> quickly again. On the other hand the non-announcements would at >> least deliver some kind of consistency. ;-) > > We discussed this at today's Glance weekly meeting and decided that > your suggestion to update the Train release notes is a good one. > Sorry for any confusion this caused. >> >> Zitat von Brian Rosmaita : >> >>> On 2/27/20 5:55 AM, Eugen Block wrote: >>>> Hi, >>>> >>>> I'm currently evaluating our new OpenStack environment (Train), >>>> testing the usual scenarios. In the past I've had issues with >>>> nova and rbd backend, when the compute nodes used to download rbd >>>> images just to re-import them back to the ceph cluster instead of >>>> simply cloning an image. >>>> >>>> The cause was the show_multiple_locations option that was marked >>>> deprecated since Pike but in fact it's still necessary. The >>>> release notes of Pike, Queens, Rocky and Stein all mention that >>>> the workaround is still required: >>>> >>>>> The show_multiple_locations configuration option remains >>>>> deprecated in this release, but it has not been removed. > (It >>>>> had been scheduled for removal in the Pike release.) Please keep >>>>> a watch on the Glance release notes and the > glance-specs >>>>> repository to stay informed about developments on this issue. >>>> >>>> The Train release notes don't mention this option anymore but my >>>> tests show that it's still required to avoid unnecessary rbd >>>> exports/imports. >>> >>> You are correct, the option has not been removed and the >>> workaround is still required. >>> >>>> Maybe the release notes for Train should be updated to cover this >>>> issue. 
In case I missed any further information that already >>>> addresses this I apologize. >>> >>> No, you didn't miss anything.  We didn't include anything in the >>> Train release notes because there was nothing new to report, and >>> we didn't want to clutter up the release notes with a >>> non-announcement.  In retrospect, though, it might have been a >>> good idea to continue the series of non-announcements. >>> >>>> Best regards, >>>> Eugen >>>> >>>> >> >> >> >> From radoslaw.piliszek at gmail.com Fri Feb 28 07:32:40 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 28 Feb 2020 08:32:40 +0100 Subject: WSREP: referenced FK check fail after switching to Galera In-Reply-To: References: Message-ID: Hi Albert, Albert wrote: > Do I need to look elsewhere for the reason why some VMs fail to return pings? do you mean those instances aren't pingable but otherwise connectivity from/to them works normally? -yoctozepto From Dong.Ding at dell.com Fri Feb 28 08:21:28 2020 From: Dong.Ding at dell.com (Dong.Ding at dell.com) Date: Fri, 28 Feb 2020 08:21:28 +0000 Subject: [manila] share group replication spike/questions In-Reply-To: References: Message-ID: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> Thanks Gotham, We are talking about this feature after U release. Cannot get it done in recently. Just do some prepare first. BR, Ding Dong From: Goutham Pacha Ravi Sent: Friday, February 28, 2020 7:10 AM To: Ding, Dong Cc: OpenStack Discuss Subject: Re: [manila] share group replication spike/questions [EXTERNAL EMAIL] On Tue, Feb 25, 2020 at 12:53 AM > wrote: Hi, guys, As we talked about the topic in a virtual PTG few months ago. https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (Support promoting several shares in group (DELL EMC: dingdong) I’m trying to write a manila-spec for it. Hi, thank you for working on this, and for submitting a specification [0]. We're targeting this for the Victoria release, correct? I like working on these major changes as soon as possible giving us enough air time for testing and hardening. It’s my first experience to implement such feature in framework. I need to double check with you something, and hope you can give me some guides like: 1. Where is the extra-spec defined for group/group type, it’s in Manila repo, right? (like manila.db.sqlalchemy.models….) Group type extra specs are added as storage capabilities first, you begin by modifying the driver interface to report this group type capability. When share drivers report their support for group replication, operators can use the corresponding string in their group type extra-specs to schedule appropriately. I suggest taking a look at an existing share group type capability called "consistent_snapshot_support". [1] and [2] are reviews that added it. 2. The command cli should be implemented for ‘python-manilaclinet’ repo, right? (I have never touched this repo before) Yes. python-manilaclient encompasses - a python SDK to version 2 of the manila API - two shell implementations: manila and openstack client (actively being developed) Group type extra-specs are passed transparently through the SDK and CLI, you may probably add some documentation or shell hint text (like [3] if needed). 3. Where is the rest-api should be implemented? The rest API is in the openstack/manila repository. [4][5] contain some documentation regarding how to change the manila API. 4. And more tips you have? like any other related project should be changed? 
For any new feature, we need these additional things besides working code: - A first party driver implementation where possible so we can test this feature in the upstream CI (if no first party driver can support this feature, you'll need to make the best approximation of this feature through the Dummy/Fake driver [6]) - The feature must be tested with adequate test cases in manila-tempest-plugin - Documentation must be added to the manila documentation [7] Just list what I know, and more details questions will be raised when implementing, I think. FYI Thanks, Ding Dong Happy to answer any more questions, here or on your specification [0] Thanks, Goutham [0] https://review.opendev.org/#/c/710166/ [1] https://review.opendev.org/#/c/446044/ [2] https://review.opendev.org/#/c/447474/ [3] https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543 [4] https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html [5] https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html [6] https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py [7] https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From agarwalvishakha18 at gmail.com Fri Feb 28 09:25:45 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Fri, 28 Feb 2020 14:55:45 +0530 Subject: [keystone] Keystone Team Update - Week of 24 February 2020 Message-ID: # Keystone Team Update - Week of 24 February 2020 ## News ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [2] [2] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 8 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 22 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html Feature proposal freeze happens in two weeks. Feature freeze follows four weeks after that. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From amoralej at redhat.com Fri Feb 28 10:07:20 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Fri, 28 Feb 2020 11:07:20 +0100 Subject: [RDO][tripleo][kolla][puppet] Moving CentOS8 master dependencies repo in RDO Message-ID: Hi, In the next hours we are moving RDO to a new Dependencies repo for CentOS8 master based on new CBS (CentOS Build System) builds. We have tested this new repo for all the projects that we know use it (kolla, packstack, puppet-openstack-integration and tripleo) and we have a plan to make the transition non-disruptive so we expect not to affect gates on any project by modifying [1] in several steps. Let us know if you notice abnormal behavior so that we can fix any issue that may arise or even revert the change if needed. Best regards, Alfredo [1] https://trunk.rdoproject.org/centos8-master/delorean-deps.repo -------------- next part -------------- An HTML attachment was scrubbed... URL: From j at ocado.com Fri Feb 28 10:39:11 2020 From: j at ocado.com (Justin Cattle) Date: Fri, 28 Feb 2020 10:39:11 +0000 Subject: [keystone] [keystonemiddleware] [neutron] [keystone_authtoken] auth_url not available via oslo_config In-Reply-To: References: <3c2978b2-d574-b34a-87a0-6a81f8830718@nemebean.com> Message-ID: Thanks for the responses guys. In the end, we tracked down the reason that the config options were not being registered. One of the [ unrelated ] plugins in neutron was failing to initialise, and that silently broke the plugin loading for the rest. It took a while to track it down, because it was a silent failure with no exception! :) Apologies, I don't have the details for the code, but it was somewhere in the neutron.manager. The particular server was rebuilt after we found the issue, and we didn't preserve our debugs :( We think one of the plugins could not initialise a connection to the MQ, which was why it didn't load properly. Of course, the ml2 plugin we had an issue with has no relation to the MQ, so we didn't even consider that initially! Thanks again for the help, and sorry we don't have more detail about the silent failure. If we did I would raise a bug. Cheers Just On Tue, 25 Feb 2020 at 22:04, Colleen Murphy wrote: > On Mon, Feb 24, 2020, at 08:17, Ben Nemec wrote: > > > > > > On 2/24/20 9:08 AM, Jeremy Freudberg wrote: > > > not a keystone person, but I can offer you this: > > > > https://opendev.org/openstack/sahara/src/commit/75df1e93872a3a6b761d0eb89ca87de0b2b3620f/sahara/utils/openstack/keystone.py#L32 > > > > > > It's a nasty workaround for getting config values from > > > keystone_authtoken which are supposed to private for > > > keystonemiddleware only. It's probably a bad idea. > > > > Yeah, config opts should generally not be referenced by other projects. > > The oslo.config deprecation mechanism doesn't handle the case where an > > opt gets renamed but is still being referred to in the code by its old > > name. I realize that's not what happened here, but in general it's a > > good reason not to do this. If a given config value needs to be exposed > > to consumers of a library it should be explicitly provided via an API.
> > > > I realize that's not what happened here, but it demonstrates the > > fragility of referring to another project's config opts directly. It's > > also possible that a project could change when its opts get registered, > > which may be what's happening here. If this plugin code is running > > before keystoneauth has registered its opts that might explain why it's > > not being found. That may also explain why it's working in some other > > environments - if the timing of when the opts are registered versus when > > the plugin code gets called is different it might cause that kind of > > varying behavior with otherwise identical code/configuration. > > > > I have vague memories of this having come up before, but I can't > > remember exactly what the recommendation was. Hopefully someone from > > Keystone can chime in. > > Services that need to connect to keystone with their own session outside > of keystonemiddleware can use the keystoneauth loading module to register > config options rather than reusing the keystone_authtoken section. For > example, this is what nova does: > > https://opendev.org/openstack/nova/src/branch/master/nova/conf/glance.py > > This doesn't help unbreak OP's broken Queens site. Perhaps the neutron or > calico contributors can help diagnose what's different about that site in > order to figure out what's missing that's causing keystonemiddleware not to > load the keystoneauth config opts. > > Colleen > > > > > > > > > > > > On Fri, Feb 21, 2020 at 9:43 AM Justin Cattle wrote: > > >> > > >> Just to add, it also doesn't seem to be registering the password > option from keystone_authtoken either. > > >> > > >> So, makes me think the auth plugin isn't loading , or not the right > one at least ?? > > >> > > >> > > >> Cheers, > > >> Just > > >> > > >> > > >> On Thu, 20 Feb 2020 at 20:55, Justin Cattle wrote: > > >>> > > >>> Hi, > > >>> > > >>> > > >>> I'm reaching out for help with a strange issue I've found. Running > openstack queens, on ubuntu xenial. > > >>> > > >>> We have a bunch of different sites with the same set-up, recently > upgraded from mitaka to queens. However, on this one site, after the > upgrade, we cannot start neutron-server. The reason is, that the ml2 > plugin throws an error because it can't find auth_url from the > keystone_authtoken section of neutron.conf. However, it is there in the > file. 
> > >>> > > >>> The ml2 plugin is calico, it fails with this error: > > >>> > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico [-] Exception in > function %s: TypeError: expected string or buffer > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico Traceback (most > recent call last): > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/dist-packages/networking_calico/logutils.py", line 21, > in wrapped > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico return > fn(*args, **kwargs) > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/dist-packages/networking_calico/plugins/ml2/drivers/calico/mech_calico.py", > line 347, in _post_fork_init > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico > auth_url=re.sub(r'/v3/?$', '', auth_url) + > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico File > "/usr/lib/python2.7/re.py", line 155, in sub > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico return > _compile(pattern, flags).sub(repl, string, count) > > >>> 2020-02-20 20:14:22.495 2964911 ERROR > networking_calico.plugins.ml2.drivers.calico.mech_calico TypeError: > expected string or buffer > > >>> > > >>> > > >>> When you look at the code, this is because neither auth_url or is > found in cfg.CONF.keystone_authtoken. The config defintely exists. > > >>> > > >>> I have copied the neutron.conf config from a working site, same > error. I have copied the entire /etc/neutron directory from a working > site, same error. > > >>> > > >>> I have check with strace, and /etc/neutron/neutron.conf is the only > neutron.conf being parsed. > > >>> > > >>> Here is the keystone_authtoken part of the config: > > >>> > > >>> [keystone_authtoken] > > >>> auth_uri=https://api-srv-cloud.host.domain:5000 > > >>> region_name=openstack > > >>> memcached_servers=1.2.3.4:11211 > > >>> auth_type=password > > >>> auth_url=https://api-srv-cloud.host.domain:5000 > > >>> username=neutron > > >>> password=xxxxxxxxxxxxxxxxxxxxxxxxx > > >>> user_domain_name=Default > > >>> project_name=services > > >>> project_domain_name=Default > > >>> > > >>> > > >>> I'm struggling to understand how the auth_url config is really > registered in via oslo_config. > > >>> I found an excellent exchagne on the ML here: > > >>> > > >>> > https://openstack.nimeyo.com/115150/openstack-keystone-devstack-confusion-auth_url-middleware > > >>> > > >>> This seems to indicate auth_url is only registered if a particular > auth plugin requires it. But I can't find the plugin code that does it, so > I'm not sure how/where to debug it properly. > > >>> > > >>> If anyone has any ideas, I would really appreciate some input or > pointers. > > >>> > > >>> Thanks! > > >>> > > >>> > > >>> Cheers, > > >>> Just > > >> > > >> > > >> Notice: > > >> This email is confidential and may contain copyright material of > members of the Ocado Group. Opinions and views expressed in this message > may not necessarily reflect the opinions and views of the members of the > Ocado Group. > > >> > > >> If you are not the intended recipient, please notify us immediately > and delete all copies of this message. 
Please note that it is your > responsibility to scan this message for viruses. > > >> > > >> References to the "Ocado Group" are to Ocado Group plc (registered in > England and Wales with number 7098618) and its subsidiary undertakings (as > that expression is defined in the Companies Act 2006) from time to time. > The registered office of Ocado Group plc is Buildings One & Two, Trident > Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. > > > > > > > > > -- Notice: This email is confidential and may contain copyright material of members of the Ocado Group. Opinions and views expressed in this message may not necessarily reflect the opinions and views of the members of the Ocado Group. If you are not the intended recipient, please notify us immediately and delete all copies of this message. Please note that it is your responsibility to scan this message for viruses. References to the "Ocado Group" are to Ocado Group plc (registered in England and Wales with number 7098618) and its subsidiary undertakings (as that expression is defined in the Companies Act 2006) from time to time. The registered office of Ocado Group plc is Buildings One & Two, Trident Place, Mosquito Way, Hatfield, Hertfordshire, AL10 9UL. -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Fri Feb 28 13:59:31 2020 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 28 Feb 2020 08:59:31 -0500 Subject: [RDO][tripleo][kolla][puppet] Moving CentOS8 master dependencies repo in RDO In-Reply-To: References: Message-ID: On Fri, Feb 28, 2020 at 5:14 AM Alfredo Moralejo Alonso wrote: > > Hi, > > In the next hours we are moving RDO to a new Dependencies repo for CentOS8 > master based on new CBS (CentOS Build System) builds. > > We have tested this new repo for all the projects that we know use it > (kolla, packstack, puppet-openstck-integration and tripleo) and we have a > plan to make the transition non-disrupting so we expect not to affect gates > on any project by modifying [1] in several steps. > > Let us know if you notice abnormal behavior so that we can fix any issue > that may arise or even revert the change if needed. > Thanks a lot to you and your team for this hard work, we have never been that close! > Best regards, > > Alfredo > > [1] https://trunk.rdoproject.org/centos8-master/delorean-deps.repo > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 28 15:08:02 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 28 Feb 2020 09:08:02 -0600 Subject: [release] Release countdown for week R-10, March 2-6 Message-ID: <1a23e5fe-f8c6-5261-311d-e5491a96db8f@gmx.com> General Information ------------------- The following cycle-with-intermediary deliverables have not done any intermediary release yet during this cycle. The cycle-with-rc release model is more suited for deliverables that plan to be released only once per cycle. As a result, we have proposed[1] to change the release model for the following deliverables: freezer karbor networking-baremetal neutron-fwaas-dashboard, neutron-vpnaas-dashboard senlin-dashboard tacker-horizon [1] https://review.opendev.org/#/q/topic:ussuri-cwi PTLs and release liaisons for each of those deliverables can either +1 the release model change, or propose an intermediary release for that deliverable. In absence of answer by the end of R-10 week we'll consider that the switch to cycle-with-rc is preferable. 
We are now past the published date for the stable/rocky branch to enter the Extended Maintenance phase: https://docs.openstack.org/project-team-guide/stable-branches.html#maintenance-phases A set of patches have been proposed for all cycle-based deliverables in rocky to tag the last release as "rocky-em". After this point, no more official releases will be done. Please acknowledge any patches for your team if you have not already done so, or let us know if there are some last minute patches on their way to merging that we should hold for. https://review.opendev.org/#/q/status:open+project:openstack/releases+branch:master+topic:rocky-em Any patches not acknowledged by March 5 will be assumed to be OK and we will process any remaining patches. We also published a proposed release schedule for the upcoming Victoria cycle. Please check out the separate thread: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012735.html Upcoming Deadlines & Dates -------------------------- Non-client library freeze: April 2 (R-6 week) Client library freeze: April 9 (R-5 week) Ussuri-3 milestone: April 9 (R-5 week) OpenDev+PTG Vancouver: June 8-11 From mdulko at redhat.com Fri Feb 28 15:27:50 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Fri, 28 Feb 2020 16:27:50 +0100 Subject: [kuryr] macvlan driver looking for an owner In-Reply-To: <53fd84f1c90c5deac82340662f9b284257ba798f.camel@redhat.com> References: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> <53fd84f1c90c5deac82340662f9b284257ba798f.camel@redhat.com> Message-ID: <341dd00ab72894947f4c5ce789ea849ce938ee55.camel@redhat.com> Hi, I was able to migrate the macvlan plugin in [1]. I'd still appreciate if somebody could test this on real env. Thanks, Michał [1] https://review.opendev.org/#/c/709806/ On Mon, 2020-02-10 at 18:07 +0100, mdulko at redhat.com wrote: > On Fri, 2020-02-07 at 16:25 +0100, Roman Dobosz wrote: > > Hi, > > > > Recently we migrated most of the drivers and handler to OpenStackSDK, > > instead of neutron-client[1], and have a plan to drop neutron-client > > usage. > > > > One of the drivers - MACVLAN based interfaces for nested containers - > > wasn't migrated due to lack of confidence backed up with sufficient > > tempest tests. Therefore we are looking for a maintainer, who will > > take care about both - migration to the openstacksdk (which I can > > help with) and to provide appropriate environment and tests. > > > > In case there is no interest on continuing support for this driver we > > will deprecate it and remove from the source tree possibly in this > > release. > > I think I'll give a shot and will try converting it to openstacksdk. We > depend on revisions [2] feature and seems like openstacksdk lacks > support for it, but maybe I'll be able to solve this. If I'll fail, I > guess the driver will be on it's way out. 
> > [2] https://docs.openstack.org/api-ref/network/v2/#revisions > > > [1] https://review.opendev.org/#/q/topic:bp/switch-to-openstacksdk > > > > From gagehugo at gmail.com Fri Feb 28 16:09:58 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 28 Feb 2020 10:09:58 -0600 Subject: [security] Security SIG Newsletter - Feb 2020 Message-ID: #Month Feb 2020 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Updates - The Security SIG has recently updated the reporting guidelines for private security bugs to now have a max timeframe of 90 days for embargos (excluding unusual circumstances). We have begun marking all existing private bugs to go public after 90 days from the date that it was updated. - - The Security SIG has reached out about obtaining a room/timeslot for the Vancouver PTG. We will be requesting a spot, likely just a shorter timeslot due to a small amount of given interest. However we should be able to have time to discuss any topics brought forward. The first OSSA of the year was released this month, for more details check out the link below. #VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ - OSSA-2020-001 was released: https://security.openstack.org/ossa/OSSA-2020-001.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From ekultails at gmail.com Fri Feb 28 17:26:49 2020 From: ekultails at gmail.com (Luke Short) Date: Fri, 28 Feb 2020 12:26:49 -0500 Subject: [tripleo] openstack-train mirror for CentOS aarch64 In-Reply-To: References: Message-ID: Greetings Lenny, I do not know the exact plans for CentOS 7 or 8. However, I believe the priority right now is to get the CentOS 8 x86_64 repository and packages working first for both Train and master (Ussuri). Then other architectures may be explored for CentOS 8. Sincerely, Luke Short On Mon, Feb 24, 2020 at 9:48 AM Lenny Verkhovsky wrote: > Hi, > > I see that there is no openstack-train repo for aarch64 for CentOS[1] > > This repo is also used by kolla project. > > Who maintains it and if there are any plans for adding train repo? > > > > [1] http://mirror.centos.org/altarch/7/cloud/aarch64/ > > > > > > *Best Regards* > > > > *Lenny Verkhovsky *Mellanox Technologies > > office:+972 74 712 92 44 irc:lennyb > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaetan.trellu at incloudus.com Fri Feb 28 20:29:15 2020 From: gaetan.trellu at incloudus.com (gaetan.trellu at incloudus.com) Date: Fri, 28 Feb 2020 15:29:15 -0500 Subject: [glance] Different checksum between CLI and curl Message-ID: Hi guys, Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". Any idea? 
Thanks, Gaëtan [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data From mordred at inaugust.com Fri Feb 28 22:00:00 2020 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 28 Feb 2020 16:00:00 -0600 Subject: [glance] Different checksum between CLI and curl In-Reply-To: References: Message-ID: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> > On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: > > Hi guys, > > Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? > > During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". > > Any idea? That seems strange. I don’t know off the top of my head. I do know Artem has patches up to switch OSC to using SDK for image operations. https://review.opendev.org/#/c/699416/ That said, I’d still expect current OSC checksums to be solid. Perhaps there is some filtering/processing being done cloud-side in your glance? If you download the image to a file and run a checksum on it - does it match the checksum given by OSC on upload? Or the checksum given by glance API on download? > Thanks, > > Gaëtan > > [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > > From gaetan.trellu at incloudus.com Fri Feb 28 22:15:51 2020 From: gaetan.trellu at incloudus.com (gaetan.trellu at incloudus.com) Date: Fri, 28 Feb 2020 17:15:51 -0500 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> Message-ID: <10cb06508fa2146207462a9778253c22@incloudus.com> Hey Monty, If I download the image via the CLI, the checksum of the file matches the checksum from the image details. If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. The files have the same size, this is really weird. Gaëtan On 2020-02-28 17:00, Monty Taylor wrote: >> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: >> >> Hi guys, >> >> Does anyone know why the md5 checksum is different between the >> "openstack image save" CLI and "curl" commands? >> >> During the image creation a checksum is computed to check the image >> integrity, using the "openstack" CLI match the checksum generated but >> when "curl" is used by following the API documentation[1] the checksum >> change at every "download". >> >> Any idea? > > That seems strange. I don’t know off the top of my head. I do know > Artem has patches up to switch OSC to using SDK for image operations. > > https://review.opendev.org/#/c/699416/ > > That said, I’d still expect current OSC checksums to be solid. Perhaps > there is some filtering/processing being done cloud-side in your > glance? If you download the image to a file and run a checksum on it - > does it match the checksum given by OSC on upload? Or the checksum > given by glance API on download? 
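One quick way to run exactly that comparison is sketched below, assuming a valid token in $TOKEN, the image UUID in $IMAGE, and a placeholder glance endpoint:

# What glance thinks the checksum is, plus a CLI download
openstack image show -c checksum -f value $IMAGE
openstack image save --file img-cli.raw $IMAGE
md5sum img-cli.raw

# Raw API download; -o keeps the binary body intact on disk, and -D
# captures the headers so Content-Md5 can be compared with the body.
curl -s -D headers.txt -H "X-Auth-Token: $TOKEN" \
     -o img-curl.raw https://glance.example.com/v2/images/$IMAGE/file
grep -i content-md5 headers.txt
md5sum img-curl.raw

If the two files have the same size but different sums, comparing them with cmp -l would at least show where the bytes diverge.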
> >> Thanks, >> >> Gaëtan >> >> [1] >> https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data >> >> From mordred at inaugust.com Fri Feb 28 22:21:03 2020 From: mordred at inaugust.com (Monty Taylor) Date: Fri, 28 Feb 2020 16:21:03 -0600 Subject: [glance] Different checksum between CLI and curl In-Reply-To: <10cb06508fa2146207462a9778253c22@incloudus.com> References: <5AC5FCDE-4F8E-478B-9BA0-34C527DDC2E2@inaugust.com> <10cb06508fa2146207462a9778253c22@incloudus.com> Message-ID: <40790667-B696-4CBC-9CD2-41A684D97D64@inaugust.com> > On Feb 28, 2020, at 4:15 PM, gaetan.trellu at incloudus.com wrote: > > Hey Monty, > > If I download the image via the CLI, the checksum of the file matches the checksum from the image details. > If I download the image via "curl", the "Content-Md5" header matches the image details but the file checksum doesn't. > > The files have the same size, this is really weird. WOW. I still don’t know the issue - but my unfounded hunch is that the curl command is likely not doing something it should be. If OSC is producing a file that matches the image details, that seems like the right choice for now. Seriously fascinating though. > Gaëtan > > On 2020-02-28 17:00, Monty Taylor wrote: >>> On Feb 28, 2020, at 2:29 PM, gaetan.trellu at incloudus.com wrote: >>> Hi guys, >>> Does anyone know why the md5 checksum is different between the "openstack image save" CLI and "curl" commands? >>> During the image creation a checksum is computed to check the image integrity, using the "openstack" CLI match the checksum generated but when "curl" is used by following the API documentation[1] the checksum change at every "download". >>> Any idea? >> That seems strange. I don’t know off the top of my head. I do know >> Artem has patches up to switch OSC to using SDK for image operations. >> https://review.opendev.org/#/c/699416/ >> That said, I’d still expect current OSC checksums to be solid. Perhaps >> there is some filtering/processing being done cloud-side in your >> glance? If you download the image to a file and run a checksum on it - >> does it match the checksum given by OSC on upload? Or the checksum >> given by glance API on download? >>> Thanks, >>> Gaëtan >>> [1] https://docs.openstack.org/api-ref/image/v2/index.html?expanded=download-binary-image-data-detail#download-binary-image-data > From gouthampravi at gmail.com Fri Feb 28 23:42:48 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 28 Feb 2020 15:42:48 -0800 Subject: [manila] share group replication spike/questions In-Reply-To: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> References: <55d84e2e29cb4758aaff0b8c07aaa0bd@KULX13MDC124.APAC.DELL.COM> Message-ID: On Fri, Feb 28, 2020 at 12:21 AM wrote: > Thanks Gotham, > > > > We are talking about this feature *after U release*. > > Cannot get it done in recently. > > Just do some prepare first. > Great, thanks for confirming. 
We'll hash out the design on the specification, and if necessary, we can work through it during the Open Infra Project Technical Gathering in June [8][9] [8] https://www.openstack.org/ptg/ [9] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning > > > BR, > > Ding Dong > > > > *From:* Goutham Pacha Ravi > *Sent:* Friday, February 28, 2020 7:10 AM > *To:* Ding, Dong > *Cc:* OpenStack Discuss > *Subject:* Re: [manila] share group replication spike/questions > > > > [EXTERNAL EMAIL] > > > > > > > > > > On Tue, Feb 25, 2020 at 12:53 AM wrote: > > Hi, guys, > > As we talked about the topic in a virtual PTG few months ago. > > https://etherpad.openstack.org/p/shanghai-ptg-manila-virtual (*Support > promoting several shares in group (DELL EMC: dingdong) * > > > > I’m trying to write a manila-spec for it. > > > > Hi, thank you for working on this, and for submitting a specification [0]. > We're targeting this for the Victoria release, correct? I like working on > these major changes as soon as possible giving us enough air time for > testing and hardening. > > > > It’s my first experience to implement such feature in framework. > > I need to double check with you something, and hope you can give me some > guides like: > > 1. Where is the extra-spec defined for group/group type, > it’s in Manila repo, right? (like manila.db.sqlalchemy.models….) > > Group type extra specs are added as storage capabilities first, you begin > by modifying the driver interface to report this group type capability. > When share drivers report their support for group replication, operators > can use the corresponding string in their group type extra-specs to > schedule appropriately. I suggest taking a look at an existing share group > type capability called "consistent_snapshot_support". [1] and [2] are > reviews that added it. > > 2. The command cli should be implemented for > ‘python-manilaclinet’ repo, right? (I have never touched this repo before) > > Yes. python-manilaclient encompasses > > - a python SDK to version 2 of the manila API > > - two shell implementations: manila and openstack client (actively being > developed) > > > > Group type extra-specs are passed transparently through the SDK and CLI, > you may probably add some documentation or shell hint text (like [3] if > needed). > > > > > > 3. Where is the rest-api should be implemented? > > The rest API is in the openstack/manila repository. [4][5] contain some > documentation regarding how to change the manila API. > > > > 4. And more tips you have? like any other related project > should be changed? > > For any new feature, we need these additional things besides working code: > > - A first party driver implementation where possible so we can test this > feature in the upstream CI (if no first party driver can support this > feature, you'll need to make the best approximation of this feature through > the Dummy/Fake driver [6]) > > - The feature must be tested with adequate test cases in > manila-tempest-plugin > > - Documentation must be added to the manila documentation [7] > > Just list what I know, and more details questions will be raised when > implementing, I think. 
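To make the group-type plumbing quoted above concrete, the operator-facing flow would presumably mirror what exists today for consistent_snapshot_support. A hedged sketch — the commands approximate the current python-manilaclient group-type interface, and the group_replication_type extra-spec name is hypothetical until the capability is defined in the spec and reported by drivers:

# Create a share group type and tag it with the (hypothetical) capability
manila share-group-type-create replicated-groups $SHARE_TYPE
manila share-group-type-key replicated-groups set group_replication_type=readable

# Groups created with this type would then land on backends that report
# the capability, once the scheduler support exists
manila share-group-create --share-group-type replicated-groups --name my-group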
> > FYI > > Thanks, > > Ding Dong > > > > Happy to answer any more questions, here or on your specification [0] > > > > Thanks, > > Goutham > > > > [0] https://review.opendev.org/#/c/710166/ > > [1] https://review.opendev.org/#/c/446044/ > > [2] https://review.opendev.org/#/c/447474/ > > [3] > https://opendev.org/openstack/python-manilaclient/src/commit/ac5ca461e8c8dd11fe737de7b90ab5c33366ab35/manilaclient/v2/shell.py#L4543 > > [4] > https://docs.openstack.org/manila/latest/contributor/addmethod.openstackapi.html > > [5] > https://docs.openstack.org/manila/latest/contributor/api_microversion_dev.html > > [6] > https://opendev.org/openstack/manila/src/commit/68a18f49472ac7686ceab15e9788dcef05764822/manila/tests/share/drivers/dummy.py > > [7] > https://docs.openstack.org/manila/latest/contributor/documenting_your_work.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.denton at rackspace.com Sat Feb 29 01:41:35 2020 From: james.denton at rackspace.com (James Denton) Date: Sat, 29 Feb 2020 01:41:35 +0000 Subject: [neutron] security group list regression Message-ID: Hello all, We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty severe regression in the time it takes the API to return the list of security groups. This environment has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack security group list’ command to complete. I don’t have actual data from the same environment running Newton, but was able to replicate this behavior with the following lab environments running a mix of virtual and baremetal machines: Newton (VM) Rocky (BM) Stein (VM) Train (BM) Number of sec grps vs time in seconds: # Newton Rocky Stein Train 200 4.1 3.7 5.4 5.2 500 5.3 7 11 9.4 1000 7.2 12.4 19.2 16 2000 9.2 24.2 35.3 30.7 3000 12.1 36.5 52 44 4000 16.1 47.2 73 58.9 5000 18.4 55 90 69 As you can see (hopefully), the response time increased significantly between Newton and Rocky, and has grown slightly ever since. We don't know, yet, if this behavior can be seen with other 'list' commands or is limited to secgroups. We're currently verifying on some intermediate releases to see where things went wonky. There are some similar recent reports out in the wild with little feedback: https://bugzilla.redhat.com/show_bug.cgi?id=1788749 https://bugzilla.redhat.com/show_bug.cgi?id=1721273 I opened a bug here, too: https://bugs.launchpad.net/neutron/+bug/1865223 Bottom line: Has anyone else experienced similar regressions in recent releases? If so, were you able to address them with any sort of tuning? Thanks in advance, James From mnaser at vexxhost.com Sat Feb 29 08:32:05 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 29 Feb 2020 09:32:05 +0100 Subject: [all] Moving towards OpenDev Message-ID: Hi everyone, The OpenStack infrastructure team has helped build an extremely powerful set of tools which we use on a daily basis in order to build OpenStack. You may have noticed that Open Infrastructure projects (such as Zuul, StarlingX, Airship, etc) have leveraged these tools as well as they found them to be extremely powerful and useful to build software by leveraging our four opens. 
As you might have noticed, there has been many references to OpenDev and it's time that we split off the infrastructure team to it's own separate community with governance and the OpenStack infrastructure team will continue to help work with the OpenDev team in order to keep our lights running (and most likely, those two teams will be the same at the start, but the idea is hoping that one or both grow separately). I'd like to invite our community to please look at the following two patches and leave your comments. Regardless how you feel about this, we'd really appreciate if you can leave a review (-1 or +1) just to show some sort of acknowledgment and help signal that the community is aware of this upcoming change. I'm certainly very excited about it. :) https://review.opendev.org/#/c/710020/ https://review.opendev.org/#/c/703488/ Thanks everyone! Regards, Mohammed From skaplons at redhat.com Sat Feb 29 08:44:29 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sat, 29 Feb 2020 09:44:29 +0100 Subject: [neutron] security group list regression In-Reply-To: References: Message-ID: Hi, I just replied in Your bug report. Can You try to apply patch https://review.opendev.org/#/c/708695/ to see if that will help with this problem? > On 29 Feb 2020, at 02:41, James Denton wrote: > > Hello all, > > We recently upgraded an environment from Newton -> Rocky, and have noticed a pretty severe regression in the time it takes the API to return the list of security groups. This environment has roughly 8,000+ security groups, and it takes nearly 75 seconds for the ‘openstack security group list’ command to complete. I don’t have actual data from the same environment running Newton, but was able to replicate this behavior with the following lab environments running a mix of virtual and baremetal machines: > > Newton (VM) > Rocky (BM) > Stein (VM) > Train (BM) > > Number of sec grps vs time in seconds: > > # Newton Rocky Stein Train > 200 4.1 3.7 5.4 5.2 > 500 5.3 7 11 9.4 > 1000 7.2 12.4 19.2 16 > 2000 9.2 24.2 35.3 30.7 > 3000 12.1 36.5 52 44 > 4000 16.1 47.2 73 58.9 > 5000 18.4 55 90 69 > > As you can see (hopefully), the response time increased significantly between Newton and Rocky, and has grown slightly ever since. We don't know, yet, if this behavior can be seen with other 'list' commands or is limited to secgroups. We're currently verifying on some intermediate releases to see where things went wonky. > > There are some similar recent reports out in the wild with little feedback: > > https://bugzilla.redhat.com/show_bug.cgi?id=1788749 > https://bugzilla.redhat.com/show_bug.cgi?id=1721273 > > I opened a bug here, too: > > https://bugs.launchpad.net/neutron/+bug/1865223 > > Bottom line: Has anyone else experienced similar regressions in recent releases? If so, were you able to address them with any sort of tuning? > > Thanks in advance, > James > — Slawek Kaplonski Senior software engineer Red Hat From fungi at yuggoth.org Sat Feb 29 12:35:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 29 Feb 2020 12:35:34 +0000 Subject: [all] Moving towards OpenDev In-Reply-To: References: Message-ID: <20200229123534.uy42gj3q7p54exhk@yuggoth.org> On 2020-02-29 09:32:05 +0100 (+0100), Mohammed Naser wrote: [...] 
> As you might have noticed, there has been many references to OpenDev > and it's time that we split off the infrastructure team to it's own > separate community with governance and the OpenStack infrastructure > team will continue to help work with the OpenDev team in order to keep > our lights running (and most likely, those two teams will be the same > at the start, but the idea is hoping that one or both grow > separately). [...] If it helps, by way of explanation, the OpenStack Infrastructure team is still expected to remain under governance of the TC. The non-OpenStack-specific systems and services it maintained will become the responsibility of the OpenDev Sysadmins instead, who are forming their own governing body independent of (but including representation from) Openstack; remaining Openstack-specific systems will still be the charge of the OpenStack Infrastructure team under the proposed plan. And yes, to reiterate, for now those are basically still all the same people just wearing different "hats" but we hope they will diverge somewhat over time as new folks wind up volunteering to help with one effort or the other. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Sat Feb 29 14:38:05 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Sat, 29 Feb 2020 08:38:05 -0600 Subject: [all] Moving towards OpenDev In-Reply-To: <20200229123534.uy42gj3q7p54exhk@yuggoth.org> References: <20200229123534.uy42gj3q7p54exhk@yuggoth.org> Message-ID: > On 2020-02-29 09:32:05 +0100 (+0100), Mohammed Naser wrote: > [...] >> As you might have noticed, there has been many references to OpenDev >> and it's time that we split off the infrastructure team to it's own >> separate community with governance and the OpenStack infrastructure >> team will continue to help work with the OpenDev team in order to keep >> our lights running (and most likely, those two teams will be the same >> at the start, but the idea is hoping that one or both grow >> separately). > [...] I thought this was as settled and in motion for quite awhile now: http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html Is the concern from an infrastructure provider perspective? As far as making it clear to someone that is contributing infrastructure for OpenDev to run on whether those resources are being provided to support the OpenStack project proper, or whether it is or can be used for anything under the OSF umbrella? Or anything the OpenDev team is willing to host, for that matter. Sean From fungi at yuggoth.org Sat Feb 29 14:58:04 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 29 Feb 2020 14:58:04 +0000 Subject: [all] Moving towards OpenDev In-Reply-To: References: <20200229123534.uy42gj3q7p54exhk@yuggoth.org> Message-ID: <20200229145803.nmeemnjbco3kqssa@yuggoth.org> On 2020-02-29 08:38:05 -0600 (-0600), Sean McGinnis wrote: > > On 2020-02-29 09:32:05 +0100 (+0100), Mohammed Naser wrote: > > [...] 
> > > As you might have noticed, there has been many references to OpenDev > > > and it's time that we split off the infrastructure team to it's own > > > separate community with governance and the OpenStack infrastructure > > > team will continue to help work with the OpenDev team in order to keep > > > our lights running (and most likely, those two teams will be the same > > > at the start, but the idea is hoping that one or both grow > > > separately). > > [...] > > I thought this was as settled and in motion for quite awhile now: > > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html [...] In motion for even longer... I remember circulating these ideas in March of 2018 while we were trapped in a hotel somewhere in the wilds of Ireland (we just hadn't decided on a name for the collaboratory yet): http://lists.openstack.org/pipermail/openstack-infra/2018-March/005843.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: