From mnaser at vexxhost.com Sat Feb 1 08:25:17 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 1 Feb 2020 09:25:17 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: On Wed, Jan 22, 2020 at 9:10 AM info at dantalion.nl wrote: > > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? Have you managed to hear back on this, Corne? > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > From liang.a.fang at intel.com Sat Feb 1 10:10:39 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Sat, 1 Feb 2020 10:10:39 +0000 Subject: [cinder] last call for ussuri spec comments In-Reply-To: References: <92eaa3f2-3bfe-3c94-292b-3d91c8256753@gmail.com> Message-ID: Hi Sean Thanks for your comment. Currently only rbd and sheepdog are mounted directly by qemu. Others (including nvmeof) are mounted to host OS first. See: https://github.com/openstack/nova/blob/master/nova/virt/libvirt/driver.py#L169 rbd is popular today. It's a pity that rbd would not be supported by volume local cache. The advantage to mount directly by qemu is security, right? The volume data would not be exposed to host OS. But rbd latency is not good (more than 1 millisecond). On the other hand, if Optane SSD (~10us) is used as volume local cache, RandRead latency would be ~50us when cache hit rate raised up to ~95%. If persistent memory (with latency ~0.x us) is used as volume local cache, latency would be very much small (I have no data on hand, would measure after Chinese new year). I believe the super performance boost would attract Operators. It is not impossible for some operators to change back rbd to mount to host os. At least we can give them the infrastructure-ready. Regards Liang -----Original Message----- From: Sean Mooney Sent: Friday, January 31, 2020 8:19 AM To: Brian Rosmaita ; openstack-discuss at lists.openstack.org Subject: Re: [cinder] last call for ussuri spec comments On Thu, 2020-01-30 at 17:18 -0500, Brian Rosmaita wrote: > On 1/30/20 11:27 AM, Brian Rosmaita wrote: > > The following specs have two +2s. I believe that all expressed > > concerns have been addressed. I intend to merge them at 22:00 UTC > > today unless a serious issue is raised before then. > > > > https://review.opendev.org/#/c/684556/ - support volume-local-cache > > Some concerns were raised with the above patch. Liang, please address > them. Don't worry if you can't get them done before the Friday > deadline, I'm willing to give you a spec freeze exception. I think > the concerns raised will be useful in making clarifications to the > spec, but also in pointing out things that reviewers should keep in > mind when reviewing the implementation. They also point out some > testing directions that will be useful in validating the feature. the one thing i want to raise related to this spec is that the design direction form the nova side is problematic. 
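As a quick sanity check of the latency figures Liang quotes above (roughly 1 ms for uncached rbd random reads, ~10 us for an Optane-class cache device, ~95% hit rate), the expected average read latency is just a weighted average; a minimal sketch with illustrative numbers, not measurements:

    # Weighted-average read latency with a fast local cache in front of a
    # slower backend (figures from the thread above, purely illustrative).
    def expected_latency_us(hit_rate, cache_us, backend_us):
        return hit_rate * cache_us + (1.0 - hit_rate) * backend_us

    # ~10 us cache, ~1000 us (1 ms) rbd backend, 95% cache hit rate
    print(expected_latency_us(0.95, 10.0, 1000.0))  # ~59.5 us, close to the ~50 us claim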
when reviewing https://review.opendev.org/#/c/689070/ it was noted that the nova libvirt driver has been moving away form mounting cinder volumes on the host and then passing that block device to qemu, in favor of using qemu's nataive ablity to connect directly to remote storage. looking at the latest version of the nova spec https://review.opendev.org/#/c/689070/8/specs/ussuri/approved/support-volume-local-cache.rst at 49 i notes that this feature will be only capable of caching volums that have already been mounted on the host. while keeping the management of the volumes in os-bricks means that the over all impact on nova is minimal considering that this feature would no longer work if we moved to useing qemu native isci support, and that it will not work with NVMEoF volume or ceph im not sure that the nova side will be approved. when i first review the nova spec i mention that i believed local cacheing could a useful feature but this really feels like a capability that should be developed in qemu, specificly the applity to provide a second device as a cache for any disk deivce assgiend to an instance. that would allow local caching to be done regardless of the storage backend used. qemu cannot do that today so i understand that this approch is in the short to medium term likely the only workable solution but i am concerned that the cinder side will be completed in ussuri and the nova side will not. > > With respect to the other spec: > > > https://review.opendev.org/#/c/700977 - add backup id to volume > > metadata > > Rajat had a few vocabulary clarifications that can be addressed in a > follow-up patch. Conceptually, this spec is fine, so I went ahead and > merged it. > > > > > cheers, > > brian > > > From amy at demarco.com Sat Feb 1 15:58:46 2020 From: amy at demarco.com (Amy Marrich) Date: Sat, 1 Feb 2020 09:58:46 -0600 Subject: [horizon] [keystone] Re: [User-committee] Help In-Reply-To: References: Message-ID: <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> Pradeep, I am sending this to the OpenStack discuss mailing list where you might be able to receive more help. I have tagged both the Horizon and Keystone teams as the error seems to be from Horizon and concerning Keystone. In order to provide more assistance, information as to what you were doing at the time will be needed. Please confirm this is OpenStack Rocky on CentOS and how you installed, from scratch, TripleO, OpenStack-Ansible, etc. Thanks, Amy (spotz) > On Feb 1, 2020, at 7:11 AM, pradeep pal wrote: > >  > Rocky+Centos 7.7 64bit, > > 2020-02-01 18:26:38.063938 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option > 2020-02-01 18:26:38.549453 mod_wsgi (pid=82228): Target WSGI script '/usr/bin/keystone-wsgi-public' cannot be loaded as Python module. > 2020-02-01 18:26:38.549516 mod_wsgi (pid=82228): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-public'. 
> 2020-02-01 18:26:38.549558 Traceback (most recent call last): > 2020-02-01 18:26:38.549600 File "/usr/bin/keystone-wsgi-public", line 54, in > 2020-02-01 18:26:38.549666 application = initialize_public_application() > 2020-02-01 18:26:38.549695 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 24, in initialize_public_application > 2020-02-01 18:26:38.549763 name='public', config_files=flask_core._get_config_files()) > 2020-02-01 18:26:38.549799 File "/usr/lib/python2.7/site-packages/keystone/server/flask/core.py", line 149, in initialize_application > 2020-02-01 18:26:38.549862 keystone.server.configure(config_files=config_files) > 2020-02-01 18:26:38.549897 File "/usr/lib/python2.7/site-packages/keystone/server/__init__.py", line 28, in configure > 2020-02-01 18:26:38.549958 keystone.conf.configure() > 2020-02-01 18:26:38.549988 File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 125, in configure > 2020-02-01 18:26:38.550040 help='Do not monkey-patch threading system modules.')) > 2020-02-01 18:26:38.550084 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2501, in __inner > 2020-02-01 18:26:38.550137 result = f(self, *args, **kwargs) > 2020-02-01 18:26:38.550164 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2776, in register_cli_opt > 2020-02-01 18:26:38.550225 raise ArgsAlreadyParsedError("cannot register CLI option") > 2020-02-01 18:26:38.550276 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option > > > > > _______________________________________________ > User-committee mailing list > User-committee at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Feb 2 00:36:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 01 Feb 2020 18:36:58 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-15 2nd Update (# 2 weeks left to complete) Message-ID: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-15 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * 2 weeks left to finish the work. * QA tooling: ** Tempest is dropping py3.5[1]. Tempest plugins can drop py3.5 now if they still support it. ** Updating tox with basepython python3 [2]. ** Pining stable/rocky testing with 23.0.0[3]. ** Updating neutron-tempest-plugins rocky jobs to run with py3 on master and py2 on stable/rocky gate. ** Ironic-tempest-plugin jobs are failing on uploading image on glance. Debugging in progress. * zipp failure fix on py3.5 job is merged. * 5 services listed below still not merged the patches, I request PTLs to review it on priority. Project wise status and need reviews: ============================ Phase-1 status: The OpenStack services have not merged the py2 drop patches: NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). * Adjutant * ec2-api * Karbor * Masakari * Qinling * Tricircle Phase-2 status: This is ongoing work and I think most of the repo have patches up to review. Try to review them on priority. If any query and I missed to respond on review, feel free to ping me in irc. * Most of the tempest plugins and python client patches are good to merge. 
* Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open How you can help: ============== - Review the patches. Push the patches if I missed any repo. [1] https://review.opendev.org/#/c/704840/ [2] https://review.opendev.org/#/c/704688/ [3] https://review.opendev.org/#/c/705098/ -gmann From kiseok7 at gmail.com Sun Feb 2 02:26:15 2020 From: kiseok7 at gmail.com (Kim, Kiseok) Date: Sun, 2 Feb 2020 11:26:15 +0900 Subject: [nova] I would like to add another option for cross_az_attach In-Reply-To: References: Message-ID: Hello all, Can I add the code I used above to the NOVA upstream code? Two changes: * add option "enable_az_attach_list" in nove/[cinder] * passing checking availability zone in check_availability_zone function if there is enable_az_attach_list Thanks, Kiseok Kim On Wed, Jan 22, 2020 at 3:09 PM Kim KS wrote: > Hello, Brin and Matt. > and Thank you. > > I'll tell you more about my use case: > > * First, I create an instance(I'll call it NODE01) and a volume in same > AZ. (so I use 'cross_az_attach = False' option) > * and I create a cinder volume(I'll call it PV01) in different Volume > Zone(I'll call it KubePVZone) > * and then I would like to attach PV01 volume to NODE01 instance. > > KubePVZone is volume zone for kubernetes's persistent volume and NODE01 is > a kubernetes' node. > KubePVZone's volumes can be attached to the other kubernetes's nodes. > > So I would like to use options like: > > [cinder] > cross_az_attach = False > enable_az_attach_list = KubePVZone > > Let me know if there is a lack of explanation. > > I currently use the code by adding in to check_availability_zone method: > > > https://github.com/openstack/nova/blob/058e77e26c1b52ab7d3a79a2b2991ca772318105/nova/volume/cinder.py#L534 > > + if volume['availability_zone'] in > CONF.cinder.enable_az_attach_list: > + LOG.info("allowed AZ for attaching in different availability > zone: %s", > + volume['availability_zone']) > + return > > Best, > Kiseok Kim > > > > 2020. 1. 21. 오전 11:35, Brin Zhang(张百林) 작성: > > > > Hi, Kim KS: > > "cross_az_attach"'s default value is True, that means a llow attach > between instance and volume in different availability zones. > > If False, volumes attached to an instance must be in the same > availability zone in Cinder as the instance availability zone in Nova. > Another thing is, you should care booting an BFV instance from "image", and > this should interact the " allow_availability_zone_fallback" in Cinder, if > " allow_availability_zone_fallback=False" and *that* request AZ does not in > Cinder, the request will be fail. > > > > > > About specify AZ to unshelve a shelved_offloaded server, about the > cross_az_attach something you can know > > > https://github.com/openstack/nova/blob/master/releasenotes/notes/bp-specifying-az-to-unshelve-server-aa355fef1eab2c02.yaml > > > > Availability Zones docs, that contains some description with > cinder.cross_az_attach > > > https://docs.openstack.org/nova/latest/admin/availability-zones.html#implications-for-moving-servers > > > > cross_az_attach configuration: > https://docs.openstack.org/nova/train/configuration/config.html#cinder.cross_az_attach > > > > And cross_az_attach with the server is in > > > https://github.com/openstack/nova/blob/master/nova/volume/cinder.py#L523-L545 > > > > I am not sure why you are need " enable_az_attach_list = AZ1,AZ2" > configuration? 
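For illustration, the enable_az_attach_list option proposed above (not an existing Nova option) would presumably be registered in Nova's [cinder] group along these lines; this is only an oslo.config sketch, with everything except the option name assumed:

    from oslo_config import cfg

    # Sketch only: nova already defines a 'cinder' option group (where
    # cross_az_attach lives); the proposed ListOpt would sit alongside it.
    cinder_group = cfg.OptGroup('cinder', title='Cinder Options')

    enable_az_attach_list = cfg.ListOpt(
        'enable_az_attach_list',
        default=[],
        help='Availability zones whose volumes may still be attached to an '
             'instance in another AZ even when cross_az_attach is False.')

    def register_opts(conf):
        conf.register_group(cinder_group)
        conf.register_opt(enable_az_attach_list, group=cinder_group)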
> > > > brinzhang > > > > > >> cross_az_attach > >> > >> Hello all, > >> > >> In nova with setting [cinder]/ cross_az_attach option to false, nova > creates > >> instance and volume in same AZ. > >> > >> but some of usecase (in my case), we need to attach new volume in > different > >> AZ to the instance. > >> > >> so I need two options. > >> > >> one is for nova block device mapping and attaching volume and another > is for > >> attaching volume in specified AZ. > >> > >> [cinder] > >> cross_az_attach = False > >> enable_az_attach_list = AZ1,AZ2 > >> > >> how do you all think of it? > >> > >> Best, > >> Kiseok > >> > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pspal83 at hotmail.com Sun Feb 2 06:58:04 2020 From: pspal83 at hotmail.com (pradeep pal) Date: Sun, 2 Feb 2020 06:58:04 +0000 Subject: [horizon] [keystone] Re: [User-committee] Help In-Reply-To: <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> References: , <4D16338C-47A4-4EFA-9A99-8ADD61943F63@demarco.com> Message-ID: Hi, Thanks for your reply. I have investigate and found that the issue was comes due to wrong IP use in admin/public/internal on keystone-mange Regards Pradeep Get Outlook for iOS ________________________________ From: Amy Marrich Sent: Saturday, February 1, 2020 9:28:46 PM To: pradeep pal ; openstack-discuss Cc: user-committee at lists.openstack.org Subject: [horizon] [keystone] Re: [User-committee] Help Pradeep, I am sending this to the OpenStack discuss mailing list where you might be able to receive more help. I have tagged both the Horizon and Keystone teams as the error seems to be from Horizon and concerning Keystone. In order to provide more assistance, information as to what you were doing at the time will be needed. Please confirm this is OpenStack Rocky on CentOS and how you installed, from scratch, TripleO, OpenStack-Ansible, etc. Thanks, Amy (spotz) On Feb 1, 2020, at 7:11 AM, pradeep pal wrote:  Rocky+Centos 7.7 64bit, 2020-02-01 18:26:38.063938 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option 2020-02-01 18:26:38.549453 mod_wsgi (pid=82228): Target WSGI script '/usr/bin/keystone-wsgi-public' cannot be loaded as Python module. 2020-02-01 18:26:38.549516 mod_wsgi (pid=82228): Exception occurred processing WSGI script '/usr/bin/keystone-wsgi-public'. 
2020-02-01 18:26:38.549558 Traceback (most recent call last): 2020-02-01 18:26:38.549600 File "/usr/bin/keystone-wsgi-public", line 54, in 2020-02-01 18:26:38.549666 application = initialize_public_application() 2020-02-01 18:26:38.549695 File "/usr/lib/python2.7/site-packages/keystone/server/wsgi.py", line 24, in initialize_public_application 2020-02-01 18:26:38.549763 name='public', config_files=flask_core._get_config_files()) 2020-02-01 18:26:38.549799 File "/usr/lib/python2.7/site-packages/keystone/server/flask/core.py", line 149, in initialize_application 2020-02-01 18:26:38.549862 keystone.server.configure(config_files=config_files) 2020-02-01 18:26:38.549897 File "/usr/lib/python2.7/site-packages/keystone/server/__init__.py", line 28, in configure 2020-02-01 18:26:38.549958 keystone.conf.configure() 2020-02-01 18:26:38.549988 File "/usr/lib/python2.7/site-packages/keystone/conf/__init__.py", line 125, in configure 2020-02-01 18:26:38.550040 help='Do not monkey-patch threading system modules.')) 2020-02-01 18:26:38.550084 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2501, in __inner 2020-02-01 18:26:38.550137 result = f(self, *args, **kwargs) 2020-02-01 18:26:38.550164 File "/usr/lib/python2.7/site-packages/oslo_config/cfg.py", line 2776, in register_cli_opt 2020-02-01 18:26:38.550225 raise ArgsAlreadyParsedError("cannot register CLI option") 2020-02-01 18:26:38.550276 ArgsAlreadyParsedError: arguments already parsed: cannot register CLI option _______________________________________________ User-committee mailing list User-committee at lists.openstack.org http://lists.openstack.org/cgi-bin/mailman/listinfo/user-committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Mon Feb 3 02:02:22 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Mon, 3 Feb 2020 10:02:22 +0800 Subject: [kolla] ujson issue affected to few containers. In-Reply-To: References: <801b30a3-62a1-a1e9-c0ef-973baa19b4a0@binero.se> Message-ID: We tested the Rocky deployment based on Ubuntu binary, and confirm that the binary one was not affect the issue. So it means that it will need to use binary type on Rocky if the user going to use Gnocchi + Ceilometer. - Eddie Radosław Piliszek 於 2020年1月31日 週五 下午4:18寫道: > I checked ceilometer and it seems they dropped ujson in queens, it's > only the gnocchi client that still uses it, unfortunately. > > -yoctozepto > > pt., 31 sty 2020 o 09:01 Radosław Piliszek > napisał(a): > > > > Well, the release does not look it's going to happen ever. > > Ubuntu binary rocky probably froze in time so it has a higher chance > > of working, though a potential rebuild will probably kill it as well. > > > > Let's start a general thread about ujson. > > > > -yoctozepto > > > > pt., 31 sty 2020 o 03:26 Eddie Yen napisał(a): > > > > > > In summary, looks like we have to wait the project release the fixed > code on PyPI or compile the source code from its git project. > > > Otherwise these containers may still affected this issue and can't > deploy or working. > > > > > > We may going to try the Ubuntu binary deployment to see it also affect > of not. Perhaps the user may going to deploy with binary on Ubuntu before > the fix release to PyPI. > > > > > > - Eddie > > > > > > Tobias Urdin 於 2020年1月30日 週四 下午4:29寫道: > > >> > > >> Seeing this issue when messing around with Gnocchi on Ubuntu 18.04 as > well. > > >> Temp solved it by installing ujson from master as suggested in [1] > instead of pypi. 
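The workaround Tobias describes (installing ujson from its git master rather than the broken 1.35 release on PyPI) amounts to a pip VCS install, roughly as below; whether doing this inside the affected container images is acceptable is a separate question:

    # Illustrative only: pulls ujson from git master, which contains the
    # fix referenced in [1], instead of the 1.35 release on PyPI.
    pip install git+https://github.com/esnme/ultrajson.git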
> > >> > > >> [1] https://github.com/esnme/ultrajson/issues/346 > > >> > > >> On 1/30/20 9:10 AM, Eddie Yen wrote: > > >> > > >> Hi Radosław, > > >> > > >> Sorry about lost distro information, the distro we're using is Ubuntu. > > >> > > >> We have an old copy of ceilometer container image, the ujson.so > version between old and latest are both 1.35 > > >> But only latest one affected this issue. > > >> > > >> BTW, I read the last reply on issue page. Since he said the python 3 > with newer GCC is OK, I think it may caused by python version issue or GCC > compiler versioning. > > >> It may become a huge architect if it really caused by compiling > issue, if Ubuntu updated GCC or python. > > >> > > >> Radosław Piliszek 於 2020年1月30日 週四 > 下午3:48寫道: > > >>> > > >>> Hi Eddie, > > >>> > > >>> the issue is that the project did *not* do a release. > > >>> The latest is still 1.35 from Jan 20, *2016*... [1] > > >>> > > >>> You said only Rocky source - but is this ubuntu or centos? > > >>> > > >>> Also, by the looks of [2] master ceilometer is no longer affected, > but > > >>> monasca and mistral might still be if they call affected paths. > > >>> > > >>> The project looks dead so we are fried unless we override and start > > >>> using its sources from git (hacky hacky). > > >>> > > >>> [1] https://pypi.org/project/ujson/#history > > >>> [2] http://codesearch.openstack.org/?q=ujson&i=nope&files=&repos= > > >>> > > >>> -yoctozepto > > >>> > > >>> > > >>> czw., 30 sty 2020 o 03:31 Eddie Yen > napisał(a): > > >>> > > > >>> > Hi everyone, > > >>> > > > >>> > I'm not sure it should be bug report or not. So I email out about > this issue. > > >>> > > > >>> > In these days, I found the Rocky source deployment always failed > at Ceilometer bootstrapping. Then I found it failed at ceilometer-upgrade. > > >>> > So I tried to looking at ceilometer-upgrade.log and the error > shows it failed to import ujson. > > >>> > > > >>> > https://pastebin.com/nGqsM0uf > > >>> > > > >>> > Then I googled it and found this issue is already happened and > released fixes. > > >>> > https://github.com/esnme/ultrajson/issues/346 > > >>> > > > >>> > But it seems like the container still using the questionable one, > even today (Jan 30 UTC+8). > > >>> > And this not only affected to Ceilometer, but may also Gnocchi. > > >>> > > > >>> > I think we have to patch it, but not sure about the workaround. > > >>> > Does anyone have good idea? > > >>> > > > >>> > Many thanks, > > >>> > Eddie. > > >> > > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Mon Feb 3 06:11:50 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Mon, 3 Feb 2020 06:11:50 +0000 (UTC) Subject: [openstack-dev][kuryr] How to add new worker node References: <420285313.629880.1580710310910.ref@mail.yahoo.com> Message-ID: <420285313.629880.1580710310910@mail.yahoo.com> Hi , I successfully install kuryr-kubernetes using below linkhttps://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/basic.html How to add a new external worker node to existing controller node. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From licanwei_cn at 163.com Mon Feb 3 07:17:04 2020 From: licanwei_cn at 163.com (licanwei) Date: Mon, 3 Feb 2020 15:17:04 +0800 (GMT+08:00) Subject: [Watcher] confused about meeting schedule In-Reply-To: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: Hi Corne, Sorry for the confusion, it's my fault. i don't realised that the week changed with the new year. There were the Chinese new year holiday in the last two weeks, so we cancelled the irc meeting. Because of the 2019-nCov, we had told to stay at home, i don't know if i can go to work next week, if i can, we will have the irc meeting on 12th Febr. and if not, i will send a notification mail. Thanks, licanwei | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 01/22/2020 16:07, info at dantalion.nl wrote: Hello everyone, The documentation for Watcher states that meetings will be held on a bi-weekly basis on odd-weeks, however, last meeting was held on the 8th of January which is not an odd-week. Today I was expecting a meeting as the meetings are held bi-weekly and the last one was held on the 8th of January, however, there was none. Can someone clarify when the next meeting will be held and the subsequent one after that? If these are on even weeks we should also update Watcher's documentation. Kind regards, Corne lukken -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtantsur at redhat.com Mon Feb 3 07:30:54 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Mon, 3 Feb 2020 08:30:54 +0100 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: Message-ID: One comment inline, On Fri, Jan 24, 2020 at 8:21 PM Alfredo Moralejo Alonso wrote: > > Hi, > > We were given access to CBS to build centos8 dependencies a couple of days > ago and we are still in the process of re-bootstraping it. I hope we'll > have all that is missing in the next days. > > See my comments below. > > Best regards, > > Alfredo > > > On Fri, Jan 24, 2020 at 7:21 PM Wesley Hayutin > wrote: > >> Greetings, >> >> I know the ceph repo is in progress. >> TripleO / RDO is not releasing opendaylight >> >> Can the RDO team comment on the rest of the missing packages here please? >> >> Thank you!! >> >> https://review.opendev.org/#/c/699414/9/kolla/image/build.py >> >> NOTE(mgoddard): Mark images with missing dependencies as unbuildable for >> # CentOS 8. >> 'centos8': { >> "barbican-api", # Missing uwsgi-plugin-python3 >> > We'll take care of uwsgi. > >> "ceph-base", # Missing Ceph repo >> "cinder-base", # Missing Ceph repo >> "collectd", # Missing collectd-ping and >> # collectd-sensubility packages >> > About collectd and sensu, Matthias already replied from OpsTools side > >> "elasticsearch", # Missing elasticsearch repo >> "etcd", # Missing etcd package >> > Given that etcd is not longer in CentOS base (it was in 7), I guess we'll > take care of etcd unless some other sig is building it as part of k8s > family. > >> "fluentd", # Missing td-agent repo >> > See Matthias reply. > >> "glance-base", # Missing Ceph repo >> "gnocchi-base", # Missing Ceph repo >> "hacluster-base", # Missing hacluster repo >> > > That's an alternative repo for HA related packages for CentOS: > > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > Which still does not provide packages for centos8. 
> > Note that centos8.1 includes pacemaker, corosync and pcs in > HighAvailability repo. Maybe it could be used instead of the current one. > > >> "ironic-conductor", # Missing shellinabox package >> > > shellinabox is epel. It was never used in tripleo containers, it's really > required? > It's a part of an optional ironic feature. TripleO doesn't use it by default [1] so probably fine to remove. However, there may be people using it outside of RH products. [1] https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/deployment/ironic/ironic-conductor-container-puppet.yaml#L125 > > >> "kibana", # Missing elasticsearch repo >> > > We never provided elasticsearch in the past, is consumed from > elasticsearch repo iirc > > >> "manila-share", # Missing Ceph repo >> "mongodb", # Missing mongodb and mongodb-server >> packages >> > > Mongodb was retired from RDO time ago as it was not longer the recommended > backend for any service. In CentOS7 is pulled from EPEL. > > >> "monasca-grafana", # Using python2 >> "nova-compute", # Missing Ceph repo >> "nova-libvirt", # Missing Ceph repo >> "nova-spicehtml5proxy", # Missing spicehtml5 package >> > > spice-html5 is pulled from epel7 was never part of RDO. Not used in > TripleO. > > >> "opendaylight", # Missing opendaylight repo >> "ovsdpdk", # Not supported on CentOS >> "sensu-base", # Missing sensu package >> > > See Matthias reply. > > >> "tgtd", # Not supported on CentOS 8 >> > > tgtd was replace by scsi-target-utils. It's was never provided in RDO, in > kolla was pulled from epel for 7 > > >> }, >> >> 'centos8+source': { >> "barbican-base", # Missing uwsgi-plugin-python3 >> "bifrost-base", # Bifrost does not support CentOS 8 >> "cyborg-agent", # opae-sdk does not support CentOS 8 >> "freezer-base", # Missing package trickle >> "masakari-monitors", # Missing hacluster repo >> "zun-compute", # Missing Ceph repo >> _______________________________________________ >> dev mailing list >> dev at lists.rdoproject.org >> http://lists.rdoproject.org/mailman/listinfo/dev >> >> To unsubscribe: dev-unsubscribe at lists.rdoproject.org >> > _______________________________________________ > dev mailing list > dev at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngompa13 at gmail.com Sun Feb 2 21:06:21 2020 From: ngompa13 at gmail.com (Neal Gompa) Date: Sun, 2 Feb 2020 16:06:21 -0500 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > wrote: > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > >> > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > >> > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > >> > wrote: > >> > > > >> > > I know it was for masakari. > >> > > Gaëtan had to grab crmsh from opensuse: > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > >> > > > >> > > -yoctozepto > >> > > >> > Thanks Wes for getting this discussion going. I've been looking at > >> > CentOS 8 today and trying to assess where we are. 
I created an > >> > Etherpad to track status: > >> > https://etherpad.openstack.org/p/kolla-centos8 > >> > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > I found them, thanks. > > > > >> > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > >> code when installing packages. It often happens on the rabbitmq and > >> grafana images. There is a prompt about importing GPG keys prior to > >> the error. > >> > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > >> > >> Related bug report? https://github.com/containers/libpod/issues/4431 > >> > >> Anyone familiar with it? > >> > > > > Didn't know about this issue. > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > It seems to be due to the use of a GPG check on the repo (as opposed > to packages). DNF doesn't use keys imported via rpm --import for this > (I'm not sure what it uses), and prompts to add the key. This breaks > without a terminal. More explanation here: > https://review.opendev.org/#/c/704782. > librepo has its own keyring for repo signature verification. -- 真実はいつも一つ!/ Always, there's only one truth! From mnaser at vexxhost.com Mon Feb 3 08:20:33 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Mon, 3 Feb 2020 09:20:33 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: Thanks for updating licanwei :) On Mon, Feb 3, 2020 at 8:23 AM licanwei wrote: > Hi Corne, > Sorry for the confusion, it's my fault. i don't realised that the week > changed with the new year. > There were the Chinese new year holiday in the last two weeks, so we > cancelled the irc meeting. > Because of the 2019-nCov, we had told to stay at home, i don't know if i > can go to work next week, if i can, we will have the irc meeting on 12th > Febr. and if not, i will send a notification mail. > > Thanks, > licanwei > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > > > > 签名由 网易邮箱大师 定制 > > On 01/22/2020 16:07, info at dantalion.nl wrote: > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? > > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > > -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mark at stackhpc.com Mon Feb 3 08:26:48 2020 From: mark at stackhpc.com (Mark Goddard) Date: Mon, 3 Feb 2020 08:26:48 +0000 Subject: Kayobe Openstack deployment In-Reply-To: References: Message-ID: On Fri, 31 Jan 2020 at 06:06, Tony Pearce wrote: > > Thanks again Mark for your support and patience yesterday. I dont think I would have been able to go beyond that hurdle alone. 
> > I have gone back to the universe from nothing this morning. The issue I had there was actually the same issue that you had helped me with; so I have now moved past that point. I am running this in a VM and I did not have nested virtualisation enabled on the hosts so I've had to side step to get that implemented. I am sticking with the stable/train. I was not sure about this, but I had figured that as I want to deploy Openstack Train, I'd need the Kayobe stable/train. That's correct - use stable/train for kayobe, kayobe-config and a-universe-from-nothing to deploy Train. > > In terms of the docs - I may be in a good position to help here. I'm not a coder by any means, so I may be in a position to contribute back in this doc sense. That would be a great help. These first experiences with a project's documentation are always valuable when determining where the gaps are. Get in touch if you need help with the tooling. > > Teething issues aside, I really like what I am seeing from Kayobe etc. compared to my previous experience with different deployment tool this seems much more user-friendly. Glad to hear it :) > > Thanks again > > Regards > > Tony > > > On Thu, 30 Jan 2020 at 21:40, Mark Goddard wrote: >> >> On Thu, 30 Jan 2020 at 08:22, Tony Pearce wrote: >> > >> > Hi all - I wanted to ask if there was such a reference architecture / step-by-step deployment guide for Openstack / Kayobe that I could follow to get a better understanding of the components and how to go about deploying it? >> >> Hi Tony, we spoke in the #openstack-kolla IRC channel [1], but I >> thought I'd reply here for the benefit of anyone reading this. >> >> > >> > The documentation is not that great so I'm hitting various issues when trying to follow what is there on the Openstack site. There's a lot of technical things like information on variables - which is fantastic, but there's no context about them. For example, the architecture page is pretty small, when you get further on in the guide it's difficult to contextually link detail back to the architecture. >> >> As discussed in IRC, we are missing some architecture and from scratch >> walkthrough documentation in Kayobe. I've been focusing on the >> configuration reference mostly, but I think it is time to move onto >> these other areas to help new starters. >> >> > >> > I tried to do the all-in-one deployment as well as the "universe from nothing approach" but hit some issues there as well. Plus it's kind of like trying to learn how to drive a bus by riding a micro-scooter :) >> >> I would definitely recommend persevering with the universe from >> nothing demo [2], as it offers the quickest way to get a system up and >> running that you can poke at. It's also a fairly good example of a >> 'bare minimum' configuration. Could you share the issues you had with >> it? For an even simpler setup, you could try [3], which gets you an >> all-in-one control plane/compute host quite quickly. I'd suggest using >> the stable/train branch for a more stable environment. >> >> > >> > Also, the "report bug" bug link on the top of all the pages is going to an error "page does not exist" - not sure that had been realised yet. >> >> Andreas Jaeger kindly proposed a fix for this. Here's the storyboard >> link: https://storyboard.openstack.org/#!/project/openstack/kayobe. 
>> >> [1] http://eavesdrop.openstack.org/irclogs/%23openstack-kolla/%23openstack-kolla.2020-01-30.log.html#t2020-01-30T04:07:14 >> [2] https://github.com/stackhpc/a-universe-from-nothing >> [3] https://docs.openstack.org/kayobe/latest/development/automated.html#overcloud >> >> > >> > Regards, >> > >> > >> > Tony Pearce | Senior Network Engineer / Infrastructure Lead >> > Cinglevue International >> > >> > Email: tony.pearce at cinglevue.com >> > Web: http://www.cinglevue.com >> > >> > Australia >> > 1 Walsh Loop, Joondalup, WA 6027 Australia. >> > >> > Direct: +61 8 6202 0036 | Main: +61 8 6202 0024 >> > >> > Note: This email and all attachments are the sole property of Cinglevue International Pty Ltd. (or any of its subsidiary entities), and the information contained herein must be considered confidential, unless specified otherwise. If you are not the intended recipient, you must not use or forward the information contained in these documents. If you have received this message in error, please delete the email and notify the sender. >> > >> > From mdulko at redhat.com Mon Feb 3 09:08:35 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 03 Feb 2020 10:08:35 +0100 Subject: [openstack-dev][kuryr] How to add new worker node In-Reply-To: <420285313.629880.1580710310910@mail.yahoo.com> References: <420285313.629880.1580710310910.ref@mail.yahoo.com> <420285313.629880.1580710310910@mail.yahoo.com> Message-ID: On Mon, 2020-02-03 at 06:11 +0000, VeeraReddy wrote: > Hi , > > I successfully install kuryr-kubernetes using below link > https://docs.openstack.org/kuryr-kubernetes/latest/installation/devstack/basic.html > > How to add a new external worker node to existing controller node. Hi, If you used default DevStack's setting for the VIF driver - that is neutron_vif, then on the node you need kubelet, kuryr-daemon and neutron-agent. Besides that if you're using containerized Kuryr, then you need to set KURYR_FORCE_IMAGE_BUILD=true in local.conf. Also you need to set K8s API endpoint using KURYR_K8S_API_URL var. We maintain a multinode configuration that we use in the gate at [1]. The settings in `vars` are for the controller node and the settings in `group-vars.subnode` are for the node. Please note that Zuul has inheritance mechanisms, meaning that more settings are inherited from kuryr-kubernetes-tempest-base [2] and devstack [3] jobs definitions. [1] https://github.com/openstack/kuryr-kubernetes/blob/master/.zuul.d/multinode.yaml [2] https://github.com/openstack/kuryr-kubernetes/blob/28b27c5de2ae10c88295a44312ec1a3d1449f99c/.zuul.d/base.yaml#L16 [3] https://github.com/openstack/devstack/blob/5c6b3c32791f6a1b6e3646e739d41ae86d866d45/.zuul.yaml#L342 Thanks, Michał > > Regards, > Veera. From info at dantalion.nl Mon Feb 3 09:28:11 2020 From: info at dantalion.nl (info at dantalion.nl) Date: Mon, 3 Feb 2020 10:28:11 +0100 Subject: [Watcher] confused about meeting schedule In-Reply-To: References: <522d1ea8-c333-1027-1c6e-b21e8838e07e@dantalion.nl> Message-ID: <528f0c31-9ce4-38dd-c312-6f95a6d681bd@dantalion.nl> Hello Licanwei, I understand, we will continue irc meetings when it is possible. Your and other contributors health is naturally more important. Stay safe, Kind regards, Corne Lukken On 2/3/20 8:17 AM, licanwei wrote: > Hi Corne, > Sorry for the confusion, it's my fault. i don't realised that the week changed with the new year. > There were the Chinese new year holiday in the last two weeks, so we cancelled the irc meeting. 
> Because of the 2019-nCov, we had told to stay at home, i don't know if i can go to work next week, if i can, we will have the irc meeting on 12th Febr. and if not, i will send a notification mail. > > Thanks, > licanwei > > > > > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > > On 01/22/2020 16:07, info at dantalion.nl wrote: > Hello everyone, > > The documentation for Watcher states that meetings will be held on a > bi-weekly basis on odd-weeks, however, last meeting was held on the 8th > of January which is not an odd-week. > > Today I was expecting a meeting as the meetings are held bi-weekly and > the last one was held on the 8th of January, however, there was none. > > Can someone clarify when the next meeting will be held and the > subsequent one after that? > > If these are on even weeks we should also update Watcher's documentation. > > Kind regards, > Corne lukken > From mdulko at redhat.com Mon Feb 3 11:53:19 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 03 Feb 2020 12:53:19 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core Message-ID: Hi, I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes project. Maysa shown numerous examples of diligent and valuable work in terms of code contribution (e.g. in network policy support), project maintenance and reviews [1]. Please express support or objections by replying to this email. Assuming that there will be no pushback, I'll proceed with granting Maysa core powers by the end of this week. Thanks, Michał [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri From ltomasbo at redhat.com Mon Feb 3 12:15:16 2020 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Mon, 3 Feb 2020 13:15:16 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: References: Message-ID: Truly deserved! +2!! She has been doing an amazing work both implementing new features as well as chasing down bugs. On Mon, Feb 3, 2020 at 12:58 PM wrote: > Hi, > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > project. > > Maysa shown numerous examples of diligent and valuable work in terms of > code contribution (e.g. in network policy support), project maintenance > and reviews [1]. > > Please express support or objections by replying to this email. > Assuming that there will be no pushback, I'll proceed with granting > Maysa core powers by the end of this week. > > Thanks, > Michał > > [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > -- LUIS TOMÁS BOLÍVAR Senior Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 3 13:47:08 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Feb 2020 07:47:08 -0600 Subject: =?UTF-8?Q?Re:_[qa]_Proposing_Rados=C5=82aw_Piliszek__to_devstack_core?= In-Reply-To: <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> References: <16fe7556c9c.c1957cd473318.237960248883865388@ghanshyammann.com> <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> Message-ID: <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> Added in the core group. Welcome, Radosław to the team. 
-gmann ---- On Wed, 29 Jan 2020 08:57:43 -0600 Jens Harbott wrote ---- > On Mon, 2020-01-27 at 08:08 -0600, Ghanshyam Mann wrote: > > Hello Everyone, > > > > Radosław Piliszek (yoctozepto) has been doing nice work in devstack > > from code as well as review perspective. > > He has been helping for many bugs fixes nowadays and having him as > > Core will help us to speed up the things. > > > > I would like to propose him for Devstack Core. You can vote/feedback > > on this email. If no objection by end of this week, I will add him to > > the list. > > Big +2 from me. > > Jens (frickler) > > > From paye600 at gmail.com Mon Feb 3 15:36:45 2020 From: paye600 at gmail.com (Roman Gorshunov) Date: Mon, 3 Feb 2020 16:36:45 +0100 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> <44C6FA46-6529-453D-9AE1-8908F9E16839@windriver.com> Message-ID: Hello Bin, Yes, that's correct. When you are connecting to the GitHub, your username should be git. GitHub recognizes you by your SSH key [0]. Example: [roman at pc ~]$ ssh git at github.com PTY allocation request failed on channel 0 Hi gorshunovr! You've successfully authenticated, but GitHub does not provide shell access. Connection to github.com closed. [roman at pc ~]$ As you may see, GitHub has recognized me as 'gorshunovr', despite I was using 'git' as a username for SSH. [0] https://help.github.com/en/github/authenticating-to-github/testing-your-ssh-connection Please, use mailing list for the communication, not direct e-mail. Thank you. Best regards, -- Roman Gorshunov On Mon, Feb 3, 2020 at 3:53 PM Qian, Bin wrote: > > Hi Roman, > > Thank you for the information about GitHub mirroring. > Based on the info below, I think what we want to do is to: > 1. create a GitHub account, which has the privilege to commit to our repos, > 2. with the account, create ssh key on zuul server and upload the ssh key to GitHub > 3. use zuul tool [3] to create zuul job secret, embed it to the upload job in zuul.yaml > 4. add the upload job a post job > > But I am not very sure about the step 1, as in the reference [4] below, it states, > "For GitHub, the user parameter is git, not your personal username." > > Would you please let me know if my steps above is correct? > > Thanks, > > Bin > > ________________________________ > From: Waines, Greg > Sent: Thursday, January 30, 2020 10:21 AM > To: Qian, Bin > Cc: Eslimi, Dariush; Khalil, Ghada > Subject: FW: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > fyi > > > > From: Roman Gorshunov > Date: Thursday, January 30, 2020 at 11:38 AM > To: "openstack-discuss at lists.openstack.org" > Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > > > Hello Greg, > > > > - Create a GitHub account for the starlingx, you will have URL on > > GitHub like [0]. You may try to contact GitHub admins and ask to > > release starlingx name, as it seems to be unused. 
> > - Then create SSH key and upload onto GitHub for that account in > > GitHub interface > > - Create empty repositories under [1] to match your existing > > repositories names here [2] > > - Encrypt SSH private key as described here [3] or here [4] using this tool [5] > > - Create patch to all your projects under [2] to '/.zuul.yaml' file, > > similar to what is listed here [6] > > - Change job name shown on line 407 via URL above, description (line > > 409), git_mirror_repository variable (line 411), secret name (line 414 > > and 418), and SSH key starting from line 424 to 463, to match your > > project's name, repo path on GitHub, and SSH key > > - Submit changes to Gerrit with patches for all your project and get > > them merged. If all goes good, the next change merged would trigger > > your repositories to be synced to GitHub. Status could be seen here > > [7] - search for your newly created job manes, they should be nested > > under "upload-git-mirrorMirrors a tested project repository to a > > remote git server." > > > > Hope it helps. > > > > [0] https://github.com/starlingxxxx > > [1] https://github.com/starlingxxxx/... > > [2] https://opendev.org/starlingx/... > > [3] https://docs.openstack.org/infra/manual/zuulv3.html#secret-variables > > [4] https://docs.openstack.org/infra/manual/creators.html#mirroring-projects-to-git-mirrors > > [5] https://opendev.org/zuul/zuul/src/branch/master/tools/encrypt_secret.py > > [6] https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463 > > [7] https://zuul.openstack.org/jobs > > > > Sample content of your addition (patch) to your '/.zuul.yaml' files: > > =================================================== > > - job: > > name: starlingx-compile-upload-git-mirror > > parent: upload-git-mirror > > description: Mirrors starlingx/compile to starlingxxxx/compile > > vars: > > git_mirror_repository: starlingxxxx/compile > > secrets: > > - name: git_mirror_credentials > > secret: starlingx-compile-github-secret > > pass-to-parent: true > > > > - secret: > > name: starlingx-compile-github-secret > > data: > > user: git > > host: github.com > > host_key: github.com ssh-rsa > > AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== > > ssh_key: !encrypted/pkcs1-oaep > > - > > =================================================== > > > > Best regards, > > -- > > Roman Gorshunov > > > > From srinivasd.ctr at kaminario.com Mon Feb 3 13:50:03 2020 From: srinivasd.ctr at kaminario.com (Srinivas Dasthagiri) Date: Mon, 3 Feb 2020 13:50:03 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: , Message-ID: Hi Jay, We are working on Kaminario CI fresh configuration(since it is too old, it has broken). We have communicated with OpenStack-infra community for the suggestions and documentation. One of the community member suggested us to go with manual CI configuration(Not third party CI) instead of CI configuration with puppet architecture(Since it is moving to Ansible). But we did not get documents for all other CI components except ZuulV3 from community. Can you please confirm us that we have to go with manual CI components configuration. 
If YES, can you please provide us the necessary document to configure openstack CI without using puppet modules. If NO, Shall we configure the CI with puppet modules in order to make Kaminario CI up and running for now and later we will upgrade the CI to use Zuul V3. Shall we go with this approach? Thanks & Regards Srinivas & Venkata Krishna ________________________________ From: Ido Benda Sent: 23 January 2020 18:11 To: jsbryant at electronicjungle.net ; openstack-discuss at lists.openstack.org ; inspur.ci at inspur.com ; wangyong2017 at inspur.com ; Chengwei.Chou at infortrend.com ; Bill.Sung at infortrend.com ; Kuirong.Chen(陳奎融) ; Srinivas Dasthagiri ; nec-cinder-ci at istorage.jp.nec.com ; silvan at quobyte.com ; robert at quobyte.com ; felix at quobyte.com ; bjoern at quobyte.com ; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: RE: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... Hi Jay, Kaminario’s CI is broken since the drop of Xenial support. We are working to resolve the these issues. Ido Benda www.kaminario.com Mobile: +(972)-52-4799393 E-Mail: ido.benda at kaminario.com From: Jay Bryant Sent: Wednesday, January 22, 2020 21:51 To: openstack-discuss at lists.openstack.org; inspur.ci at inspur.com; wangyong2017 at inspur.com; Chengwei.Chou at infortrend.com; Bill.Sung at infortrend.com; Kuirong.Chen(陳奎融) ; Ido Benda ; Srinivas Dasthagiri ; nec-cinder-ci at istorage.jp.nec.com; silvan at quobyte.com; robert at quobyte.com; felix at quobyte.com; bjoern at quobyte.com; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... All, We once again are at the point in the release where we are talking about 3rd Party CI and what is going on for Cinder. At the moment I have analyzed drivers that have not successfully reported results on a Cinder patch in 30 or more days and have put together the following list of drivers to be unsupported in the Ussuri release: * Inspur Drivers * Infortrend * Kaminario * NEC * Quobyte * Zadara * HPE Drivers If your name is in the list above you are receiving this e-mail directly, not just through the mailing list. If you are working on resolving CI issues please let me know so we can discuss how to proceed. In addition to the fact that we will be pushing up unsupported patches for the drivers above, we have already unsupported and removed a number of drivers during this release. They are as follows: * Unsupported: * MacroSAN Driver * Removed: * ProphetStor Driver * Nimble Storage Driver * Veritas Access Driver * Veritas CNFS Driver * Virtuozzo Storage Driver * Huawei FusionStorage Driver * Sheepdog Storage Driver Obviously we are reaching the point that the number of drivers leaving the community is concerning and it has sparked discussions around the fact that maybe our 3rd Party CI approach isn't working as intended. So what do we do? Just mark drivers unsupported and no longer remove drivers? Do we restore drivers that have recently been removed? We are planning to have further discussion around these questions at our next Cinder meeting in #openstack-meeting-4 on Wednesday, 1/29/20 at 14:00 UTC. If you have thoughts or strong opinions around this topic please join us. Thank you! Jay Bryant jsbryant at electronicjungle.net IRC: jungleboyj -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From radoslaw.piliszek at gmail.com Mon Feb 3 15:49:43 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 3 Feb 2020 16:49:43 +0100 Subject: =?UTF-8?Q?Re=3A_=5Bqa=5D_Proposing_Rados=C5=82aw_Piliszek_to_devstack_co?= =?UTF-8?Q?re?= In-Reply-To: <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> References: <16fe7556c9c.c1957cd473318.237960248883865388@ghanshyammann.com> <41ca2fc4729f25b5f1073f45ecbfe502f4426e24.camel@offenerstapel.de> <1700b4e08f5.cb1eb2ab203056.6499055620210766435@ghanshyammann.com> Message-ID: Thank you, Ghanshyam. I will do my best. -yoctozepto pon., 3 lut 2020 o 15:03 Ghanshyam Mann napisał(a): > > Added in the core group. Welcome, Radosław to the team. > > -gmann > > > ---- On Wed, 29 Jan 2020 08:57:43 -0600 Jens Harbott wrote ---- > > On Mon, 2020-01-27 at 08:08 -0600, Ghanshyam Mann wrote: > > > Hello Everyone, > > > > > > Radosław Piliszek (yoctozepto) has been doing nice work in devstack > > > from code as well as review perspective. > > > He has been helping for many bugs fixes nowadays and having him as > > > Core will help us to speed up the things. > > > > > > I would like to propose him for Devstack Core. You can vote/feedback > > > on this email. If no objection by end of this week, I will add him to > > > the list. > > > > Big +2 from me. > > > > Jens (frickler) > > > > > > > > From amy at demarco.com Mon Feb 3 15:50:50 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 3 Feb 2020 09:50:50 -0600 Subject: UC Nominations now open Message-ID: The nomination period for the February User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the two sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations can be made by sending an email to the user-committee at lists.openstack.org mailing-list[0], with the subject: “UC candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. Criteria for AUC status can be found at https://superuser.openstack.org/articles/auc-community/. If you are still not sure of your status and would like to verify in advance please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) as we are serving as the Election Officials. Thanks, Amy Marrich (spotz) 0 - Please make sure you are subscribed to this list before sending in your nomination. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Feb 3 16:02:36 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 Feb 2020 17:02:36 +0100 Subject: [oslo] drop 2.7 support - track releases Message-ID: Hello, FYI you can track the dropping of py2.7 support in oslo by using: https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) Topic: oslo_drop_py2_support We release a major version each time an oslo projects drop the py2.7 support. 
-- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Mon Feb 3 16:32:08 2020 From: hberaud at redhat.com (Herve Beraud) Date: Mon, 3 Feb 2020 17:32:08 +0100 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: References: Message-ID: The goal of this thread is to track oslo releases related to drop of py2.7 support. By creating this thread I want to isolate the oslo status from the " drop-py27-support " to help us to track our advancement internally in oslo. Le lun. 3 févr. 2020 à 17:02, Herve Beraud a écrit : > Hello, > > FYI you can track the dropping of py2.7 support in oslo by using: > > https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > > Topic: oslo_drop_py2_support > > We release a major version each time an oslo projects drop the py2.7 > support. > > -- > Hervé Beraud > Senior Software Engineer > Red Hat - Openstack Oslo > irc: hberaud > -----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Qian at windriver.com Mon Feb 3 16:33:43 2020 From: Bin.Qian at windriver.com (Qian, Bin) Date: Mon, 3 Feb 2020 16:33:43 +0000 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> <44C6FA46-6529-453D-9AE1-8908F9E16839@windriver.com> , Message-ID: Roman, That works on my account too with specifying my private key. Thanks, Bin ________________________________________ From: Roman Gorshunov [paye600 at gmail.com] Sent: Monday, February 03, 2020 7:36 AM To: openstack-discuss at lists.openstack.org Cc: Qian, Bin Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx Hello Bin, Yes, that's correct. When you are connecting to the GitHub, your username should be git. GitHub recognizes you by your SSH key [0]. Example: [roman at pc ~]$ ssh git at github.com PTY allocation request failed on channel 0 Hi gorshunovr! You've successfully authenticated, but GitHub does not provide shell access. Connection to github.com closed. [roman at pc ~]$ As you may see, GitHub has recognized me as 'gorshunovr', despite I was using 'git' as a username for SSH. [0] https://help.github.com/en/github/authenticating-to-github/testing-your-ssh-connection Please, use mailing list for the communication, not direct e-mail. Thank you. Best regards, -- Roman Gorshunov On Mon, Feb 3, 2020 at 3:53 PM Qian, Bin wrote: > > Hi Roman, > > Thank you for the information about GitHub mirroring. > Based on the info below, I think what we want to do is to: > 1. create a GitHub account, which has the privilege to commit to our repos, > 2. with the account, create ssh key on zuul server and upload the ssh key to GitHub > 3. use zuul tool [3] to create zuul job secret, embed it to the upload job in zuul.yaml > 4. add the upload job a post job > > But I am not very sure about the step 1, as in the reference [4] below, it states, > "For GitHub, the user parameter is git, not your personal username." > > Would you please let me know if my steps above is correct? > > Thanks, > > Bin > > ________________________________ > From: Waines, Greg > Sent: Thursday, January 30, 2020 10:21 AM > To: Qian, Bin > Cc: Eslimi, Dariush; Khalil, Ghada > Subject: FW: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > fyi > > > > From: Roman Gorshunov > Date: Thursday, January 30, 2020 at 11:38 AM > To: "openstack-discuss at lists.openstack.org" > Subject: Re: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx > > > > Hello Greg, > > > > - Create a GitHub account for the starlingx, you will have URL on > > GitHub like [0]. You may try to contact GitHub admins and ask to > > release starlingx name, as it seems to be unused. 
> > - Then create SSH key and upload onto GitHub for that account in > > GitHub interface > > - Create empty repositories under [1] to match your existing > > repositories names here [2] > > - Encrypt SSH private key as described here [3] or here [4] using this tool [5] > > - Create patch to all your projects under [2] to '/.zuul.yaml' file, > > similar to what is listed here [6] > > - Change job name shown on line 407 via URL above, description (line > > 409), git_mirror_repository variable (line 411), secret name (line 414 > > and 418), and SSH key starting from line 424 to 463, to match your > > project's name, repo path on GitHub, and SSH key > > - Submit changes to Gerrit with patches for all your project and get > > them merged. If all goes good, the next change merged would trigger > > your repositories to be synced to GitHub. Status could be seen here > > [7] - search for your newly created job manes, they should be nested > > under "upload-git-mirrorMirrors a tested project repository to a > > remote git server." > > > > Hope it helps. > > > > [0] https://github.com/starlingxxxx > > [1] https://github.com/starlingxxxx/... > > [2] https://opendev.org/starlingx/... > > [3] https://docs.openstack.org/infra/manual/zuulv3.html#secret-variables > > [4] https://docs.openstack.org/infra/manual/creators.html#mirroring-projects-to-git-mirrors > > [5] https://opendev.org/zuul/zuul/src/branch/master/tools/encrypt_secret.py > > [6] https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463 > > [7] https://zuul.openstack.org/jobs > > > > Sample content of your addition (patch) to your '/.zuul.yaml' files: > > =================================================== > > - job: > > name: starlingx-compile-upload-git-mirror > > parent: upload-git-mirror > > description: Mirrors starlingx/compile to starlingxxxx/compile > > vars: > > git_mirror_repository: starlingxxxx/compile > > secrets: > > - name: git_mirror_credentials > > secret: starlingx-compile-github-secret > > pass-to-parent: true > > > > - secret: > > name: starlingx-compile-github-secret > > data: > > user: git > > host: github.com > > host_key: github.com ssh-rsa > > AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== > > ssh_key: !encrypted/pkcs1-oaep > > - > > =================================================== > > > > Best regards, > > -- > > Roman Gorshunov > > > > From tidwellrdev at gmail.com Mon Feb 3 16:48:30 2020 From: tidwellrdev at gmail.com (Ryan Tidwell) Date: Mon, 3 Feb 2020 10:48:30 -0600 Subject: [neutron] Bug Deputy Report Jan. 27 - Feb. 3 Message-ID: Here's this week's bug report for neutron: https://bugs.launchpad.net/neutron/+bug/1861269 Functional tests failing due to failure with getting datapath ID from ovs Fix merged to master, backports in progress https://review.opendev.org/705401 https://review.opendev.org/705400 https://review.opendev.org/705399 https://review.opendev.org/705398 https://bugs.launchpad.net/neutron/+bug/1861442 QOS minimum bandwidth rejection of non-physnet network and updates should be driver specific This was discussed in the drivers meeting. 
The short-term plan is move where validation is done, then there is likely an RFE to tackle once this fixed. https://bugs.launchpad.net/neutron/+bug/1861496 All ports of server instance are open even no security group does allow this This one is worth a look as it involves security groups potentially not filtering things as expressed by the user. I asked a follow-up question in launchpad, others should chime in. https://bugs.launchpad.net/neutron/+bug/1861502 [OVN] Mechanism driver - failing to recreate floating IP Some errors related to the OVN ML2 driver are being observed in the gate "TypeError: create_floatingip() missing 1 required positional argument: 'floatingip'" https://bugs.launchpad.net/neutron/+bug/1861670 AttributeError: 'NetworkConnectivityTest' object has no attribute 'safe_client' Fix proposed - https://review.opendev.org/#/c/705413/ https://bugs.launchpad.net/neutron/+bug/1861674 Gateway which is not in subnet CIDR is unsupported in ha router This could use some follow-up by the team for clarification Some OVN scheduler-related issues: [OVN] GW rescheduling mechanism is triggered on every Chassis updated unnecessarily https://bugs.launchpad.net/bugs/1861510 Some PoC code up for review - https://review.opendev.org/#/c/705331/ https://bugs.launchpad.net/bugs/1861509 [OVN] GW rescheduling logic is broken RFE's: https://bugs.launchpad.net/neutron/+bug/1861032 Add support for configuring dnsmasq with multiple IPv6 addresses in same subnet on same port https://bugs.launchpad.net/neutron/+bug/1861529 A port's network should be changable -Ryan -------------- next part -------------- An HTML attachment was scrubbed... URL: From amy at demarco.com Mon Feb 3 16:51:53 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 3 Feb 2020 10:51:53 -0600 Subject: [Ansible] European Ansible Cntributors Summit Message-ID: This was shared on the OpenStack-Ansible channel this morning and I wanted to share with everyone using Ansible who might be interested in attending. https://groups.google.com/forum/#!topic/ansible-outreach/sLte90d5hdc Thanks, Amy (spotz) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 3 17:21:50 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 03 Feb 2020 11:21:50 -0600 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: References: Message-ID: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> ---- On Mon, 03 Feb 2020 10:32:08 -0600 Herve Beraud wrote ---- > The goal of this thread is to track oslo releases related to drop of py2.7 support. > By creating this thread I want to isolate the oslo status from the "drop-py27-support" to help us to track our advancement internally in oslo. You can still track the oslo related work with the original topic by adding the extra query for project matching string. - https://review.opendev.org/#/q/topic:drop-py27-support+(status:open+OR+status:merged)+projects:openstack/oslo Main idea to use the single topic "drop-py27-support" for this goal is to track it on OpenStack level or avoid duplicating the work etc. -gmann > > Le lun. 3 févr. 
2020 à 17:02, Herve Beraud a écrit : > > > -- > Hervé BeraudSenior Software Engineer > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > Hello, > FYI you can track the dropping of py2.7 support in oslo by using:https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > Topic: oslo_drop_py2_support > We release a major version each time an oslo projects drop the py2.7 support. > -- Hervé BeraudSenior Software Engineer > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > v6rDpkeNksZ9fFSyoY2o > =ECSj > -----END PGP SIGNATURE----- > > From jmacer at 42iso.com Mon Feb 3 18:03:38 2020 From: jmacer at 42iso.com (jmacer at 42iso.com) Date: Mon, 03 Feb 2020 12:03:38 -0600 Subject: Migration to Openstack from Proxmox Message-ID: <7150dfca5e513345a33573f7d00c3a83@42iso.com> We currently are using proxmox for a envrionment management, and while we were looking at DFS options we came across openstack, and we've decided to start looking at it as a replacement for proxmox. Has anyone made this migration before? Currently we're running a few Windows Server 12/16/19 virtual machines, but mostly centOS7 virtual machines, however what we are developing are micro-services that ideally would be deployed using k8s. Does anyone have any experience migrating between the two, or any other recommendation when considering openstack? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Mon Feb 3 21:36:10 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 3 Feb 2020 21:36:10 +0000 Subject: Virtio memory balloon driver Message-ID: When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at ffff988b19478000" I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). 
This is the list: [root at alberttest1 ~]# lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon [root at alberttest1 ~]# lsusb Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled around and found this: http://www.linux-kvm.org/page/Projects/auto-ballooning It looks like memory ballooning is deprecated. How can I get rid of the driver? Also they complained about my host bridge device; they say that we should have a newer one: 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) Where can I specify the host bridge? ok ozzzo one of the devices is called "virtio memory balloon" [13:18:12] do you see that? [13:18:21] yes [13:18:47] i suggest you google that and read about what it does - i think it would [13:19:02] be worth trying to disable that device on your larger vm to see what happens [13:19:18] ok I will try that, thank you [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC (Quit: Leaving) [13:21:45] * Sheogorath[m] (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos [13:22:06] <@TrevorH> I also notice that the VM seems to be using the very old 440FX and there's a newer model of hardware available that might be worth checking [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! [13:22:32] <@TrevorH> I had one of those in about 1996 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Mon Feb 3 22:11:23 2020 From: cboylan at sapwetik.org (Clark Boylan) Date: Mon, 03 Feb 2020 14:11:23 -0800 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > BUG: unable to handle kernel paging request at ffff988b19478000” > > > I asked in #centos and they asked me to show a list of devices from a > working VM (if I use 720G RAM it works). This is the list: > > > [root at alberttest1 ~]# lspci > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > [Natoma/Triton II] (rev 01) > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > 00:03.0 Ethernet controller: Red Hat, Inc. 
Virtio network device > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > [root at alberttest1 ~]# lsusb > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > They suspect that the “Virtio memory balloon” driver is causing the > problem, and that we should disable it. I googled around and found this: > > > http://www.linux-kvm.org/page/Projects/auto-ballooning > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0. The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage statistics. > > > Also they complained about my host bridge device; they say that we > should have a newer one: > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > Where can I specify the host bridge? For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35. > > > ok ozzzo one of the devices is called "virtio memory balloon" > > [13:18:12] do you see that? > > [13:18:21] yes > > [13:18:47] i suggest you google that and read about what it > does - i think it would > > [13:19:02] be worth trying to disable that device on your > larger vm to see what happens > > [13:19:18] ok I will try that, thank you > > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC > (Quit: Leaving) > > [13:21:45] * Sheogorath[m] > (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the > very old 440FX and there's a newer model of hardware available that > might be worth checking > > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > > [13:22:32] <@TrevorH> I had one of those in about 1996 > [0] https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5840-L5852 [1] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.mem_stats_period_seconds [2] https://docs.openstack.org/nova/train/configuration/config.html#libvirt.hw_machine_type [3] https://bugs.launchpad.net/nova/+bug/1780138 From smooney at redhat.com Mon Feb 3 22:47:26 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Feb 2020 22:47:26 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: On Mon, 2020-02-03 at 21:36 +0000, Albert Braden wrote: > When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at > ffff988b19478000" > > I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). 
This is
> the list:
>
> [root at alberttest1 ~]# lspci
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II]
> 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]
> 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01)
> 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03)
> 00:02.0 VGA compatible controller: Cirrus Logic GD 5446
> 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device
> 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device
> 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon
> [root at alberttest1 ~]# lsusb
> Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
> Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
>
> They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled
> around and found this:
>
> http://www.linux-kvm.org/page/Projects/auto-ballooning
>
> It looks like memory ballooning is deprecated. How can I get rid of the driver?
http://www.linux-kvm.org/page/Projects/auto-ballooning states that no qemu that exists today implements that feature, but the fact that you see the device in lspci seems to conflict with that. There are several references to the feature in later releases of qemu, and it is documented in libvirt:
https://libvirt.org/formatdomain.html#elementsMemBalloon
There is no way to turn it off specifically at the moment, and I'm not aware of it being deprecated. The guest will not interact with the virtio memory balloon by default; it is there to allow the guest to free memory and return it to the host, i.e. to allow cooperation between the guests and the host to enable memory oversubscription. I believe this normally needs the qemu guest agent to be deployed to work fully.
With a 1.4TB VM, how much memory have you reserved on the host? qemu needs memory to implement the VM emulation, and this tends to increase as the guest uses more resources. My first inclination would be to check whether the VM was killed as a result of an OOM event on the host.
>
> Also they complained about my host bridge device; they say that we should have a newer one:
>
> 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
>
> Where can I specify the host bridge?
You change this by specifying the machine type; you can use the q35 machine type instead. q35 is the replacement for i440fx, but when you enable it, it will change a lot of other parameters. I don't know whether it will disable the virtio memory balloon or not, but if you are using a large amount of memory you should also be using hugepages to reduce the overhead and improve performance. You can either set the machine type in the config
https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.hw_machine_type
[libvirt]
hw_machine_type=x86_64=q35
or in the guest image
https://github.com/openstack/glance/blob/master/etc/metadefs/compute-libvirt-image.json#L60-L64
e.g. hw_machine_type=q35
Note that in the image property you don't include the arch.
>
> ok ozzzo one of the devices is called "virtio memory balloon"
> [13:18:12] do you see that?
> [13:18:21] yes > [13:18:47] i suggest you google that and read about what it does - i think it would > [13:19:02] be worth trying to disable that device on your larger vm to see what happens > [13:19:18] ok I will try that, thank you > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC (Quit: Leaving) > [13:21:45] * Sheogorath[m] (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the very old 440FX and there's a newer model of > hardware available that might be worth checking > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > [13:22:32] <@TrevorH> I had one of those in about 1996 yes it is an old chip set form the 90s but it is the default that openstack has used since it was created. we will likely change that in a cycle or two but really dont be surprised that we are using 440fx by default. its not really emulating a plathform form 1996. it started that way but it has been updated with the same name kept. with that said it does not support pcie or many other fature which is why we want to move too q35. q35 however while much more modern and secure uses more memroy and does not support older operating systems so there are trade offs. if you need to run centos 5 or 6 i would not be surrpised if you have issue with q35. From smooney at redhat.com Mon Feb 3 22:56:19 2020 From: smooney at redhat.com (Sean Mooney) Date: Mon, 03 Feb 2020 22:56:19 +0000 Subject: Virtio memory balloon driver In-Reply-To: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> References: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> Message-ID: On Mon, 2020-02-03 at 14:11 -0800, Clark Boylan wrote: > On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > > BUG: unable to handle kernel paging request at ffff988b19478000” > > > > > > I asked in #centos and they asked me to show a list of devices from a > > working VM (if I use 720G RAM it works). This is the list: > > > > > > [root at alberttest1 ~]# lspci > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > > [Natoma/Triton II] (rev 01) > > > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > > > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > > > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > > > [root at alberttest1 ~]# lsusb > > > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > > > > They suspect that the “Virtio memory balloon” driver is causing the > > problem, and that we should disable it. I googled around and found this: > > > > > > http://www.linux-kvm.org/page/Projects/auto-ballooning > > > > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? 
>
> Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0.
> The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the
> instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage
> statistics.
I forgot about that option. We had talked about disabling the stats by default at one point; downstream I think we do, at least via config on realtime hosts, as we found the stat collection causes latency spikes.
> >
> > Also they complained about my host bridge device; they say that we
> > should have a newer one:
> >
> > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02)
> >
> > Where can I specify the host bridge?
>
> For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35.
Yes, but if you enable q35 you need to also be aware that, unlike the pc i440fx machine type, only 1 additional PCI slot will be allocated, so if you want to allow attaching more than one volume or NIC after the VM is booted you need to adjust https://docs.openstack.org/nova/latest/configuration/config.html#libvirt.num_pcie_ports. The more additional PCIe ports you enable, the more memory is required by qemu regardless of whether you use them, and by default, even without allocating more PCIe ports, qemu uses more memory in q35 mode than when using the pc machine type. You should also be aware that the IDE bus is not supported by default with q35, which causes issues for some older operating systems if you use config drives. A minimal sketch of the relevant nova.conf settings is below.
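Putting the above together, a minimal, untested sketch of the nova.conf fragment being discussed (option names are the [libvirt] options referenced in [1], [2] and the num_pcie_ports link above; the values are only examples):

[libvirt]
# 0 stops nova polling libvirt for memory stats (see [1] above)
mem_stats_period_seconds = 0
# opt in to the newer machine type instead of the default pc/i440fx (see [2] above)
hw_machine_type = x86_64=q35
# q35 leaves only one spare slot by default, so reserve extra hotplug ports if
# volumes/NICs will be attached after boot; each extra port costs qemu some memory
num_pcie_ports = 8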
-----Original Message----- From: Sean Mooney Sent: Monday, February 3, 2020 2:47 PM To: Albert Braden ; OpenStack Discuss ML Subject: Re: Virtio memory balloon driver On Mon, 2020-02-03 at 21:36 +0000, Albert Braden wrote: > When we build a Centos 7 VM with 1.4T RAM it fails with "[ 17.797177] BUG: unable to handle kernel paging request at > ffff988b19478000" > > I asked in #centos and they asked me to show a list of devices from a working VM (if I use 720G RAM it works). This is > the list: > > [root at alberttest1 ~]# lspci > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > [root at alberttest1 ~]# lsusb > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > They suspect that the "Virtio memory balloon" driver is causing the problem, and that we should disable it. I googled > around and found this: > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=uEnvgAhTPKxJpvz6a3bQisI9406ul8Q2SSHDCV1lqvU&e= > > It looks like memory ballooning is deprecated. How can I get rid of the driver? https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=uEnvgAhTPKxJpvz6a3bQisI9406ul8Q2SSHDCV1lqvU&e= states that no qemu that exists today implements that feature but the fact you see it in lspci seams to be in conflict with that. there are several refernce to the feature in later release of qemu and it is documented in libvirt https://urldefense.proofpoint.com/v2/url?u=https-3A__libvirt.org_formatdomain.html-23elementsMemBalloon&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=iMCOz64cWwXgiqbpObDBdGZPUoLuQp4G931VKc_hqxI&s=DSej4fERS8HGIYb7CaIkbVBpssWtSxCbBRxukkAH0rI&e= there is no way to turn it off specificly currently and im not aware of it being deprecated. the guest will not interact witht he vitio memory balloon by default. it is there too allow the guest to free memory and retrun it to the host to allow copperation between the guests and host to enable memory oversubscription. i belive this normally need the qemu guest agent to be deploy to work fully. with a 1.4TB vm how much memory have you reserved on the host. qemu will need memory to implement the vm emulation and this tends to increase as the guess uses more resouces. my first incliantion would be to check it the vm was killed as a result of a OOM event on the host. 
From gagehugo at gmail.com Tue Feb 4 00:57:38 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 3 Feb 2020 18:57:38 -0600 Subject: [security] Security SIG Newsletter - Jan 2020 Message-ID: Hope everyone's 2020 is going good so far, here's the list of updates from the Security SIG. Overall Jan was a pretty quiet month, only have a few update items. #Month Jan 2020 - Security SIG Meeting Info: http://eavesdrop.openstack.org/#Security_SIG_meeting - Weekly on Thursday at 1500 UTC in #openstack-meeting - Agenda: https://etherpad.openstack.org/p/security-agenda - https://security.openstack.org/ - https://wiki.openstack.org/wiki/Security-SIG #Updates - https://review.opendev.org/#/c/678426/ - Update to vmt policy, currently up for formal TC review - https://bugs.launchpad.net/nova/+bug/1492140 - Updates to stable branches - https://bugs.launchpad.net/neutron/+bug/1732067 - Backports to stable branches may require a configuration change #VMT Reports - A full list of publicly marked security issues can be found here: https://bugs.launchpad.net/ossa/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Tue Feb 4 01:00:03 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 3 Feb 2020 19:00:03 -0600 Subject: [security] No Security SIG Meeting - Feb 06th Message-ID: The Security SIG won't be meeting this Thursday, Feb 06th, we will be back next week. For any questions please feel free to ask in #openstack-security. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Tue Feb 4 01:23:45 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 4 Feb 2020 01:23:45 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <3517e516-67ba-4944-8a41-1861556446f5@www.fastmail.com> Message-ID: I set mem_stats_period_seconds = 0 in nova.conf on controllers and hypervisors, and restarted nova services, and then built another VM, but it still has the balloon device: albertb at alberttest4:~ $ lspci 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB [Natoma/Triton II] (rev 01) 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon I'll try the q35 setting now. -----Original Message----- From: Sean Mooney Sent: Monday, February 3, 2020 2:56 PM To: Clark Boylan ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Mon, 2020-02-03 at 14:11 -0800, Clark Boylan wrote: > On Mon, Feb 3, 2020, at 1:36 PM, Albert Braden wrote: > > > > When we build a Centos 7 VM with 1.4T RAM it fails with “[ 17.797177] > > BUG: unable to handle kernel paging request at ffff988b19478000” > > > > > > I asked in #centos and they asked me to show a list of devices from a > > working VM (if I use 720G RAM it works). 
This is the list: > > > > > > [root at alberttest1 ~]# lspci > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > 00:01.0 ISA bridge: Intel Corporation 82371SB PIIX3 ISA [Natoma/Triton II] > > > > 00:01.1 IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II] > > > > 00:01.2 USB controller: Intel Corporation 82371SB PIIX3 USB > > [Natoma/Triton II] (rev 01) > > > > 00:01.3 Bridge: Intel Corporation 82371AB/EB/MB PIIX4 ACPI (rev 03) > > > > 00:02.0 VGA compatible controller: Cirrus Logic GD 5446 > > > > 00:03.0 Ethernet controller: Red Hat, Inc. Virtio network device > > > > 00:04.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:05.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:06.0 SCSI storage controller: Red Hat, Inc. Virtio block device > > > > 00:07.0 Unclassified device [00ff]: Red Hat, Inc. Virtio memory balloon > > > > [root at alberttest1 ~]# lsusb > > > > Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd > > > > Bus 001 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub > > > > > > They suspect that the “Virtio memory balloon” driver is causing the > > problem, and that we should disable it. I googled around and found this: > > > > > > https://urldefense.proofpoint.com/v2/url?u=http-3A__www.linux-2Dkvm.org_page_Projects_auto-2Dballooning&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=GWJcQEqsYfdcWE_hfTBSzWDxJYvLhAtPWlFz1EmzJKY&e= > > > > > > It looks like memory ballooning is deprecated. How can I get rid of the driver? > > Looking at Nova's code [0] the memballoon device is only set if mem_stats_period_seconds has a value greater than 0. > The default [1] is 10 so you get it by default. I would try setting this config option to 0 and recreating the > instance. Note I think this will apply to all VMs and was originally added so that tools could get memory usage > statistics. i forgot about that option. we had talked bout disableing the stats by default at one point. downstream i think we do at least via config on realtime hosts as we found the stat collect causes latency spikes. > > > > > > > Also they complained about my host bridge device; they say that we > > should have a newer one: > > > > > > 00:00.0 Host bridge: Intel Corporation 440FX - 82441FX PMC [Natoma] (rev 02) > > > > > > Where can I specify the host bridge? > > For this I think you need to set hw_machine_type [2]. Looking at this bug [3] I think the value you may want is q35. yes but if you enable q35 you neeed to also be aware that unlike the pc i440 machine type only 1 addtion pci slot will be allocated so if you want to allow attaching more then one volume or nic after teh vm is booted you need to adjust https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_latest_configuration_config.html-23libvirt.num-5Fpcie-5Fports&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=y_xdeWDGbFM4fAr32iEVEER8d6hVZunyvwne1QJfOME&e= . the more addtional pcie port you enable the more memory is required by qemu regardless of if you use them and by default even without allocating more pcie port qemu uses more memory in q35 mode then when using the pc machine type. you should also be aware that ide bus is not supported by default with q35 which causes issues for some older operating systems if you use config drives. 
with all that said we do want to eventully make q35 the default in nova but you just need to be aware that changing that has lots of other side effects which is why we have not done it yet. q35 is required for many new feature and is supported but its just not the default. > > > > > > > ok ozzzo one of the devices is called "virtio memory balloon" > > > > [13:18:12] do you see that? > > > > [13:18:21] yes > > > > [13:18:47] i suggest you google that and read about what it > > does - i think it would > > > > [13:19:02] be worth trying to disable that device on your > > larger vm to see what happens > > > > [13:19:18] ok I will try that, thank you > > > > [13:19:30] * Altiare (~Altiare at unaffiliated/altiare) has quit IRC > > (Quit: Leaving) > > > > [13:21:45] * Sheogorath[m] > > (sheogora1 at gateway/shell/matrix.org/x-uiiwpoddodtgrwwz) joins #centos > > > > [13:22:06] <@TrevorH> I also notice that the VM seems to be using the > > very old 440FX and there's a newer model of hardware available that > > might be worth checking > > > > [13:22:21] <@TrevorH> 440FX chipset is the old old pentium Pro chipset! > > > > [13:22:32] <@TrevorH> I had one of those in about 1996 > > > > [0] https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5840-2DL5852&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=H2dx7K2OyyHfGOjaQ51CGvU0308JWXFBp_80QuxCAPw&e= > [1] https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_train_configuration_config.html-23libvirt.mem-5Fstats-5Fperiod-5Fseconds&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=FWkMnQE0rIldStIjTeTrXlBoCR0Bb06TqNQpjQwwXuM&e= > [2] https://urldefense.proofpoint.com/v2/url?u=https-3A__docs.openstack.org_nova_train_configuration_config.html-23libvirt.hw-5Fmachine-5Ftype&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=p4N-kGlblu2E47dPo8qYHl2hv4BPROlLoM_-5YNbAzc&e= > [3] https://urldefense.proofpoint.com/v2/url?u=https-3A__bugs.launchpad.net_nova_-2Bbug_1780138&d=DwIFaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=CpgL1Yla6XeZXNt8yDzb1d1aB2MMUJ6P4cMQ5cpxSU4&s=wZjiqcg-XvVbIVgTpHRsbPVuZd1K0mM4-BZ6P0JNMj8&e= > From Tushar.Patil at nttdata.com Tue Feb 4 07:53:45 2020 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Tue, 4 Feb 2020 07:53:45 +0000 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? Message-ID: Hi All, In tacker project, we are using heat API to create stack. Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. So internally heat will create two nested stacks and add following resources to it:- child stack1 VDU1 - OS::Nova::Server CP1 - OS::Neutron::Port VDU2 - OS::Nova::Server CP2- OS::Neutron::Port child stack2 VDU1 - OS::Nova::Server CP1 - OS::Neutron::Port VDU2 - OS::Nova::Server CP2- OS::Neutron::Port Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. 
Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. My question is after the stack is created for the first time, will it ever change the nested child stack id? Thank you. Regards, tpatil Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From hberaud at redhat.com Tue Feb 4 08:47:41 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 4 Feb 2020 09:47:41 +0100 Subject: [oslo] drop 2.7 support - track releases In-Reply-To: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> References: <1700c12980a.e8a768ec216738.734931729905313685@ghanshyammann.com> Message-ID: Thanks for you heads-up, I'm aware of the "drop-py27-support" topic and I don't want to duplicate the work, I just want to simplify the releasing steps of oslo projects for oslo maintainers and offer to us a big picture of released projects where support was dropped. Thanks for the tips but we can't track all the projects managed by oslo team with filters: - openstack/microversion-parse - openstack/debtcollector - openstack/pbr - ... - openstack/tooz All the patches on oslo scope related to the drop of the py27 support use the topic "drop-py27-support" but to track our releasing of these project where the support was dropped and to isolate this part I preferred use "oslo_drop_py2_support". In other words only our patches against openstack/releases use this topic. Sorry if I misleaded some of you with that. Le lun. 3 févr. 2020 à 18:21, Ghanshyam Mann a écrit : > ---- On Mon, 03 Feb 2020 10:32:08 -0600 Herve Beraud > wrote ---- > > The goal of this thread is to track oslo releases related to drop of > py2.7 support. > > By creating this thread I want to isolate the oslo status from the > "drop-py27-support" to help us to track our advancement internally in oslo. > > You can still track the oslo related work with the original topic by > adding the extra query for project matching string. > - > https://review.opendev.org/#/q/topic:drop-py27-support+(status:open+OR+status:merged)+projects:openstack/oslo > > Main idea to use the single topic "drop-py27-support" for this goal is to > track it on OpenStack level or avoid duplicating the work etc. > > -gmann > > > > > Le lun. 3 févr. 
2020 à 17:02, Herve Beraud a > écrit : > > > > > > -- > > Hervé BeraudSenior Software Engineer > > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > Hello, > > FYI you can track the dropping of py2.7 support in oslo by using: > https://review.opendev.org/#/q/topic:oslo_drop_py2_support+(status:open+OR+status:merged) > > Topic: oslo_drop_py2_support > > We release a major version each time an oslo projects drop the py2.7 > support. > > -- Hervé BeraudSenior Software Engineer > > Red Hat - Openstack Osloirc: hberaud-----BEGIN PGP SIGNATURE----- > > > > wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ > > Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ > > RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP > > F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G > > 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g > > glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw > > m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ > > hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 > > qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y > > F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 > > B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O > > v6rDpkeNksZ9fFSyoY2o > > =ECSj > > -----END PGP SIGNATURE----- > > > > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From pradeepantil at gmail.com Tue Feb 4 08:37:20 2020 From: pradeepantil at gmail.com (Pradeep Antil) Date: Tue, 4 Feb 2020 14:07:20 +0530 Subject: RDO OpenStack Repo for CentOS 8 Message-ID: Hi Folks, I am trying to deploy single node openstack on CentOS 8 using packstack, but it seems like CentOS 8 packages are not updated in repo. 
Has anyone tried this before on centos 8 , if yes what repository should I use ? [root at xxxxx~]# dnf install -y https://www.rdoproject.org/repos/rdo-release.rpm Last metadata expiration check: 0:02:17 ago on Tue 04 Feb 2020 03:07:49 AM EST. rdo-release.rpm 936 B/s | 6.4 kB 00:07 Dependencies resolved. ============================================================================================================================== Package Architecture Version Repository Size ============================================================================================================================== Installing: rdo-release noarch stein-3 @commandline 6.4 k Transaction Summary ============================================================================================================================== Install 1 Package Total size: 6.4 k Installed size: 3.1 k Downloading Packages: Running transaction check Transaction check succeeded. Running transaction test Transaction test succeeded. Running transaction Preparing : 1/1 Installing : rdo-release-stein-3.noarch 1/1 Verifying : rdo-release-stein-3.noarch 1/1 Installed: rdo-release-stein-3.noarch Complete! [root at xxxx ~]# [root at xxxx ~]# [root at xxxx ~]# dnf install -y openstack-packstack RDO CentOS-7 - QEMU EV 42 kB/s | 35 kB 00:00 OpenStack Stein Repository 1.3 MB/s | 4.1 MB 00:03 Last metadata expiration check: 0:00:01 ago on Tue 04 Feb 2020 03:12:15 AM EST. Error: Problem: cannot install the best candidate for the job - nothing provides python-netifaces needed by openstack-packstack-1:14.0.0-1.el7.noarch - nothing provides PyYAML needed by openstack-packstack-1:14.0.0-1.el7.noarch - nothing provides python-docutils needed by openstack-packstack-1:14.0.0-1.el7.noarch (try to add '--skip-broken' to skip uninstallable packages or '--nobest' to use not only best candidate packages) [root at xxxxx ~]# -- Best Regards Pradeep Kumar -------------- next part -------------- An HTML attachment was scrubbed... URL: From amoralej at redhat.com Tue Feb 4 10:27:25 2020 From: amoralej at redhat.com (Alfredo Moralejo Alonso) Date: Tue, 4 Feb 2020 11:27:25 +0100 Subject: [rdo-dev] RDO OpenStack Repo for CentOS 8 In-Reply-To: References: Message-ID: Hi, RDO on CentOS8 is still a work in progress, there is no rdo-release rpm for CentOS8 yet as we are not providing signed packages in CentOS mirrors yet. However, there are available unsigned RDO Trunk repos if you want to start testing already, you can configure them by adding following files to /etc/yum.repos.d. https://trunk.rdoproject.org/centos8-train/delorean-deps.repo https://trunk.rdoproject.org/centos8-train/puppet-passed-ci/delorean.repo If you prefer to use packages from master branches for testing instead of train: https://trunk.rdoproject.org/centos8-master/delorean-deps.repo https://trunk.rdoproject.org/centos8-master/puppet-passed-ci/delorean.repo That should work. Best regards, Alfredo On Tue, Feb 4, 2020 at 9:38 AM Pradeep Antil wrote: > Hi Folks, > > I am trying to deploy single node openstack on CentOS 8 using packstack, > but it seems like CentOS 8 packages are not updated in repo. > > Has anyone tried this before on centos 8 , if yes what repository should I > use ? > > [root at xxxxx~]# dnf install -y > https://www.rdoproject.org/repos/rdo-release.rpm > Last metadata expiration check: 0:02:17 ago on Tue 04 Feb 2020 03:07:49 AM > EST. > rdo-release.rpm > 936 B/s | 6.4 kB 00:07 > Dependencies resolved. 
> > ============================================================================================================================== > Package Architecture Version > Repository Size > > ============================================================================================================================== > Installing: > rdo-release noarch stein-3 > @commandline 6.4 k > > Transaction Summary > > ============================================================================================================================== > Install 1 Package > > Total size: 6.4 k > Installed size: 3.1 k > Downloading Packages: > Running transaction check > Transaction check succeeded. > Running transaction test > Transaction test succeeded. > Running transaction > Preparing : > 1/1 > Installing : rdo-release-stein-3.noarch > 1/1 > Verifying : rdo-release-stein-3.noarch > 1/1 > > Installed: > rdo-release-stein-3.noarch > > Complete! > [root at xxxx ~]# > [root at xxxx ~]# > [root at xxxx ~]# dnf install -y openstack-packstack > RDO CentOS-7 - QEMU EV > 42 kB/s | 35 kB 00:00 > OpenStack Stein Repository > 1.3 MB/s | 4.1 MB 00:03 > Last metadata expiration check: 0:00:01 ago on Tue 04 Feb 2020 03:12:15 AM > EST. > Error: > Problem: cannot install the best candidate for the job > - nothing provides python-netifaces needed by > openstack-packstack-1:14.0.0-1.el7.noarch > - nothing provides PyYAML needed by > openstack-packstack-1:14.0.0-1.el7.noarch > - nothing provides python-docutils needed by > openstack-packstack-1:14.0.0-1.el7.noarch > (try to add '--skip-broken' to skip uninstallable packages or '--nobest' > to use not only best candidate packages) > [root at xxxxx ~]# > > -- > Best Regards > Pradeep Kumar > _______________________________________________ > dev mailing list > dev at lists.rdoproject.org > http://lists.rdoproject.org/mailman/listinfo/dev > > To unsubscribe: dev-unsubscribe at lists.rdoproject.org > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 4 11:54:39 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Feb 2020 11:54:39 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: Message-ID: <20200204115438.vmdtuidaklmjbhkh@yuggoth.org> On 2020-02-03 13:50:03 +0000 (+0000), Srinivas Dasthagiri wrote: > We are working on Kaminario CI fresh configuration(since it is too > old, it has broken). We have communicated with OpenStack-infra > community for the suggestions and documentation. One of the > community member suggested us to go with manual CI > configuration(Not third party CI) instead of CI configuration with > puppet architecture(Since it is moving to Ansible). But we did not > get documents for all other CI components except ZuulV3 from > community. [...] To clarify that recommendation, it was to build a third-party CI system by reading the documentation for the components you're going to use and understanding how they work. The old Puppet-based documentation you're referring to was written by some operators of third-party CI systems but not kept updated, so the versions of software it would need if you follow it would be years-old and in some cases (especially Jenkins and associated plug-ins) dangerously unsafe to connect to the internet due to widely-known security vulnerabilities in the versions you'd have to use. 
Modern versions of the tools you would want to use are well-documented, there's just no current document explaining exactly how to use them together for the exact situation you're in without having to understand how the software works. Many folks in your situation seem to want someone else to provide a simple walk-through for them, so until someone who is in your situation (maybe you?) takes the time to gain familiarity with recent versions of CI software and publish some documentation on how you got it communicating correctly with OpenDev's Gerrit deployment, such a walk-through is not going to exist. But even that, if not kept current, will quickly fall stale: all of those components, including Gerrit itself, need updating over time to address bugs and security vulnerabilities, and those updates occasionally come with backward-incompatible behavior changes. People maintaining these third-party CI systems are going to need to *stay* familiar with the software they're running, keep on top of necessary behavior and configuration changes over time, and update any new walk-through document accordingly so that it doesn't wind up in the same state as the one we had. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Tue Feb 4 12:00:59 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 4 Feb 2020 12:00:59 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: Message-ID: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > seen any OOM errors. Where should I look for those? [...] The `dmesg` utility on the hypervisor host should show you the kernel's log ring buffer contents (the -T flag is useful to translate its timestamps into something more readable than seconds since boot too). If the ring buffer has overwritten the relevant timeframe then look for signs of kernel OOM killer invocation in your syslog or persistent journald storage. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From missile0407 at gmail.com Tue Feb 4 12:18:29 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 4 Feb 2020 20:18:29 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. Message-ID: Hi everyone, We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) site without internet. We did the shutdown few days ago since CNY holidays. Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep restarting, and we fixed by using mariadb_recovery command. After that we check the status of each services, and found that all services shown at Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, or other error found when check the downed service log. We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not updating, the last log still stayed at the date we shutdown. Logged in to RabbitMQ container and type "rabbitmqctl status" shows connection refused, and tried access its web manager from :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 listening. 
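(For anyone hitting the same symptoms, a minimal diagnostic sketch on a kolla-ansible host, assuming the container and its data volume use the default name "rabbitmq" and the host has the ss utility:

    # is the container up at all, and what do its own logs say?
    docker ps -a --filter name=rabbitmq
    docker logs --tail 100 rabbitmq

    # node and cluster view from inside the container
    docker exec rabbitmq rabbitmqctl status
    docker exec rabbitmq rabbitmqctl cluster_status

    # is anything listening on the AMQP / management ports?
    ss -tlnp | grep -E ':(5672|15672|25672)'

    # the mnesia database lives in the named docker volume
    docker volume inspect rabbitmq

If rabbitmqctl cannot reach the node, the container log and the files in that volume are usually more telling than the service logs under /var/log/kolla/rabbitmq.)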
I searched this issue on the internet but only few information about this. One of solution is delete some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. But both are not sure. Does anyone know how to solve it? Many thanks, Eddie. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Tue Feb 4 12:42:42 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Tue, 04 Feb 2020 13:42:42 +0100 Subject: [kuryr] Deprecation of Ingress support and namespace isolation In-Reply-To: <97e0d4450c2777bcc4f8f2aff39dfdb0150fbe09.camel@redhat.com> References: <97e0d4450c2777bcc4f8f2aff39dfdb0150fbe09.camel@redhat.com> Message-ID: <5e8503ed1b353b5fde7fabb78ec08d2562cde41b.camel@redhat.com> Hi, We just triggered merge of the patch removing OpenShift Routes (Ingress) support. It was not tested for a long while and we really doubt it was working. We'll be following with namespace isolation feature removal, which might be a bit more controversial, but has a really easy workaround of just using network policies to do the same. Thanks, Michał On Mon, 2020-01-20 at 17:51 +0100, Michał Dulko wrote: > Hi, > > I've decided to put up a patch [1] deprecating the aforementioned > features. It's motivated by the fact that there are better ways to do > both: > > * Ingress can be done by another controller or through cloud provider. > * Namespace isolation can be achieved through network policies. > > Both alternative ways are way better tested and there's nobody > maintaining the deprecated features. I'm open to keep them if someone > using them steps up. > > With the fact that Kuryr seems to be tied more with Kubernetes releases > than with OpenStack ones and given there will be no objections, we > might start removing the code in the Ussuri timeframe. > > [1] https://review.opendev.org/#/c/703420/ > > Thanks, > Michał > From zhangbailin at inspur.com Tue Feb 4 12:46:32 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 4 Feb 2020 12:46:32 +0000 Subject: [nova] noVNC console with password authentication Message-ID: <792398a13fa74b83b0f8c3dc6b5ec0af@inspur.com> Hi all: About https://review.opendev.org/#/c/623120/ SPEC, there are two different perspectives, one from Alex and one from SPEC author Jingyu. 1. @Jingyu’s point is add“vnc_password”to the instance’s metadata,“vnc_password”is only provided for libvirtd support. As described in SPEC, the“vnc_password”parameter is populated when the instance generates XML, and when show server details that pop the “vnc_password”from nova api to ensure its security. That we can refer to the implementation of "adminPass" to understand this method. Its advantage is that it will not break the current nova api, you only need to store“vnc_password”in the instance's metadata. The disadvantage is that“vnc_password”is in the metadata but the user cannot get it. In addition, after we are evacuate/rebuild a server that we should reset it’s“vnc_password”, or take out "vnc_password" from the original instance and write into the new instance during evacuate/rebuild. 2. @Alex’s suggestion is change the Create Console API, add“vnc_password”as a new request optional parameter to the request body, that when we request create the remote console, if the“vnc_password”is not None we will reset the server’s vnc passwd, if“vnc_password”is None, that it will use the novnc password set last time you opened the console. 
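To make option 2 concrete, here is a purely illustrative sketch of such a request; the "vnc_password" field and the exact microversion are part of the proposal only, not an existing Nova API:

    POST /servers/{server_id}/remote-consoles
    {
        "remote_console": {
            "protocol": "vnc",
            "type": "novnc",
            "vnc_password": "S3cretPass"
        }
    }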
The advantage is that it is more simple and convenient than storing "vnc_password" in metadata. When evacuate/rebuild, there is no need to consider the problem of "vnc_password" storage, but when we first open the console, we need to set it in the request body the value of "vnc_password". The disadvantage is that you need to add a new microversion to support this feature, which will break the current nova API (Create Console API). In addition, from the working principle of the RFB protocol, nova does not care about the "vnc_password" parameter passed in to obtain the Console URL. The verification of the vnc password is the job of the vnc server maintained by libvirtd. We look forward to completing it before the SPEC freeze, and hope to get more feedback, especially from the nova core team. SPEC link: https://review.opendev.org/#/c/623120/ brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 4 13:19:36 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 08:19:36 -0500 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: ⁹ On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > Hi everyone, > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 > Storage (Ceph OSD) > site without internet. We did the shutdown few days ago since CNY > holidays. > > Today we re-launch whole cluster back. First we met the issue that MariaDB > containers keep > restarting, and we fixed by using mariadb_recovery command. > After that we check the status of each services, and found that all > services shown at > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP > connection, > or other error found when check the downed service log. > > We tried reboot each servers but the situation still a same. Then we found > the RabbitMQ log not > updating, the last log still stayed at the date we shutdown. Logged in to > RabbitMQ container and > type "rabbitmqctl status" shows connection refused, and tried access its > web manager from > :15672 on browser just gave us "503 Service unavailable" message. > Also no port 5672 > listening. > Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). > I searched this issue on the internet but only few information about this. > One of solution is delete > some files in mnesia folder, another is remove rabbitmq container and its > volume then re-deploy. > But both are not sure. Does anyone know how to solve it? > > > Many thanks, > Eddie. > -Erik > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emccormick at cirrusseven.com Tue Feb 4 13:33:51 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 08:33:51 -0500 Subject: Migration to Openstack from Proxmox In-Reply-To: <7150dfca5e513345a33573f7d00c3a83@42iso.com> References: <7150dfca5e513345a33573f7d00c3a83@42iso.com> Message-ID: On Mon, Feb 3, 2020, 3:28 PM wrote: > We currently are using proxmox for a envrionment management, and while we > were looking at DFS options we came across openstack, and we've decided to > start looking at it as a replacement for proxmox. Has anyone made this > migration before? > Keep on mind that you're still going to need a storage system like Ceph separate from Openstack whereas you may be used concept being native in Proxmox. Besides that, migration is pretty straightforward. 
I assume you're already running on KVM, so you should be able to snapshot and import your current instances, volumes, etc. Currently we're running a few Windows Server 12/16/19 virtual machines, but > mostly centOS7 virtual machines, however what we are developing are > micro-services that ideally would be deployed using k8s. > Openstack and k8s are great together. Check out the Openstack Magnum project, cloudprovider-openstack, and if you're trying to so multiattach persistent volumes, Manila. Does anyone have any experience migrating between the two, or any other > recommendation when considering openstack? > Follow one of the install guides on docs.openstack.org to do a manual install so you get familiar with all the bits and bobs under the hood. After that, pick a deployment project to create your production cluster. Kolla-ansible, Openstack-ansible, Triple-O, and Juju are some popular ones. Cheers, Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Tue Feb 4 13:45:18 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Tue, 4 Feb 2020 21:45:18 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Erik, I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. -Eddie Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > >> Hi everyone, >> >> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 >> Storage (Ceph OSD) >> site without internet. We did the shutdown few days ago since CNY >> holidays. >> >> Today we re-launch whole cluster back. First we met the issue that >> MariaDB containers keep >> restarting, and we fixed by using mariadb_recovery command. >> After that we check the status of each services, and found that all >> services shown at >> Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP >> connection, >> or other error found when check the downed service log. >> >> We tried reboot each servers but the situation still a same. Then we >> found the RabbitMQ log not >> updating, the last log still stayed at the date we shutdown. Logged in to >> RabbitMQ container and >> type "rabbitmqctl status" shows connection refused, and tried access its >> web manager from >> :15672 on browser just gave us "503 Service unavailable" message. >> Also no port 5672 >> listening. >> > > > Any chance you have a NIC that didn't come up? What is in the log of the > container itself? (ie. docker log rabbitmq). > > >> I searched this issue on the internet but only few information about >> this. One of solution is delete >> some files in mnesia folder, another is remove rabbitmq container and its >> volume then re-deploy. >> But both are not sure. Does anyone know how to solve it? >> >> >> Many thanks, >> Eddie. >> > > -Erik > >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Tue Feb 4 14:07:30 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 Feb 2020 08:07:30 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-15 2nd Update (# 2 weeks left to complete) In-Reply-To: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> References: <17003544244.b4776cd0166376.7089453842624552657@ghanshyammann.com> Message-ID: <17010870c29.10825a9a8258458.5988453878282738658@ghanshyammann.com> ---- On Sat, 01 Feb 2020 18:36:58 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > Below is the progress on "Drop Python 2.7 Support" at end of R-15 week. > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > Highlights: > ======== > * 2 weeks left to finish the work. > > * QA tooling: > ** Tempest is dropping py3.5[1]. Tempest plugins can drop py3.5 now if they still support it. > ** Updating tox with basepython python3 [2]. > ** Pining stable/rocky testing with 23.0.0[3]. > ** Updating neutron-tempest-plugins rocky jobs to run with py3 on master and py2 on stable/rocky gate. > ** Ironic-tempest-plugin jobs are failing on uploading image on glance. Debugging in progress. > > * zipp failure fix on py3.5 job is merged. > > * 5 services listed below still not merged the patches, I request PTLs to review it on priority. > > Project wise status and need reviews: > ============================ > Phase-1 status: > The OpenStack services have not merged the py2 drop patches: > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > * Adjutant > * ec2-api > * Karbor > * Masakari > * Qinling > * Tricircle > > Phase-2 status: > This is ongoing work and I think most of the repo have patches up to review. > Try to review them on priority. If any query and I missed to respond on review, feel free to ping me > in irc. > > * Most of the tempest plugins and python client patches are good to merge. > > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open Oslo lib dropped the py2 and now would not be able to work on py2 jobs on master gate. You might see the failure if your project has not merged the patch. openstack-tox-py27 job will fail for sure. This is good time to merge your patch to drop the py2. I will not suggest capping the requirement to unblock the gate instead drop the py2 from your repo. -gmann > > > How you can help: > ============== > - Review the patches. Push the patches if I missed any repo. > > [1] https://review.opendev.org/#/c/704840/ > [2] https://review.opendev.org/#/c/704688/ > [3] https://review.opendev.org/#/c/705098/ > > -gmann > > From emccormick at cirrusseven.com Tue Feb 4 14:27:48 2020 From: emccormick at cirrusseven.com (Erik McCormick) Date: Tue, 4 Feb 2020 09:27:48 -0500 Subject: Migration to Openstack from Proxmox In-Reply-To: References: <7150dfca5e513345a33573f7d00c3a83@42iso.com> Message-ID: +list again :) On Tue, Feb 4, 2020, 8:46 AM wrote: > Thanks for the reply. > > So if I understand you, swift is not an acceptable storage solution to > couple with OpenStack? I also think that I missed something in your first > paragraph "whereas you may be used concept being native in Proxmox." Are > you talking about how Proxmox has the local storage on the server AND the > NFS storage? > Swift is an Openstack project so sure it's good to use with Openstack or even on it's own. It is Object Storage (like AWS S3). If you want volumes you'll want some other system like Ceph, some NFS server, etc. 
Proxmox is capable of deploying Ceph and that's what I was referring to. > All of our VM's are the standard VM. We do have a few "templates" but > those are not really anything special. I do have quite a few images > pre-built, so that is my biggest concern with a migration. > You can easily import those images into Glance (Openstack image service);and then launch instances using them. > I know, just from looking at the docs/videos out there that OpenStack is > definitely more akin to AWS than it is ProxMox, but would you say it is > worth it from a manage a multi-location hardware environment to switch to > OpenStack from proxmox? > This is mainly a matter of your use case. Openstack is complex; more complex than Proxmox by quite a bit. It is also extremely powerful and has many projects to provide lots of cloudy services. Many of those services are analogous to ones found in AWS and other public clouds. It is well suited to orchestrate large numbers of instances, volumes, networks, etc. across many hypervisors, Proxmox, as far as I recall, is more of a straight virtualization platform. I also recall it being fairly simple and reliable. If it meets your needs, don't complicate your life. You may want to sign up for a trial at an Openstack-based public cloud provider to get familiar with the functionality before committing to building your own. You can find a list here: https://www.openstack.org/passport/ Cheers, Erik Thanks! > > > Jason > > -------- Original Message -------- > Subject: Re: Migration to Openstack from Proxmox > Date: 2020-02-04 07:33 > From: Erik McCormick > To: jmacer at 42iso.com > > > > > On Mon, Feb 3, 2020, 3:28 PM wrote: > > We currently are using proxmox for a envrionment management, and while we > were looking at DFS options we came across openstack, and we've decided to > start looking at it as a replacement for proxmox. Has anyone made this > migration before? > > > Keep on mind that you're still going to need a storage system like Ceph > separate from Openstack whereas you may be used concept being native in > Proxmox. > > Besides that, migration is pretty straightforward. I assume you're already > running on KVM, so you should be able to snapshot and import your current > instances, volumes, etc. > > > > Currently we're running a few Windows Server 12/16/19 virtual machines, > but mostly centOS7 virtual machines, however what we are developing are > micro-services that ideally would be deployed using k8s. > > Openstack and k8s are great together. Check out the Openstack Magnum > project, cloudprovider-openstack, and if you're trying to so multiattach > persistent volumes, Manila. > > > Does anyone have any experience migrating between the two, or any other > recommendation when considering openstack? > > Follow one of the install guides on docs.openstack.org to do a manual > install so you get familiar with all the bits and bobs under the hood. > After that, pick a deployment project to create your production cluster. > Kolla-ansible, Openstack-ansible, Triple-O, and Juju are some popular ones. > > Cheers, > Erik > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Feb 4 16:09:25 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 4 Feb 2020 09:09:25 -0700 Subject: [tripleo] rework of triple squads and the tripleo mtg. 
Message-ID: Greetings, As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. Currently we have the following squads.. 1. upgrades 2. edge 3. integration 4. validations 5. networking 6. transformation 7. ci A reasonable update could include the following.. 1. validations 2. transformation 3. mistral-to-ansible 4. CI 5. Ceph / Integration?? maybe just Ceph? 6. others?? The squads should reflect major current efforts by the TripleO team IMHO. For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. Thanks all!!! [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Tue Feb 4 16:39:07 2020 From: johfulto at redhat.com (John Fulton) Date: Tue, 4 Feb 2020 11:39:07 -0500 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin wrote: > > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? I'm fine with "Ceph". The original intent of going from "Ceph Integration" to the more generic "Integration" was that it could include anyone using external-deploy-steps to deploy non-openstack projects with TripleO (k8s, skydive, etc). Though those things happened, we didn't really get anyone else to join the squad or update our etherpad so I'm fine with renaming it to Ceph. We're still active but our etherpad was getting old. I updated it just now. Gulio? Francesco? Alan? John > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > > For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. > > Thanks all!!! > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > From marios at redhat.com Tue Feb 4 16:48:19 2020 From: marios at redhat.com (Marios Andreou) Date: Tue, 4 Feb 2020 18:48:19 +0200 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 6:12 PM Wesley Hayutin wrote: > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. 
integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > so I agree with the notion that current efforts are discussed/reported on/summarized etc during the tripleo weekly but I'd be careful about saying 'these are the only squads'. Something like upgrades for example is still very much ongoing (though as far as I understand this is now splintered into 3 subgroups, updates, upgrades and migrations) even if they aren't reporting during the meeting as a habit. Instead of just accepting this absence and saying there is no upgrades squad, instead lets try and get them to check in to the meeting more often. I think upgrades is a special case, perhaps the above also applies to the networking squad too. Otherwise agree with your new list, but as above I'd be careful not to exclude folks that *are* working on $tripleo_stuff but for _reasons_ (likely workload/pressure) aren't coming to the tripleo weekly. just my 2c thanks > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > Thanks all!!! > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fpantano at redhat.com Tue Feb 4 17:14:35 2020 From: fpantano at redhat.com (Francesco Pantano) Date: Tue, 4 Feb 2020 18:14:35 +0100 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > Gulio? Francesco? Alan? > Agree here and I also updated the etherpad as well [1] w/ our current status and the open topics we still have on ceph side. Not sure if we want to use "Integration" since the topics couldn't be only ceph related but can involve other storage components. Giulio, Alan, wdyt? > > John > > > 6. others?? 
> > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! > > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From kendall at openstack.org Tue Feb 4 17:31:20 2020 From: kendall at openstack.org (Kendall Waters) Date: Tue, 4 Feb 2020 11:31:20 -0600 Subject: Sponsorship Prospectus is Now Live - OpenDev+PTG Vancouver 2020 Message-ID: Hi everyone, The sponsorship prospectus for the upcoming OpenDev + PTG event happening in Vancouver, BC on June 8-11 is now live! We expect this to be an important event for sponsors, and will have a place for sponsors to have a physical presence (embedded in the event rather in a separate sponsors hall), as well as branding throughout the event and the option for a keynote in the morning for headline sponsors. OpenDev + PTG is a collaborative event organized by the OpenStack Foundation (OSF) gathering developers, system architects, and operators to address common open source infrastructure challenges. OpenDev will take place June 8-10 and the PTG will take place June 8-11. Each day will be broken into three parts: Short kickoff with all attendees to set the goals for the day or discuss the outcomes of the previous day. Think of this like a mini-keynote, challenging your thoughts around the topic areas before you head into real collaborative sessions. OpenDev: Morning discussions covering projects like Airship, Ansible, Ceph, Kata Containers, Kubernetes, OpenStack, StarlingX, Zuul and more centered around one of four different topics: Hardware Automation Large-scale Usage of Open Source Infrastructure Software Containers in Production Key challenges for open source in 2020 PTG: Afternoon working sessions for project teams and SIGs to continue the morning’s discussions. The sponsorship contract will be available to sign starting tomorrow, February 5 at 9:30am PST at https://www.openstack.org/events/opendev-ptg-2020/sponsors . Please let me know if you have any questions. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kuirong.Chen at infortrend.com Tue Feb 4 09:11:04 2020 From: Kuirong.Chen at infortrend.com (=?utf-8?B?S3Vpcm9uZy5DaGVuKOmZs+WljuiejSk=?=) Date: Tue, 4 Feb 2020 09:11:04 +0000 Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... In-Reply-To: References: Message-ID: Hi jay, Infortrend third part CI is broken for some reason, I’ll check our environment and resolve it. KuiRong Software Design Dept.II Ext. 
7125 From: Jay Bryant Sent: Thursday, January 23, 2020 3:51 AM To: openstack-discuss at lists.openstack.org; inspur.ci at inspur.com; wangyong2017 at inspur.com; Chengwei.Chou(周政緯) ; Bill.Sung(宋柏毅) ; Kuirong.Chen(陳奎融) ; ido.benda at kaminario.com; srinivasd.ctr at kaminario.com; nec-cinder-ci at istorage.jp.nec.com; silvan at quobyte.com; robert at quobyte.com; felix at quobyte.com; bjoern at quobyte.com; OpenStack Development ; Shlomi Avihou | Zadara ; msdu-openstack at groups.ext.hpe.com Subject: [cinder][ci] Cinder drivers being Unsupported and General CI Status ... All, We once again are at the point in the release where we are talking about 3rd Party CI and what is going on for Cinder. At the moment I have analyzed drivers that have not successfully reported results on a Cinder patch in 30 or more days and have put together the following list of drivers to be unsupported in the Ussuri release: * Inspur Drivers * Infortrend * Kaminario * NEC * Quobyte * Zadara * HPE Drivers If your name is in the list above you are receiving this e-mail directly, not just through the mailing list. If you are working on resolving CI issues please let me know so we can discuss how to proceed. In addition to the fact that we will be pushing up unsupported patches for the drivers above, we have already unsupported and removed a number of drivers during this release. They are as follows: * Unsupported: * MacroSAN Driver * Removed: * ProphetStor Driver * Nimble Storage Driver * Veritas Access Driver * Veritas CNFS Driver * Virtuozzo Storage Driver * Huawei FusionStorage Driver * Sheepdog Storage Driver Obviously we are reaching the point that the number of drivers leaving the community is concerning and it has sparked discussions around the fact that maybe our 3rd Party CI approach isn't working as intended. So what do we do? Just mark drivers unsupported and no longer remove drivers? Do we restore drivers that have recently been removed? We are planning to have further discussion around these questions at our next Cinder meeting in #openstack-meeting-4 on Wednesday, 1/29/20 at 14:00 UTC. If you have thoughts or strong opinions around this topic please join us. Thank you! Jay Bryant jsbryant at electronicjungle.net IRC: jungleboyj -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Tue Feb 4 19:16:46 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 4 Feb 2020 11:16:46 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 Message-ID: Hello All! At last a dedicated update solely to the Contrib & PTL Docs community goal! Get excited :) At this point the goal has been accepted[1], and the template[2] has been created and merged! So, the next step is for all projects to use the cookiecutter template[2] and fill in the extra details after you generate the .rst that should auto populate some of the information. As you are doing that, please assign yourself the associated tasks in the story I created[3]. If you have any questions or concerns, please let me know! -Kendall Nelson (diablo_rojo) [1] Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [2] Docs Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From viroel at gmail.com Tue Feb 4 19:33:03 2020 From: viroel at gmail.com (Douglas) Date: Tue, 4 Feb 2020 16:33:03 -0300 Subject: [manila] Manila Drivers and CI Status Message-ID: Hi all, In the past months we have been analyzing drivers that are not reporting any status on patches submitted to the master branch of the openstack/manila repository [1]. We would like to request to driver maintainers to pay attention on the following warnings: 1. CI's that are not running and qualifying changes on master branch will be marked as `Not Qualified` on Manila's wiki page [2]. This annotation should help clarify to deployers/operators what drivers have been tested. A link to this will be created within Manila's administrator and user documentation [3]. 2. Python runtimes for Train and Ussuri cycles have been published by the OpenStack TC and we expect vendor CIs to adhere to these [4][5]. That said, no CIs should be running master (Ussuri) with python < 3.6. 3. If your CI system, and recheck trigger are missing from the Third Party CI wiki page [6], please add it. As has been the norm, no vendor driver changes will be allowed to merge without Third Party CI assuring us that the change has been tested for regressions. If you have any questions/concerns regarding CI and testing your drivers, please reach out to us here (openstack-discuss at lists.openstack.org) or on #openstack-manila [1] https://docs.google.com/spreadsheets/d/1dBSCqtQKoyFMX6oWTahhP9Z133NRFW3ynezq1CItx8M/edit#gid=0 [2] https://wiki.openstack.org/wiki/Manila [3] https://docs.openstack.org/manila/latest/index.html [4] https://governance.openstack.org/tc/reference/runtimes/train.html [5] https://governance.openstack.org/tc/reference/runtimes/ussuri.html [6] https://wiki.openstack.org/wiki/ThirdPartySystems Thanks! Douglas Viroel (dviroel) -------------- next part -------------- An HTML attachment was scrubbed... URL: From doriamgray89 at gmail.com Tue Feb 4 19:57:34 2020 From: doriamgray89 at gmail.com (=?UTF-8?Q?Alain_Viv=C3=B3?=) Date: Tue, 4 Feb 2020 14:57:34 -0500 Subject: Problem with juju osd deploy Message-ID: I tried to deploy ceph-osd with juju for the openstack installation in this guide https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-juju.html but, when i depoy ceph-osd and ceph-mon, osd not acquire sdv disk, my 4 nodes have a sdb with 2 TB, but vm not add this storage. And another question, osd should not be installed on the host? -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Tue Feb 4 20:20:28 2020 From: abishop at redhat.com (Alan Bishop) Date: Tue, 4 Feb 2020 12:20:28 -0800 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano wrote: > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > >> On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin >> wrote: >> > >> > Greetings, >> > >> > As mentioned at the previous tripleo meeting [1], we're going to >> revisit the current tripleo squads and the expectations for those squads at >> the tripleo meeting. >> > >> > Currently we have the following squads.. >> > 1. upgrades >> > 2. edge >> > 3. integration >> > 4. validations >> > 5. networking >> > 6. transformation >> > 7. ci >> > >> > A reasonable update could include the following.. >> > >> > 1. validations >> > 2. transformation >> > 3. mistral-to-ansible >> > 4. CI >> > 5. Ceph / Integration?? maybe just Ceph? 
>> >> I'm fine with "Ceph". The original intent of going from "Ceph >> Integration" to the more generic "Integration" was that it could >> include anyone using external-deploy-steps to deploy non-openstack >> projects with TripleO (k8s, skydive, etc). Though those things >> happened, we didn't really get anyone else to join the squad or update >> our etherpad so I'm fine with renaming it to Ceph. We're still active >> but our etherpad was getting old. I updated it just now. > > >> Gulio? Francesco? Alan? >> > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > "Ceph integration" makes the most sense to me, but I'm fine with just naming it "Ceph" as we all know what that means. Alan > >> >> John >> >> > 6. others?? >> > >> > The squads should reflect major current efforts by the TripleO team >> IMHO. >> > >> > For the meetings, I would propose we use this time and space to give >> context to current reviews in progress and solicit feedback. It's also a >> good time and space to discuss any upstream blockers for those reviews. >> > >> > Let's give this one week for comments etc.. Next week we'll update the >> etherpad list and squads. The etherpad list will be a decent way to >> communicate which reviews need attention. >> > >> > Thanks all!!! >> > >> > [1] >> http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html >> > >> > >> >> [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > -- > Francesco Pantano > GPG KEY: F41BD75C > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter.matulis at canonical.com Tue Feb 4 21:06:18 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Tue, 4 Feb 2020 16:06:18 -0500 Subject: Problem with juju osd deploy In-Reply-To: References: Message-ID: On Tue, Feb 4, 2020 at 3:00 PM Alain Vivó wrote: > > I tried to deploy ceph-osd with juju for the openstack installation in this guide > https://docs.openstack.org/project-deploy-guide/charm-deployment-guide/rocky/install-juju.html > Hi Alain, This guide has recently been entirely refreshed. The new changes are visible by going to: https://docs.openstack.org/project-deploy-guide/charm-deployment-guide > but, when i depoy ceph-osd and ceph-mon, osd not acquire sdv disk, my 4 nodes have a sdb with 2 TB, but vm not add this storage. > You will need to set option 'osd-devices' (on the CLI or by a YAML file) to the devices found on your OSD nodes. The default is just /dev/vdb. > And another question, osd should not be installed on the host? If you have device /dev/sdb on machine X and /dev/sdb is listed by 'osd-devices' then deploying the ceph-osd charm on machine X should do everything for you (/dev/sdb will be initialised for use as an OSD). 
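As a concrete sketch, assuming the application is named ceph-osd and every OSD machine exposes its data disk as /dev/sdb:

    # adjust the option on an already-deployed application
    juju config ceph-osd osd-devices='/dev/sdb'

    # or provide it at deploy time through a YAML config file
    cat > ceph-osd.yaml <<EOF
    ceph-osd:
      osd-devices: /dev/sdb
    EOF
    juju deploy -n 4 --config ceph-osd.yaml ceph-osd

Note that the charm will normally only initialise devices it sees as unused, so disks carrying an old partition table or filesystem may need to be wiped before they are picked up.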
Peter Matulis From feilong at catalyst.net.nz Tue Feb 4 21:23:38 2020 From: feilong at catalyst.net.nz (Feilong Wang) Date: Wed, 5 Feb 2020 10:23:38 +1300 Subject: [magnum] Help / Pointers on mirroring https://opendev.org/starlingx in github.com for CNCF certification of starlingx In-Reply-To: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> References: <0C3D22C0-A751-4E56-B47F-B879225854D7@windriver.com> Message-ID: <854bcaa6-784f-30e9-c208-796f7d85f7bb@catalyst.net.nz> Hi Greg, Currently the conformance test result upload is manually done by me. We didn't setup an automated pipeline for this because the openstack infra doesn't support nested visualization, so we can't run Magnum and generate the conformance test fully automated. But if StartingX can do that, it should be doable. On 31/01/20 4:44 AM, Waines, Greg wrote: > > Hello, > >   > > I am working in the OpenStack StarlingX team. > > We are working on getting StarlingX certified through the CNCF > conformance > program, https://www.cncf.io/certification/software-conformance/ . > > ( in the same way that you guys, OpenStack Magnum project,  got > certified with CNCF ) > > As you know, in order for the logo to be shown as based on > open-source, CNCF requires that the code be mirrored on github.com . > > e.g. https://github.com/openstack/magnum > >   > > The openstack foundation guys did provide some info on how to do this: > > /The further steps for the project owner to take:/ > > /* create a dedicated account for zuul/ > > /* create the individual empty repos/ > > /* add a job to each repo to do the mirroring, like:/ > > /  * https://opendev.org/airship/deckhand/src/commit/51dcea4fa12b0bcce65c381c286e61378a0826e2/.zuul.yaml#L406-L463/ > > / / > > /Also, you can find documentation for the parent job > here: https://zuul-ci.org/docs/zuul-jobs/general-jobs.html#job-upload-git-mirror/ > >   > > ... maybe it’s cause I don’t know anything about zuul jobs, but these > instructions are not super clear to me. > >   > > */Is the person who did this for magnum available to provide some more > detailed instructions or help on doing this ?/* > >   > > Let me know ... any help is much appreciated, > > Greg. > >   > >   > -- Cheers & Best regards, Feilong Wang (王飞龙) Head of R&D Catalyst Cloud - Cloud Native New Zealand -------------------------------------------------------------------------- Tel: +64-48032246 Email: flwang at catalyst.net.nz Level 6, Catalyst House, 150 Willis Street, Wellington -------------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Tue Feb 4 23:50:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 4 Feb 2020 16:50:14 -0700 Subject: [tripleo] py27 tests are being removed Message-ID: Greetings, Just in case you didn't see or hear about it. TripleO is removing all the py27 tox tests [1]. See the bug for details why. We really need to get CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be the upstream CI teams focus until it's done. Thank you [1] https://review.opendev.org/#/q/topic:1861803+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Wed Feb 5 00:42:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 04 Feb 2020 18:42:47 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> Message-ID: <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> I am writing at top now for easy ready. * Gate status: - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. [1] https://review.opendev.org/#/c/705089/ [2] https://review.opendev.org/#/c/705870/ -gmann ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > Hello Everyone, > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > of 'EOLing python2 drama' in subject :). > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > to install the latest neutron-lib and failing. > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > from master and kepe testing py2 on stable bracnhes. > > > > We have two way to fix this: > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > I am trying this first[2] and testing[3]. > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > Tried option#2: > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > or distro-specific job like centos7 etc where we have < py3.6. > > Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our > CI/CD. 
But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > Testing your cloud with the latest Tempest is the best possible way. > > Going with option#1: > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > for all possible distro/py version. > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > on >py3.6 env(venv or separate node). > * Patch is up - https://review.opendev.org/#/c/704840/ > > 2.Modify Tempest tox env basepython to py3 > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > like fedora or future distro > *Patch is up- https://review.opendev.org/#/c/704688/2 > > 3. Use compatible Tempest & its plugin tag for distro having * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > in-tree plugins for their stable branch testing. > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > in neutron-vpnaas case. This will be easy for future maintenance also. > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. > > [1] https://review.opendev.org/#/c/703476/ > [2] https://review.opendev.org/#/c/703011/ > [3] https://releases.openstack.org/rocky/#rocky-tempest > > -gmann > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > [2] https://review.opendev.org/#/c/703011/ > > [3] https://review.opendev.org/#/c/703012/ > > > > > > -gmanne > > > > > > > > > > > From kazumasa.nomura.rx at hitachi.com Wed Feb 5 05:16:54 2020 From: kazumasa.nomura.rx at hitachi.com (=?iso-2022-jp?B?GyRCTG5CPE9CQDUbKEIgLyBOT01VUkEbJEIhJBsoQktBWlVNQVNB?=) Date: Wed, 5 Feb 2020 05:16:54 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? Message-ID: Hi everyone, I work in Hitachi, Ltd as cinder driver development member. I am trying to build cinder third-party CI by using Software Factory. Is there anyone already used Software Factory for cinder third-party CI? 
If you have already built it, I want to get you to give me the information how to build it. If you are trying to build it, I want to share settings and procedures for building it and cooperate with anyone trying to build it. Please contact me if you are trying to build third-party CI by using Software Factory. Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From wespark at suddenlink.net Wed Feb 5 06:46:51 2020 From: wespark at suddenlink.net (wespark at suddenlink.net) Date: Tue, 04 Feb 2020 22:46:51 -0800 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: References: Message-ID: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> HI 野村和正 I have the interest to know how you are doing. can you share the info to me? Thanks. On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > I work in Hitachi, Ltd as cinder driver development member. > > I am trying to build cinder third-party CI by using Software Factory. > Is there anyone already used Software Factory for cinder third-party > CI? > > If you have already built it, I want to get you to give me the > information how to build it. > > If you are trying to build it, I want to share settings and procedures > for building it and cooperate with anyone trying to build it. > > Please contact me if you are trying to build third-party CI by using > Software Factory. > > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com From rico.lin.guanyu at gmail.com Wed Feb 5 08:07:51 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Wed, 5 Feb 2020 16:07:51 +0800 Subject: [tc] February meeting agenda Message-ID: Hello everyone, Our next meeting is happening this Thursday (the 6th), and the agenda is, as usual, on the wiki! Here is a primer of the agenda for this month: - Report on large scale sig - Report on tc/uc merge - report on the post for the analysis of the survey - Report on the convo Telemetry - Report on multi-arch SIG - report on infra liaison and static hosting - report on stable branch policy work - report on the oslo metrics project - report on the community goals for U and V, py2 drop - report on release naming - report on the ideas repo - report on charter change - Report on whether SIG guidelines worked - volunteers to represent OpenStack at the OpenDev advisory board - Report on the OSF board initiatives - Dropping side projects: using golden signals See you all in meeting:) -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcin.juszkiewicz at linaro.org Wed Feb 5 08:21:08 2020 From: marcin.juszkiewicz at linaro.org (Marcin Juszkiewicz) Date: Wed, 5 Feb 2020 09:21:08 +0100 Subject: [tripleo] py27 tests are being removed In-Reply-To: References: Message-ID: W dniu 05.02.2020 o 00:50, Wesley Hayutin pisze: > Just in case you didn't see or hear about it. TripleO is removing all the > py27 tox tests [1]. See the bug for details why. We really need to get > CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be > the upstream CI teams focus until it's done. You also have a job in Kolla - the only thing which blocks removal of py2 from project. 
From witold.bedyk at suse.com Wed Feb 5 09:17:18 2020 From: witold.bedyk at suse.com (Witek Bedyk) Date: Wed, 5 Feb 2020 10:17:18 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: References: Message-ID: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> Hi, we're using ujson in Monasca for serialization all over the place. While changing it to any other alternative is probably a drop-in replacement, we have in the past chosen to use ujson because of better performance. It is of great importance, in particular in persister. Current alternatives include orjson [1] and rapidjson [2]. We're going to measure which of them works best for our use case and how much faster they are compared to standard library module. Assuming there is a significant performance benefit, is there any preference from requirements team which one to include in global requirements? I haven't seen any distro packages for any of them. [1] https://pypi.org/project/orjson/ [2] https://pypi.org/project/python-rapidjson/ Best greetings Witek On 1/31/20 9:34 AM, Radosław Piliszek wrote: > This is a spinoff discussion of [1] to attract more people. > > As the subject goes, the situation of ujson is bad. Still, monasca and > gnocchi (both server and client) seem to be using it which may break > depending on compiler. > The original issue is that the released version of ujson is in > non-spec-conforming C which may break randomly based on used compiler > and linker. > There has been no release of ujson for more than 4 years. > > Based on general project activity, Monasca is probably able to fix it > but Gnocchi not so surely... > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > -yoctozepto > From kazumasa.nomura.rx at hitachi.com Wed Feb 5 09:26:36 2020 From: kazumasa.nomura.rx at hitachi.com (=?utf-8?B?6YeO5p2R5ZKM5q2jIC8gTk9NVVJB77yMS0FaVU1BU0E=?=) Date: Wed, 5 Feb 2020 09:26:36 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: Hi Wespark, I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. If I got a beneficial information during building Software Factory, I let community members know. Thanks, Kazumasa Nomura E-mail: kazumasa.nomura.rx at hitachi.com -----Original Message----- From: wespark at suddenlink.net Sent: Wednesday, February 5, 2020 3:47 PM To: 野村和正 / NOMURA,KAZUMASA Cc: openstack-discuss at lists.openstack.org Subject: Re: [cinder] anyone using Software Factory for third-party CI? HI 野村和正 I have the interest to know how you are doing. can you share the info to me? Thanks. On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > Hi everyone, > > I work in Hitachi, Ltd as cinder driver development member. > > I am trying to build cinder third-party CI by using Software Factory. > Is there anyone already used Software Factory for cinder third-party > CI? > > If you have already built it, I want to get you to give me the > information how to build it. > > If you are trying to build it, I want to share settings and procedures > for building it and cooperate with anyone trying to build it. > > Please contact me if you are trying to build third-party CI by using > Software Factory. 
> > Thanks, > > Kazumasa Nomura > > E-mail: kazumasa.nomura.rx at hitachi.com From mark at stackhpc.com Wed Feb 5 10:35:58 2020 From: mark at stackhpc.com (Mark Goddard) Date: Wed, 5 Feb 2020 10:35:58 +0000 Subject: [rdo-dev] [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Sun, 2 Feb 2020 at 21:06, Neal Gompa wrote: > > On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > > wrote: > > > > > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > > >> > > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > > >> > > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > > >> > wrote: > > >> > > > > >> > > I know it was for masakari. > > >> > > Gaëtan had to grab crmsh from opensuse: > > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > >> > > > > >> > > -yoctozepto > > >> > > > >> > Thanks Wes for getting this discussion going. I've been looking at > > >> > CentOS 8 today and trying to assess where we are. I created an > > >> > Etherpad to track status: > > >> > https://etherpad.openstack.org/p/kolla-centos8 > > >> > > > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > > > I found them, thanks. > > > > > > > >> > > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > > >> code when installing packages. It often happens on the rabbitmq and > > >> grafana images. There is a prompt about importing GPG keys prior to > > >> the error. > > >> > > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > > >> > > >> Related bug report? https://github.com/containers/libpod/issues/4431 > > >> > > >> Anyone familiar with it? > > >> > > > > > > Didn't know about this issue. > > > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > > > It seems to be due to the use of a GPG check on the repo (as opposed > > to packages). DNF doesn't use keys imported via rpm --import for this > > (I'm not sure what it uses), and prompts to add the key. This breaks > > without a terminal. More explanation here: > > https://review.opendev.org/#/c/704782. > > > > librepo has its own keyring for repo signature verification. Thanks Neal. Any pointers on how to add keys to it? > > > > -- > 真実はいつも一つ!/ Always, there's only one truth! From pfb29 at cam.ac.uk Wed Feb 5 10:53:32 2020 From: pfb29 at cam.ac.uk (Paul Browne) Date: Wed, 5 Feb 2020 10:53:32 +0000 Subject: Active-Active Cinder + RBD driver + Co-ordination Message-ID: Hi list, I had a quick question about Active-Active in cinder-volume and cinder-backup stable/stein and RBD driver, if anyone can help. Using only the Ceph RBD driver for volume backends, is it required to run A-A cinder services with clustering configuration so that they form a cluster? And, if so, is an external coordinator (redis/etcd/Consul) necessary, again only using RBD driver? 
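For context, the knobs that active-active cinder-volume hangs off are roughly the following; this is only a sketch with illustrative values (the cluster name, backend section and etcd endpoint are assumptions), not a statement of what the RBD driver strictly requires:

    # cinder.conf on each cinder-volume host that should join the same cluster
    [DEFAULT]
    cluster = mycluster

    [ceph]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name = ceph
    rbd_pool = volumes
    rbd_user = cinder

    [coordination]
    # the default file:// locks only coordinate within a single host, so a
    # shared tooz backend (etcd3/redis/zookeeper) is what provides locking
    # across the clustered cinder-volume services
    backend_url = etcd3+http://192.168.1.10:2379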
Best docs I could find on this so far were; https://docs.openstack.org/cinder/latest/contributor/high_availability.html , I support more aimed at devs/contributoers than operators, but it's not 100% clear to me on these questions Thanks, Paul -- ******************* Paul Browne Research Computing Platforms University Information Services Roger Needham Building JJ Thompson Avenue University of Cambridge Cambridge United Kingdom E-Mail: pfb29 at cam.ac.uk Tel: 0044-1223-746548 ******************* -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Wed Feb 5 11:33:25 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Wed, 5 Feb 2020 19:33:25 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Today I tried to recovery RabbitMQ back, but still not useful, even delete everything about data and configs for RabbitMQ then re-deploy (without destroy). And I found that the /etc/hosts on every nodes all been flushed, the hostname resolve data created by kolla-ansible are gone. Checked and found that the MAAS just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused /etc/hosts been reset everytime when boot. Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ data, so only I can do is destroy and deploy again. Fortunately this cluster was just beginning so no VM launch, and no do complex setup yet. I think the issue may solved, although still need a time to investigate. Based on this experience, need to notice about this may going to happen if using MAAS to deploy the OS. -Eddie Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > Hi Erik, > > I'm already checked NIC link and no issue found. Pinging the nodes each > other on each interfaces is OK. > And I'm not check docker logs about rabbitmq sbecause it works normally. > I'll check that out later. > > -Eddie > > Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > >> ⁹ >> >> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >> >>> Hi everyone, >>> >>> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + >>> 3 Storage (Ceph OSD) >>> site without internet. We did the shutdown few days ago since CNY >>> holidays. >>> >>> Today we re-launch whole cluster back. First we met the issue that >>> MariaDB containers keep >>> restarting, and we fixed by using mariadb_recovery command. >>> After that we check the status of each services, and found that all >>> services shown at >>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>> AMQP connection, >>> or other error found when check the downed service log. >>> >>> We tried reboot each servers but the situation still a same. Then we >>> found the RabbitMQ log not >>> updating, the last log still stayed at the date we shutdown. Logged in >>> to RabbitMQ container and >>> type "rabbitmqctl status" shows connection refused, and tried access its >>> web manager from >>> :15672 on browser just gave us "503 Service unavailable" message. >>> Also no port 5672 >>> listening. >>> >> >> >> Any chance you have a NIC that didn't come up? What is in the log of the >> container itself? (ie. docker log rabbitmq). >> >> >>> I searched this issue on the internet but only few information about >>> this. One of solution is delete >>> some files in mnesia folder, another is remove rabbitmq container and >>> its volume then re-deploy. >>> But both are not sure. Does anyone know how to solve it? >>> >>> >>> Many thanks, >>> Eddie. 
>>> >> >> -Erik >> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Feb 5 11:42:44 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 11:42:44 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: > Hi Wespark, > > I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. > If I got a beneficial information during building Software Factory, I let community members know. downstream at redhat we are starting to use software factory in the compute team for our internal ci. software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system for both 1st and 3rd party ci. > > Thanks, > Kazumasa Nomura > E-mail: kazumasa.nomura.rx at hitachi.com > > -----Original Message----- > From: wespark at suddenlink.net > Sent: Wednesday, February 5, 2020 3:47 PM > To: 野村和正 / NOMURA,KAZUMASA > Cc: openstack-discuss at lists.openstack.org > Subject: Re: [cinder] anyone using Software Factory for third-party CI? > > HI 野村和正 > > I have the interest to know how you are doing. can you share the info to me? > > Thanks. > > > On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: > > Hi everyone, > > > > I work in Hitachi, Ltd as cinder driver development member. > > > > I am trying to build cinder third-party CI by using Software Factory. > > Is there anyone already used Software Factory for cinder third-party > > CI? > > > > If you have already built it, I want to get you to give me the > > information how to build it. > > > > If you are trying to build it, I want to share settings and procedures > > for building it and cooperate with anyone trying to build it. > > > > Please contact me if you are trying to build third-party CI by using > > Software Factory. > > > > Thanks, > > > > Kazumasa Nomura > > > > E-mail: kazumasa.nomura.rx at hitachi.com From tdecacqu at redhat.com Wed Feb 5 13:52:00 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 05 Feb 2020 13:52:00 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> Message-ID: <87zhdxuoov.tristanC@fedora> Hi, I am a member of the SF development team. We started to work on a guide to document how to setup the services for OpenStack third-party CI here: https://softwarefactory-project.io/r/17097 Feel free to reach out if you have issues with the configuration. Regards, -Tristan On Tue, Feb 04, 2020 at 22:46 wespark at suddenlink.net wrote: > HI 野村和正 > > I have the interest to know how you are doing. can you share the info to > me? > > Thanks. > > > On 2020-02-05 13:16, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi everyone, >> >> I work in Hitachi, Ltd as cinder driver development member. >> >> I am trying to build cinder third-party CI by using Software Factory. >> Is there anyone already used Software Factory for cinder third-party >> CI? 
>> >> If you have already built it, I want to get you to give me the >> information how to build it. >> >> If you are trying to build it, I want to share settings and procedures >> for building it and cooperate with anyone trying to build it. >> >> Please contact me if you are trying to build third-party CI by using >> Software Factory. >> >> Thanks, >> >> Kazumasa Nomura >> >> E-mail: kazumasa.nomura.rx at hitachi.com -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From tdecacqu at redhat.com Wed Feb 5 14:27:28 2020 From: tdecacqu at redhat.com (Tristan Cacqueray) Date: Wed, 05 Feb 2020 14:27:28 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> Message-ID: <87wo91un1r.tristanC@fedora> On Wed, Feb 05, 2020 at 11:42 Sean Mooney wrote: > On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: >> Hi Wespark, >> >> I have started to take a look Software Factory from last week. I'm just getting started and trying to run sfconfig. >> If I got a beneficial information during building Software Factory, I let community members know. > downstream at redhat we are starting to use software factory in the compute team for our internal ci. > software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i > think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so > its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system > for both 1st and 3rd party ci. Please note that Software Factory is not just a packaged version of Zuul. It also integrates extra components such as Grafana for the monitoring and Logstash for indexing builds. We find that installing from source and configuring by hand all the services needed to operate a third party ci can be complicated. Thus Software Factory also provides a configuration playbook to deploy and configure all the services together in just a few commands. Regards, -Tristan -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 487 bytes Desc: not available URL: From smooney at redhat.com Wed Feb 5 14:37:07 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 14:37:07 +0000 Subject: [cinder] anyone using Software Factory for third-party CI? In-Reply-To: <87wo91un1r.tristanC@fedora> References: <5b364b7347266e71bb5552564c9b65ce@suddenlink.net> <77b049390d81dd9270394dbd3911c56dc9f1d51c.camel@redhat.com> <87wo91un1r.tristanC@fedora> Message-ID: <3bc440019a3c9d705d77835e5b0b406e81fe99f7.camel@redhat.com> On Wed, 2020-02-05 at 14:27 +0000, Tristan Cacqueray wrote: > On Wed, Feb 05, 2020 at 11:42 Sean Mooney wrote: > > On Wed, 2020-02-05 at 09:26 +0000, 野村和正 / NOMURA,KAZUMASA wrote: > > > Hi Wespark, > > > > > > I have started to take a look Software Factory from last week. I'm just getting started and trying to run > > > sfconfig. > > > If I got a beneficial information during building Software Factory, I let community members know. > > > > downstream at redhat we are starting to use software factory in the compute team for our internal ci. 
> > software factory is just a packaged version of zuul so haveing used zuul to do third party ci in the past i > > think it will be a good way of doing third party ci. that said zuul is also easy to deploy form source so > > its really a personaly choice if you deploy software factory or zuul nativivly. zuul is an excllent ci/gating system > > for both 1st and 3rd party ci. > > Please note that Software Factory is not just a packaged version of > Zuul. It also integrates extra components such as Grafana for the > monitoring and Logstash for indexing builds. yes that is true. > > We find that installing from source and configuring by hand all the > services needed to operate a third party ci can be complicated. > Thus Software Factory also provides a configuration playbook to deploy > and configure all the services together in just a few commands. yes, although i have done it 3 times: once by hand manually, once with kubernetes and once with the docker-compose example. my experience is you can do it by hand in a day. kubernetes took me about a week or two to get it working, and the docker-compose example that zuul has, which deploys gerrit, zuul, nodepool and a container as a static worker node, can be done in like an hour. im sure Software Factory helps and i will probably try it next time, but its not hard to do it manually, its just involved. if software factory makes it just a few commands and supports day two operations like reconfiguration and upgrades simply then that is definitely added value. > > Regards, > -Tristan From aschultz at redhat.com Wed Feb 5 14:40:26 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 5 Feb 2020 07:40:26 -0700 Subject: [tripleo] py27 tests are being removed In-Reply-To: References: Message-ID: For clarification (because this has come up multiple times now), we're not officially removing py27 support yet. We're killing off the job because of incompatible versions being released and it sounds like there isn't a desire to cap it in master. Because we're still using centos7 for our CI, we still need to be able to run the actual code under python2 until we get centos8 jobs up. On Wed, Feb 5, 2020 at 1:27 AM Marcin Juszkiewicz wrote: > > W dniu 05.02.2020 o 00:50, Wesley Hayutin pisze: > > > Just in case you didn't see or hear about it. TripleO is removing all the > > py27 tox tests [1]. See the bug for details why. We really need to get > > CentOS-8 jobs in place ASAP for ussuri and train, that will continue to be > > the upstream CI teams focus until it's done. > > You also have a job in Kolla - the only thing which blocks removal of > py2 from project. > From gmann at ghanshyammann.com Wed Feb 5 15:15:58 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 09:15:58 -0600 Subject: [all] Gate status: Stable/ocata|pike|queens|rocky is broken: Avoid recheck Message-ID: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> Hello Everyone, The stable/ocata, pike, queens and rocky gates are broken now due to a Tempest dependency requiring >=py3.6. I summarized the situation in the ML[1]. Do not recheck failed patches on those branches until the job explicitly disables Tempest. Fixes are in progress; I will update the status here once the fixes are merged.
[1]http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012371.html -gmann From ashlee at openstack.org Wed Feb 5 16:25:58 2020 From: ashlee at openstack.org (Ashlee Ferguson) Date: Wed, 5 Feb 2020 10:25:58 -0600 Subject: Help Shape the Track Content for OpenDev + PTG, June 8-11 in Vancouver In-Reply-To: References: Message-ID: <9C495D22-CA8F-4030-9961-ABE72B1F7DC3@openstack.org> Hi everyone, Just a reminder to nominate yourself by February 11 if you’re interested in volunteering as an OpenDev Programming Committee member, discussion Moderator, or would like to suggest topics for moderated discussions within a particular Track. We’re looking for subject matter experts on the following OpenDev Tracks: - Hardware Automation (accelerators, provisioning hardware, networking) - Large-scale Usage of Open Source Infrastructure Software (scale pain points, multi-location, CI/CD) - Containers in Production (isolation, virtualization, telecom containers) - Key Challenges for Open Source in 2020 (beyond licensing, public clouds, ethics) Nominate yourself here: https://openstackfoundation.formstack.com/forms/opendev_vancouver2020_volunteer Please let me know if you have any questions! Thanks, Ashlee > On Jan 29, 2020, at 4:04 PM, Ashlee Ferguson wrote: > > Hi everyone, > > Hopefully by now you’ve heard about our upcoming event, OpenDev + PTG Vancouver, June 8-11, 2020 . We need your help shaping this event! Our vision is for the content to be programmed by you-- the community. OSF is looking to kick things off by selecting members for OpenDev Programming Committees for each Track. That Program Committee will then select Moderators who will lead interactive discussions on a particular topic within the track. Below you'll have the opportunity to nominate yourself for a position on the Programming Committee, as a Moderator, or both, as well as suggesting specific Topics within each Track. PTG programming will kick off in the coming weeks. > > If you’re interested in volunteering as an OpenDev Programming Committee member, discussion Moderator, or would like to suggest topics for moderated discussions within a particular Track, please read the details below, and then fill out this form . 
> > We’re looking for subject matter experts on the following OpenDev Tracks: > - Hardware Automation (accelerators, provisioning hardware, networking) > - Large-scale Usage of Open Source Infrastructure Software (scale pain points, multi-location, CI/CD) > - Containers in Production (isolation, virtualization, telecom containers) > - Key Challenges for Open Source in 2020 (beyond licensing, public clouds, ethics) > > OpenDev Programming Committee members will: > Work with other Committee members, which will include OSF representatives, to curate OpenDev content based on subject expertise, community input, and relevance to open source infrastructure > Promote the individual Tracks within your networks > Review community input and suggestions for Track discussions > Solicit moderators from your network if you know someone who is a subject matter expert > Ensure diversity of speakers and companies represented in your Track > Focus topics around on real-world user stories and technical, in-the-trenches experiences > > Programming Committee members need to be available during the following dates/time commitments: > 8 - 10 hours from February - May for bi-weekly calls with your Track's Programming Committee (plus a couple of OSF representatives to facilitate the call) > OpenDev, June 8 - 10, 2020 (not required, but preferred) > Programming Committee members will receive a complimentary pass to the event > > Programming Committees will be comprised of a few people per Track who will work to select a handful of topics and moderators for each Track. The exact topic counts will be determined before Committees begin deciding. > > OpenDev Discussion Moderators will > Be appointed by the Programming Committees > Facilitate discussions within a particular Track > Have adequate knowledge and experience to lead and moderate discussion around certain topics during the event > Work with Programming Committee to decide focal point of discussion > > Moderators need to be available to attend OpenDev, June 8 - 10, 2020, and will receive a complimentary pass. > > > Programming Committee nominations are open until February 11. Deadlines to volunteer to be a moderator and suggest topics will be in late February. > > > Nominate yourself or suggest discussion topics here: https://openstackfoundation.formstack.com/forms/opendev_vancouver2020_volunteer > > Cheers, > The OpenStack Foundation > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mthode at mthode.org Wed Feb 5 16:46:30 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 10:46:30 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> Message-ID: <20200205164630.xidg42ego5kvg4ia@mthode.org> On 20-02-05 10:17:18, Witek Bedyk wrote: > Hi, > > we're using ujson in Monasca for serialization all over the place. While > changing it to any other alternative is probably a drop-in replacement, we > have in the past chosen to use ujson because of better performance. It is of > great importance, in particular in persister. Current alternatives include > orjson [1] and rapidjson [2]. We're going to measure which of them works > best for our use case and how much faster they are compared to standard > library module. 
> > Assuming there is a significant performance benefit, is there any preference > from requirements team which one to include in global requirements? I > haven't seen any distro packages for any of them. > > [1] https://pypi.org/project/orjson/ > [2] https://pypi.org/project/python-rapidjson/ > > Best greetings > Witek > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > This is a spinoff discussion of [1] to attract more people. > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > gnocchi (both server and client) seem to be using it which may break > > depending on compiler. > > The original issue is that the released version of ujson is in > > non-spec-conforming C which may break randomly based on used compiler > > and linker. > > There has been no release of ujson for more than 4 years. > > > > Based on general project activity, Monasca is probably able to fix it > > but Gnocchi not so surely... > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > -yoctozepto > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 in requiring glibc 2.18, released 2013, or later. orjson does not support PyPy. Given the above (I think we still need to support py35 at least) I'm not sure we can use it. Though it is my preferred other than that... (faster than ujson, more updates (last release yesterday), etc) -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From Arkady.Kanevsky at dell.com Wed Feb 5 16:58:09 2020 From: Arkady.Kanevsky at dell.com (Arkady.Kanevsky at dell.com) Date: Wed, 5 Feb 2020 16:58:09 +0000 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: References: Message-ID: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Is there a need for Ironic integration one? From: Alan Bishop Sent: Tuesday, February 4, 2020 2:20 PM To: Francesco Pantano Cc: John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks Subject: Re: [tripleo] rework of triple squads and the tripleo mtg. [EXTERNAL EMAIL] On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: On Tue, Feb 4, 2020 at 5:45 PM John Fulton > wrote: On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > Greetings, > > As mentioned at the previous tripleo meeting [1], we're going to revisit the current tripleo squads and the expectations for those squads at the tripleo meeting. > > Currently we have the following squads.. > 1. upgrades > 2. edge > 3. integration > 4. validations > 5. networking > 6. transformation > 7. ci > > A reasonable update could include the following.. > > 1. validations > 2. transformation > 3. mistral-to-ansible > 4. CI > 5. Ceph / Integration?? maybe just Ceph? I'm fine with "Ceph". The original intent of going from "Ceph Integration" to the more generic "Integration" was that it could include anyone using external-deploy-steps to deploy non-openstack projects with TripleO (k8s, skydive, etc). Though those things happened, we didn't really get anyone else to join the squad or update our etherpad so I'm fine with renaming it to Ceph. We're still active but our etherpad was getting old. I updated it just now. Gulio? Francesco? Alan? Agree here and I also updated the etherpad as well [1] w/ our current status and the open topics we still have on ceph side. 
Not sure if we want to use "Integration" since the topics couldn't be only ceph related but can involve other storage components. Giulio, Alan, wdyt? "Ceph integration" makes the most sense to me, but I'm fine with just naming it "Ceph" as we all know what that means. Alan John > 6. others?? > > The squads should reflect major current efforts by the TripleO team IMHO. > > For the meetings, I would propose we use this time and space to give context to current reviews in progress and solicit feedback. It's also a good time and space to discuss any upstream blockers for those reviews. > > Let's give this one week for comments etc.. Next week we'll update the etherpad list and squads. The etherpad list will be a decent way to communicate which reviews need attention. > > Thanks all!!! > > [1] http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status -- Francesco Pantano GPG KEY: F41BD75C -------------- next part -------------- An HTML attachment was scrubbed... URL: From witold.bedyk at suse.com Wed Feb 5 17:09:05 2020 From: witold.bedyk at suse.com (Witek Bedyk) Date: Wed, 5 Feb 2020 18:09:05 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205164630.xidg42ego5kvg4ia@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> Message-ID: <7498de77-6493-630e-0994-4dde4f28340b@suse.com> > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > in requiring glibc 2.18, released 2013, or later. orjson does not > support PyPy. > > Given the above (I think we still need to support py35 at least) I'm not > sure we can use it. Though it is my preferred other than that... > (faster than ujson, more updates (last release yesterday), etc) > Thanks Matthew, what's the status of Python 3.5 support? We've been dropping unit tests for 3.5 in Train [1]. [1] https://governance.openstack.org/tc/goals/selected/train/python3-updates.html Witek From mthode at mthode.org Wed Feb 5 17:27:22 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 11:27:22 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <7498de77-6493-630e-0994-4dde4f28340b@suse.com> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <7498de77-6493-630e-0994-4dde4f28340b@suse.com> Message-ID: <20200205172722.brzykcdwk44abhj4@mthode.org> On 20-02-05 18:09:05, Witek Bedyk wrote: > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > in requiring glibc 2.18, released 2013, or later. orjson does not > > support PyPy. > > > > Given the above (I think we still need to support py35 at least) I'm not > > sure we can use it. Though it is my preferred other than that... > > (faster than ujson, more updates (last release yesterday), etc) > > > > > Thanks Matthew, > what's the status of Python 3.5 support? We've been dropping unit tests for > 3.5 in Train [1]. > > [1] > https://governance.openstack.org/tc/goals/selected/train/python3-updates.html > > Witek > Looks like you are right, for some reason I thought we still supported it. 
looks good to me then -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From kendall at openstack.org Wed Feb 5 17:28:19 2020 From: kendall at openstack.org (Kendall Waters) Date: Wed, 5 Feb 2020 11:28:19 -0600 Subject: [OpenStack Foundation] Sponsorship Prospectus is Now Live - OpenDev+PTG Vancouver 2020 In-Reply-To: References: Message-ID: <2FE8AB56-F93D-4082-8FAF-C16FBCE99E09@openstack.org> Hi everyone, Sponsor sales for the Vancouver OpenDev + PTG event are now open! Below are 3 easy steps to become a sponsor: Step 1 - Review the sponsorship prospectus  Step 2 - Sign the Master Event Sponsorship Agreement (This step is only for companies who are sponsoring an OpenStack Foundation event for the first time) Step 3 - Sign the OpenDev + PTG Sponsorship Agreement Please let me know if you have any questions or would like to set up a call to talk through the sponsorship options. Cheers, Kendall Kendall Waters Perez OpenStack Marketing & Events kendall at openstack.org > On Feb 4, 2020, at 11:31 AM, Kendall Waters wrote: > > Hi everyone, > > The sponsorship prospectus for the upcoming OpenDev + PTG event happening in Vancouver, BC on June 8-11 is now live! We expect this to be an important event for sponsors, and will have a place for sponsors to have a physical presence (embedded in the event rather in a separate sponsors hall), as well as branding throughout the event and the option for a keynote in the morning for headline sponsors. > > OpenDev + PTG is a collaborative event organized by the OpenStack Foundation (OSF) gathering developers, system architects, and operators to address common open source infrastructure challenges. OpenDev will take place June 8-10 and the PTG will take place June 8-11. > > Each day will be broken into three parts: > Short kickoff with all attendees to set the goals for the day or discuss the outcomes of the previous day. Think of this like a mini-keynote, challenging your thoughts around the topic areas before you head into real collaborative sessions. > OpenDev: Morning discussions covering projects like Airship, Ansible, Ceph, Kata Containers, Kubernetes, OpenStack, StarlingX, Zuul and more centered around one of four different topics: > Hardware Automation > Large-scale Usage of Open Source Infrastructure Software > Containers in Production > Key challenges for open source in 2020 > PTG: Afternoon working sessions for project teams and SIGs to continue the morning’s discussions. > > The sponsorship contract will be available to sign starting tomorrow, February 5 at 9:30am PST at https://www.openstack.org/events/opendev-ptg-2020/sponsors . > > Please let me know if you have any questions. > > Cheers, > Kendall > > Kendall Waters Perez > OpenStack Marketing & Events > kendall at openstack.org > > > > > _______________________________________________ > Foundation mailing list > Foundation at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/foundation -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Albert.Braden at synopsys.com Wed Feb 5 17:33:34 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 17:33:34 +0000 Subject: Virtio memory balloon driver In-Reply-To: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> Message-ID: When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to disable it. How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? Console log: https://f.perl.bot/p/njvgbm The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 Dmesg: [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state Syslog: Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: host-cpu Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed and duplex are not writable. 
Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 -----Original Message----- From: Jeremy Stanley Sent: Tuesday, February 4, 2020 4:01 AM To: openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > seen any OOM errors. Where should I look for those? [...] The `dmesg` utility on the hypervisor host should show you the kernel's log ring buffer contents (the -T flag is useful to translate its timestamps into something more readable than seconds since boot too). If the ring buffer has overwritten the relevant timeframe then look for signs of kernel OOM killer invocation in your syslog or persistent journald storage. -- Jeremy Stanley From Albert.Braden at synopsys.com Wed Feb 5 17:40:59 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 17:40:59 +0000 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Eddie, This is the process that I use to reset RMQ when it fails. RMQ messages are ephemeral; losing your old RMQ messages doesn’t ruin the cluster. On master: service rabbitmq-server stop ps auxw|grep rabbit (kill any rabbit processes) rm -rf /var/lib/rabbitmq/mnesia/* service rabbitmq-server start rabbitmqctl add_user admin rabbitmqctl set_user_tags admin administrator rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" rabbitmqctl add_user openstack rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*" rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}' rabbitmqctl list_policies on slaves: rabbitmqctl stop_app If RMQ fails to reset on a slave, or fails to start after resetting, then: service rabbitmq-server stop ps auxw|grep rabbit (kill any rabbit processes) rm -rf /var/lib/rabbitmq/mnesia/* service rabbitmq-server start rabbitmqctl stop_app rabbitmqctl reset rabbitmqctl start_app rabbitmqctl stop_app rabbitmqctl join_cluster rabbit@ rabbitmqctl start_app From: Eddie Yen Sent: Wednesday, February 5, 2020 3:33 AM To: openstack-discuss Subject: Re: [kolla] All services stats DOWN after re-launch whole cluster. Today I tried to recovery RabbitMQ back, but still not useful, even delete everything about data and configs for RabbitMQ then re-deploy (without destroy). And I found that the /etc/hosts on every nodes all been flushed, the hostname resolve data created by kolla-ansible are gone. Checked and found that the MAAS just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused /etc/hosts been reset everytime when boot. 
Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ data, so only I can do is destroy and deploy again. Fortunately this cluster was just beginning so no VM launch, and no do complex setup yet. I think the issue may solved, although still need a time to investigate. Based on this experience, need to notice about this may going to happen if using MAAS to deploy the OS. -Eddie Eddie Yen > 於 2020年2月4日 週二 下午9:45寫道: Hi Erik, I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. -Eddie Erik McCormick > 於 2020年2月4日 週二 下午9:19寫道: ⁹ On Tue, Feb 4, 2020, 7:20 AM Eddie Yen > wrote: Hi everyone, We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) site without internet. We did the shutdown few days ago since CNY holidays. Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep restarting, and we fixed by using mariadb_recovery command. After that we check the status of each services, and found that all services shown at Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, or other error found when check the downed service log. We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not updating, the last log still stayed at the date we shutdown. Logged in to RabbitMQ container and type "rabbitmqctl status" shows connection refused, and tried access its web manager from :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 listening. Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). I searched this issue on the internet but only few information about this. One of solution is delete some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. But both are not sure. Does anyone know how to solve it? Many thanks, Eddie. -Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Feb 5 18:24:32 2020 From: smooney at redhat.com (Sean Mooney) Date: Wed, 05 Feb 2020 18:24:32 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> Message-ID: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. 
looking at https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5842-L5852 it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. if that is the case we could track this as a bug and potentially backport it. > > Console log: https://f.perl.bot/p/njvgbm > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] 
> > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From gmann at ghanshyammann.com Wed Feb 5 18:39:17 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 12:39:17 -0600 Subject: [tc][uc][all] Starting community-wide goals ideas for V series Message-ID: <17016a63ba1.dc0cafe2322988.5181705946513725916@ghanshyammann.com> Hello everyone, We are in R14 week of Ussuri cycle which means It's time to start the discussions about community-wide goals ideas for the V series. Community-wide goals are important in term of solving and improving a technical area across OpenStack as a whole. It has lot more benefits to be considered from users as well from a developers perspective. See [1] for more details about community-wide goals and process. We have the Zuulv3 migration goal already accepted and pre-selected for v cycle. If you are interested in proposing a goal, please write down the idea on this etherpad[2] - https://etherpad.openstack.org/p/YVR-v-series-goals Accordingly, we will start the separate ML discussion over each goal idea. Also, you can refer to the backlogs of community-wide goals from this[3] and ussuri cycle goals[4]. NOTE: TC has defined the goal process schedule[5] to streamlined the process and ready with goals for projects to plan/implement at the start of the cycle. I am hoping to start that schedule for W cycle goals. [1] https://governance.openstack.org/tc/goals/index.html [2] https://etherpad.openstack.org/p/YVR-v-series-goals [3] https://etherpad.openstack.org/p/community-goals [4] https://etherpad.openstack.org/p/PVG-u-series-goals [5] https://governance.openstack.org/tc/goals/#goal-selection-schedule -gmann From jeremyfreudberg at gmail.com Wed Feb 5 18:39:32 2020 From: jeremyfreudberg at gmail.com (Jeremy Freudberg) Date: Wed, 5 Feb 2020 13:39:32 -0500 Subject: [sahara] Cancelling Sahara meeting February 6 Message-ID: Hi all, There will be no Sahara meeting 2020-02-06, the principal reason being that I am on PTO. Holler if you need anything. Thanks, Jeremy From Rajini.Karthik at Dell.com Wed Feb 5 19:36:36 2020 From: Rajini.Karthik at Dell.com (Rajini.Karthik at Dell.com) Date: Wed, 5 Feb 2020 19:36:36 +0000 Subject: [Ironic] [Sushy] [CI] 3rd Party CI Message-ID: <264e9786861f4e55ad614f990d4585aa@AUSX13MPS308.AMER.DELL.COM> Announcing Dell 3rd Party Ironic CI is now functional for Openstack/Sushy library. Please take a look at https://review.opendev.org/#/c/705289/ Regards Rajini -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Feb 5 19:48:49 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 5 Feb 2020 19:48:49 +0000 Subject: Virtio memory balloon driver In-Reply-To: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: Thanks Sean! Should I start a bug report for this? 
-----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. looking at https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5842-2DL5852&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=_kEGfZqTkPscjy0GJB2N_WBXRJPEt2400ADV12hhxR8&e= it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. if that is the case we could track this as a bug and potentially backport it. > > Console log: https://urldefense.proofpoint.com/v2/url?u=https-3A__f.perl.bot_p_njvgbm&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=5J3hH_mxdtOyNFqbW6j9yGiSyMXmhy3bXrmXRHkJ9I0&e= > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: 
link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From gmann at ghanshyammann.com Wed Feb 5 20:28:28 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 05 Feb 2020 14:28:28 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> Message-ID: <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > I am writing at top now for easy ready. > > * Gate status: > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > the tempest tox env with master u-c needs to be fixed[2]. While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems to cap the upper-constraint like it was proposed by chandan[1]. NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain such cap only for Tempest & its plugins till stable/rocky EOL. 
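To make the proposed cap concrete, such an upper-constraints entry could look like the lines below; the package and version numbers are purely illustrative (the real list comes from the proposed review), the point being that pip constraint files accept environment markers, so one file can carry a py2 pin next to the py3 pin:

    # illustrative only -- not the actual proposed change
    neutron-lib===1.31.0;python_version=='2.7'
    neutron-lib===2.0.0;python_version>='3.6'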
Background on this issue: -------------------------------- Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. ------------------------------------------- [1] https://review.opendev.org/#/c/705685/ [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 [3] https://review.opendev.org/#/c/705089 [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml -gmann > - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > [1] https://review.opendev.org/#/c/705089/ > [2] https://review.opendev.org/#/c/705870/ > > -gmann > > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > Hello Everyone, > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > of 'EOLing python2 drama' in subject :). > > > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 > > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > > to install the latest neutron-lib and failing. > > > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > > from master and kepe testing py2 on stable bracnhes. > > > > > > We have two way to fix this: > > > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > > I am trying this first[2] and testing[3]. > > > > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > > > Tried option#2: > > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > > job where py > 3.6 was available like fedora (Bug 1861308). 
This can be fixed by making basepython as pythion3 > > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > > or distro-specific job like centos7 etc where we have < py3.6. > > > > Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > Testing your cloud with the latest Tempest is the best possible way. > > > > Going with option#1: > > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > > for all possible distro/py version. > > > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > > on >py3.6 env(venv or separate node). > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > 2.Modify Tempest tox env basepython to py3 > > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > > like fedora or future distro > > *Patch is up- https://review.opendev.org/#/c/704688/2 > > > > 3. Use compatible Tempest & its plugin tag for distro having > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > in-tree plugins for their stable branch testing. > > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > > in neutron-vpnaas case. This will be easy for future maintenance also. > > > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. 
> > > > [1] https://review.opendev.org/#/c/703476/ > > [2] https://review.opendev.org/#/c/703011/ > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > -gmann > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > [2] https://review.opendev.org/#/c/703011/ > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > From mtreinish at kortar.org Wed Feb 5 21:38:26 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 5 Feb 2020 16:38:26 -0500 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205164630.xidg42ego5kvg4ia@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> Message-ID: <20200205213826.GA898532@zeong> On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > On 20-02-05 10:17:18, Witek Bedyk wrote: > > Hi, > > > > we're using ujson in Monasca for serialization all over the place. While > > changing it to any other alternative is probably a drop-in replacement, we > > have in the past chosen to use ujson because of better performance. It is of > > great importance, in particular in persister. Current alternatives include > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > best for our use case and how much faster they are compared to standard > > library module. > > > > Assuming there is a significant performance benefit, is there any preference > > from requirements team which one to include in global requirements? I > > haven't seen any distro packages for any of them. > > > > [1] https://pypi.org/project/orjson/ > > [2] https://pypi.org/project/python-rapidjson/ > > > > Best greetings > > Witek > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > This is a spinoff discussion of [1] to attract more people. > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > gnocchi (both server and client) seem to be using it which may break > > > depending on compiler. > > > The original issue is that the released version of ujson is in > > > non-spec-conforming C which may break randomly based on used compiler > > > and linker. > > > There has been no release of ujson for more than 4 years. > > > > > > Based on general project activity, Monasca is probably able to fix it > > > but Gnocchi not so surely... > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > -yoctozepto > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > in requiring glibc 2.18, released 2013, or later. orjson does not > support PyPy. > > Given the above (I think we still need to support py35 at least) I'm not > sure we can use it. Though it is my preferred other than that... > (faster than ujson, more updates (last release yesterday), etc) It's also probably worth looking at the thread on this from August [1] discussing potentially using orjson. While they've added sdists since that original discussion (because of the pyo3-pack support being added) building it locally requires having rust nightly installed. This means for anyone on a non-x86_64 platform (including i686) will need to have rust nightly installed to pip install a package. 
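For anyone who has not touched rust before, that boils down to roughly the following before the pip install (illustrative only, the authoritative steps are whatever rustup and PyO3 document):

    curl https://sh.rustup.rs -sSf | sh
    rustup toolchain install nightly
    rustup default nightly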
Not that it's a super big burden, rustup makes it pretty easy to do, but it's a pretty uncommon thing for most people. But, I think that combined with no py35 support probably makes it a difficult thing to add to g-r. [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html -Matthew Treinish -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mthode at mthode.org Wed Feb 5 22:21:43 2020 From: mthode at mthode.org (Matthew Thode) Date: Wed, 5 Feb 2020 16:21:43 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205213826.GA898532@zeong> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> Message-ID: <20200205222143.xortzgo3zgk7d2im@mthode.org> On 20-02-05 16:38:26, Matthew Treinish wrote: > On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > > On 20-02-05 10:17:18, Witek Bedyk wrote: > > > Hi, > > > > > > we're using ujson in Monasca for serialization all over the place. While > > > changing it to any other alternative is probably a drop-in replacement, we > > > have in the past chosen to use ujson because of better performance. It is of > > > great importance, in particular in persister. Current alternatives include > > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > > best for our use case and how much faster they are compared to standard > > > library module. > > > > > > Assuming there is a significant performance benefit, is there any preference > > > from requirements team which one to include in global requirements? I > > > haven't seen any distro packages for any of them. > > > > > > [1] https://pypi.org/project/orjson/ > > > [2] https://pypi.org/project/python-rapidjson/ > > > > > > Best greetings > > > Witek > > > > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > > This is a spinoff discussion of [1] to attract more people. > > > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > > gnocchi (both server and client) seem to be using it which may break > > > > depending on compiler. > > > > The original issue is that the released version of ujson is in > > > > non-spec-conforming C which may break randomly based on used compiler > > > > and linker. > > > > There has been no release of ujson for more than 4 years. > > > > > > > > Based on general project activity, Monasca is probably able to fix it > > > > but Gnocchi not so surely... > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > > > -yoctozepto > > > > > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > in requiring glibc 2.18, released 2013, or later. orjson does not > > support PyPy. > > > > Given the above (I think we still need to support py35 at least) I'm not > > sure we can use it. Though it is my preferred other than that... > > (faster than ujson, more updates (last release yesterday), etc) > > It's also probably worth looking at the thread on this from August [1] > discussing potentially using orjson. 
While they've added sdists since that > original discussion (because of the pyo3-pack support being added) building > it locally requires having rust nightly installed. This means for anyone on > a non-x86_64 platform (including i686) will need to have rust nightly > installed to pip install a package. Not that it's a super big burden, rustup > makes it pretty easy to do, but it's a pretty uncommon thing for most people. > But, I think that combined with no py35 support probably makes it a difficult > thing to add to g-r. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html > > -Matthew Treinish Forgot about that, does it still need nightly though? I'd hope that it doesn't but wouldn't be surprised if it does. Arch support is important though, some jobs execute on ppc64 and arm64 as well iirc. -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From mtreinish at kortar.org Wed Feb 5 23:03:38 2020 From: mtreinish at kortar.org (Matthew Treinish) Date: Wed, 5 Feb 2020 18:03:38 -0500 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205222143.xortzgo3zgk7d2im@mthode.org> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> Message-ID: <20200205230338.GB898532@zeong> On Wed, Feb 05, 2020 at 04:21:43PM -0600, Matthew Thode wrote: > On 20-02-05 16:38:26, Matthew Treinish wrote: > > On Wed, Feb 05, 2020 at 10:46:30AM -0600, Matthew Thode wrote: > > > On 20-02-05 10:17:18, Witek Bedyk wrote: > > > > Hi, > > > > > > > > we're using ujson in Monasca for serialization all over the place. While > > > > changing it to any other alternative is probably a drop-in replacement, we > > > > have in the past chosen to use ujson because of better performance. It is of > > > > great importance, in particular in persister. Current alternatives include > > > > orjson [1] and rapidjson [2]. We're going to measure which of them works > > > > best for our use case and how much faster they are compared to standard > > > > library module. > > > > > > > > Assuming there is a significant performance benefit, is there any preference > > > > from requirements team which one to include in global requirements? I > > > > haven't seen any distro packages for any of them. > > > > > > > > [1] https://pypi.org/project/orjson/ > > > > [2] https://pypi.org/project/python-rapidjson/ > > > > > > > > Best greetings > > > > Witek > > > > > > > > > > > > On 1/31/20 9:34 AM, Radosław Piliszek wrote: > > > > > This is a spinoff discussion of [1] to attract more people. > > > > > > > > > > As the subject goes, the situation of ujson is bad. Still, monasca and > > > > > gnocchi (both server and client) seem to be using it which may break > > > > > depending on compiler. > > > > > The original issue is that the released version of ujson is in > > > > > non-spec-conforming C which may break randomly based on used compiler > > > > > and linker. > > > > > There has been no release of ujson for more than 4 years. > > > > > > > > > > Based on general project activity, Monasca is probably able to fix it > > > > > but Gnocchi not so surely... 
> > > > > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-January/thread.html > > > > > > > > > > -yoctozepto > > > > > > > > > > > > > > > orjson supports CPython 3.6, 3.7, 3.8, and 3.9. It distributes wheels > > > for Linux, macOS, and Windows. The manylinux1 wheel differs from PEP 513 > > > in requiring glibc 2.18, released 2013, or later. orjson does not > > > support PyPy. > > > > > > Given the above (I think we still need to support py35 at least) I'm not > > > sure we can use it. Though it is my preferred other than that... > > > (faster than ujson, more updates (last release yesterday), etc) > > > > It's also probably worth looking at the thread on this from August [1] > > discussing potentially using orjson. While they've added sdists since that > > original discussion (because of the pyo3-pack support being added) building > > it locally requires having rust nightly installed. This means for anyone on > > a non-x86_64 platform (including i686) will need to have rust nightly > > installed to pip install a package. Not that it's a super big burden, rustup > > makes it pretty easy to do, but it's a pretty uncommon thing for most people. > > But, I think that combined with no py35 support probably makes it a difficult > > thing to add to g-r. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008849.html > > > > -Matthew Treinish > > Forgot about that, does it still need nightly though? I'd hope that it > doesn't but wouldn't be surprised if it does. Arch support is important > though, some jobs execute on ppc64 and arm64 as well iirc. > It does, it's because of PyO3 [1] which is the library most projects use for a python<->rust interface. PyO3 relies on some rust language features which are not part of stable yet. That being said I've got a couple rust/python projects[2][3] and have never had an issue with nightly rust, it's surprisingly stable and useable, and rustup makes installing it and keeping up to date simple. But despite that, for a project the size of OpenStack I think it's probably a bit much to ask for anyone not on x86_64 to need to have rust nightly installed just to install things via pip. -Matthew Treinish [1] https://github.com/PyO3/pyo3 [2] https://github.com/mtreinish/retworkx [3] https://github.com/mtreinish/pyrqasm -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From emilien at redhat.com Thu Feb 6 00:14:31 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 5 Feb 2020 19:14:31 -0500 Subject: [tripleo] deep-dive containers & tooling Message-ID: Hi folks, On Friday I'll do a deep-dive on where we are with container tools. It's basically an update on the removal of Paunch, what will change etc. I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask questions or give feedback. https://bluejeans.com/6007759543 Link of the slides: https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Thu Feb 6 00:22:06 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 08:22:06 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Albert, thanks for your process. 
I'll record these and think about how to do this on kolla if similar issue happen in the future. -Eddie Albert Braden 於 2020年2月6日 週四 上午1:41寫道: > Hi Eddie, > > > > This is the process that I use to reset RMQ when it fails. RMQ messages > are ephemeral; losing your old RMQ messages doesn’t ruin the cluster. > > > > On master: > > service rabbitmq-server stop > > ps auxw|grep rabbit > > (kill any rabbit processes) > > rm -rf /var/lib/rabbitmq/mnesia/* > > service rabbitmq-server start > > rabbitmqctl add_user admin > > rabbitmqctl set_user_tags admin administrator > > rabbitmqctl set_permissions -p / admin ".*" ".*" ".*" > > rabbitmqctl add_user openstack > > rabbitmqctl set_permissions -p / openstack ".*" ".*" ".*" > > rabbitmqctl set_policy ha-all "" '{"ha-mode":"all"}' > > rabbitmqctl list_policies > > > > on slaves: > > rabbitmqctl stop_app > > If RMQ fails to reset on a slave, or fails to start after resetting, then: > > service rabbitmq-server stop > > ps auxw|grep rabbit > > (kill any rabbit processes) > > rm -rf /var/lib/rabbitmq/mnesia/* > > service rabbitmq-server start > > rabbitmqctl stop_app > > rabbitmqctl reset > > rabbitmqctl start_app > > rabbitmqctl stop_app > > rabbitmqctl join_cluster rabbit@ > > rabbitmqctl start_app > > > > *From:* Eddie Yen > *Sent:* Wednesday, February 5, 2020 3:33 AM > *To:* openstack-discuss > *Subject:* Re: [kolla] All services stats DOWN after re-launch whole > cluster. > > > > Today I tried to recovery RabbitMQ back, but still not useful, even delete > everything > > about data and configs for RabbitMQ then re-deploy (without destroy). > > > > And I found that the /etc/hosts on every nodes all been flushed, the > hostname > > resolve data created by kolla-ansible are gone. Checked and found that the > MAAS > > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which > caused > > /etc/hosts been reset everytime when boot. > > > > Not sure it was a root cause or not but unfortunately I already reset > whole RabbitMQ > > data, so only I can do is destroy and deploy again. Fortunately this > cluster was just > > beginning so no VM launch, and no do complex setup yet. > > > > I think the issue may solved, although still need a time to investigate. > Based on this > > experience, need to notice about this may going to happen if using MAAS to > deploy > > the OS. > > > > -Eddie > > > > Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > > Hi Erik, > > > > I'm already checked NIC link and no issue found. Pinging the nodes each > other on each interfaces is OK. > > And I'm not check docker logs about rabbitmq sbecause it works normally. > I'll check that out later. > > > > -Eddie > > > > Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: > > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: > > Hi everyone, > > > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 > Storage (Ceph OSD) > > site without internet. We did the shutdown few days ago since CNY > holidays. > > > > Today we re-launch whole cluster back. First we met the issue that MariaDB > containers keep > > restarting, and we fixed by using mariadb_recovery command. > > After that we check the status of each services, and found that all > services shown at > > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP > connection, > > or other error found when check the downed service log. > > > > We tried reboot each servers but the situation still a same. Then we found > the RabbitMQ log not > > updating, the last log still stayed at the date we shutdown. 
Logged in to > RabbitMQ container and > > type "rabbitmqctl status" shows connection refused, and tried access its > web manager from > > :15672 on browser just gave us "503 Service unavailable" message. > Also no port 5672 > > listening. > > > > > > Any chance you have a NIC that didn't come up? What is in the log of the > container itself? (ie. docker log rabbitmq). > > > > > > I searched this issue on the internet but only few information about this. > One of solution is delete > > some files in mnesia folder, another is remove rabbitmq container and its > volume then re-deploy. > > But both are not sure. Does anyone know how to solve it? > > > > > > Many thanks, > > Eddie. > > > > -Erik > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From emilien at redhat.com Thu Feb 6 00:22:02 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 5 Feb 2020 19:22:02 -0500 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Of course it'll be recorded and the link will be available for everyone. On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, wrote: > Hi folks, > > On Friday I'll do a deep-dive on where we are with container tools. > It's basically an update on the removal of Paunch, what will change etc. > > I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask > questions or give feedback. > > https://bluejeans.com/6007759543 > Link of the slides: > https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From dangtrinhnt at gmail.com Thu Feb 6 01:25:23 2020 From: dangtrinhnt at gmail.com (Trinh Nguyen) Date: Thu, 6 Feb 2020 10:25:23 +0900 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 In-Reply-To: References: Message-ID: Thank Kendall for the effort. Searchlight has landed the doc on master :D [1][2] [1] https://review.opendev.org/#/c/705968/ [2] https://docs.openstack.org/searchlight/latest/contributor/contributing.html On Wed, Feb 5, 2020 at 4:20 AM Kendall Nelson wrote: > > Hello All! > > At last a dedicated update solely to the Contrib & PTL Docs community > goal! Get excited :) > > At this point the goal has been accepted[1], and the template[2] has been > created and merged! > > So, the next step is for all projects to use the cookiecutter template[2] > and fill in the extra details after you generate the .rst that should auto > populate some of the information. > > As you are doing that, please assign yourself the associated tasks in the > story I created[3]. > > If you have any questions or concerns, please let me know! > > -Kendall Nelson (diablo_rojo) > > > [1] Goal: > https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html > [2] Docs Template: > https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst > [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 > > > -- *Trinh Nguyen* *www.edlab.xyz * -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From veeraready at yahoo.co.in Thu Feb 6 07:09:39 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 6 Feb 2020 07:09:39 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> Message-ID: <1416517604.208171.1580972979203@mail.yahoo.com> Hi,I am trying to run kubelet in arm64 platform1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile2.    Generated kuryr-cni-arm64 container.3.     my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) My master node in x86 installed successfully using devstack While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) COntroller logs: http://paste.openstack.org/show/789209/ Please help me to fix the issue Veera. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hello at dincercelik.com Thu Feb 6 07:13:10 2020 From: hello at dincercelik.com (Dincer Celik) Date: Thu, 6 Feb 2020 10:13:10 +0300 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Eddie, Seems like an issue[1] which has been fixed previously. Could you please let me know which version are you using? -osmanlicilegi [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 > On 5 Feb 2020, at 14:33, Eddie Yen wrote: > > Today I tried to recovery RabbitMQ back, but still not useful, even delete everything > about data and configs for RabbitMQ then re-deploy (without destroy). > > And I found that the /etc/hosts on every nodes all been flushed, the hostname > resolve data created by kolla-ansible are gone. Checked and found that the MAAS > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which caused > /etc/hosts been reset everytime when boot. > > Not sure it was a root cause or not but unfortunately I already reset whole RabbitMQ > data, so only I can do is destroy and deploy again. Fortunately this cluster was just > beginning so no VM launch, and no do complex setup yet. > > I think the issue may solved, although still need a time to investigate. Based on this > experience, need to notice about this may going to happen if using MAAS to deploy > the OS. > > -Eddie > > Eddie Yen > 於 2020年2月4日 週二 下午9:45寫道: > Hi Erik, > > I'm already checked NIC link and no issue found. Pinging the nodes each other on each interfaces is OK. > And I'm not check docker logs about rabbitmq sbecause it works normally. I'll check that out later. > > -Eddie > > Erik McCormick > 於 2020年2月4日 週二 下午9:19寫道: > ⁹ > > On Tue, Feb 4, 2020, 7:20 AM Eddie Yen > wrote: > Hi everyone, > > We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + 3 Storage (Ceph OSD) > site without internet. We did the shutdown few days ago since CNY holidays. > > Today we re-launch whole cluster back. First we met the issue that MariaDB containers keep > restarting, and we fixed by using mariadb_recovery command. > After that we check the status of each services, and found that all services shown at > Admin > System > System Information are DOWN. Strange is no MariaDB, AMQP connection, > or other error found when check the downed service log. > > We tried reboot each servers but the situation still a same. Then we found the RabbitMQ log not > updating, the last log still stayed at the date we shutdown. 
Logged in to RabbitMQ container and > type "rabbitmqctl status" shows connection refused, and tried access its web manager from > :15672 on browser just gave us "503 Service unavailable" message. Also no port 5672 > listening. > > > Any chance you have a NIC that didn't come up? What is in the log of the container itself? (ie. docker log rabbitmq). > > > I searched this issue on the internet but only few information about this. One of solution is delete > some files in mnesia folder, another is remove rabbitmq container and its volume then re-deploy. > But both are not sure. Does anyone know how to solve it? > > > Many thanks, > Eddie. > > -Erik -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Feb 6 07:35:03 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 6 Feb 2020 08:35:03 +0100 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: <20200205230338.GB898532@zeong> References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> <20200205230338.GB898532@zeong> Message-ID: Alrighty, folks. The summary is: orjson is bad due to requirement of PyO3 to run rust nightly on non-x86_64. Then what about the other contestant, rapidjson? [1] It's a wrapper over a popular C++ json library. Both wrapper and lib look alive enough to be considered serious. [1] https://pypi.org/project/python-rapidjson/ -yoctozepto From missile0407 at gmail.com Thu Feb 6 07:57:21 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 15:57:21 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: Hi Dincer, I'm using Rocky, and seems like this fix didn't merge to stable/rocky. And also what you wrote about flush host table issue in MAAS deployment. -Eddie Dincer Celik 於 2020年2月6日 週四 下午3:13寫道: > Hi Eddie, > > Seems like an issue[1] which has been fixed previously. Could you please > let me know which version are you using? > > -osmanlicilegi > > [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 > > On 5 Feb 2020, at 14:33, Eddie Yen wrote: > > Today I tried to recovery RabbitMQ back, but still not useful, even delete > everything > about data and configs for RabbitMQ then re-deploy (without destroy). > > And I found that the /etc/hosts on every nodes all been flushed, the > hostname > resolve data created by kolla-ansible are gone. Checked and found that the > MAAS > just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which > caused > /etc/hosts been reset everytime when boot. > > Not sure it was a root cause or not but unfortunately I already reset > whole RabbitMQ > data, so only I can do is destroy and deploy again. Fortunately this > cluster was just > beginning so no VM launch, and no do complex setup yet. > > I think the issue may solved, although still need a time to investigate. > Based on this > experience, need to notice about this may going to happen if using MAAS to > deploy > the OS. > > -Eddie > > Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: > >> Hi Erik, >> >> I'm already checked NIC link and no issue found. Pinging the nodes each >> other on each interfaces is OK. >> And I'm not check docker logs about rabbitmq sbecause it works normally. >> I'll check that out later. 
>> >> -Eddie >> >> Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: >> >>> ⁹ >>> >>> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >>> >>>> Hi everyone, >>>> >>>> We have the Kolla Openstack site, which is 3 HCI (Controller+Compute) + >>>> 3 Storage (Ceph OSD) >>>> site without internet. We did the shutdown few days ago since CNY >>>> holidays. >>>> >>>> Today we re-launch whole cluster back. First we met the issue that >>>> MariaDB containers keep >>>> restarting, and we fixed by using mariadb_recovery command. >>>> After that we check the status of each services, and found that all >>>> services shown at >>>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>>> AMQP connection, >>>> or other error found when check the downed service log. >>>> >>>> We tried reboot each servers but the situation still a same. Then we >>>> found the RabbitMQ log not >>>> updating, the last log still stayed at the date we shutdown. Logged in >>>> to RabbitMQ container and >>>> type "rabbitmqctl status" shows connection refused, and tried access >>>> its web manager from >>>> :15672 on browser just gave us "503 Service unavailable" message. >>>> Also no port 5672 >>>> listening. >>>> >>> >>> >>> Any chance you have a NIC that didn't come up? What is in the log of the >>> container itself? (ie. docker log rabbitmq). >>> >>> >>>> I searched this issue on the internet but only few information about >>>> this. One of solution is delete >>>> some files in mnesia folder, another is remove rabbitmq container and >>>> its volume then re-deploy. >>>> But both are not sure. Does anyone know how to solve it? >>>> >>>> >>>> Many thanks, >>>> Eddie. >>>> >>> >>> -Erik >>> >>>> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From missile0407 at gmail.com Thu Feb 6 08:08:25 2020 From: missile0407 at gmail.com (Eddie Yen) Date: Thu, 6 Feb 2020 16:08:25 +0800 Subject: [kolla] All services stats DOWN after re-launch whole cluster. In-Reply-To: References: Message-ID: (Click the "Send" button too fast...) Thanks to Dincer's information. Looks like the issue has already been resolved before but not merge to the branch we're using. I'll do the cherry-pick to stable/rocky later. -Eddie Eddie Yen 於 2020年2月6日 週四 下午3:57寫道: > Hi Dincer, > > I'm using Rocky, and seems like this fix didn't merge to stable/rocky. > And also what you wrote about flush host table issue in MAAS deployment. > > -Eddie > > Dincer Celik 於 2020年2月6日 週四 下午3:13寫道: > >> Hi Eddie, >> >> Seems like an issue[1] which has been fixed previously. Could you please >> let me know which version are you using? >> >> -osmanlicilegi >> >> [1] https://bugs.launchpad.net/kolla-ansible/+bug/1837699 >> >> On 5 Feb 2020, at 14:33, Eddie Yen wrote: >> >> Today I tried to recovery RabbitMQ back, but still not useful, even >> delete everything >> about data and configs for RabbitMQ then re-deploy (without destroy). >> >> And I found that the /etc/hosts on every nodes all been flushed, the >> hostname >> resolve data created by kolla-ansible are gone. Checked and found that >> the MAAS >> just enabled manage_etc_hosts config in /etc/cloud/cloud.cfg.d/ which >> caused >> /etc/hosts been reset everytime when boot. >> >> Not sure it was a root cause or not but unfortunately I already reset >> whole RabbitMQ >> data, so only I can do is destroy and deploy again. Fortunately this >> cluster was just >> beginning so no VM launch, and no do complex setup yet. >> >> I think the issue may solved, although still need a time to investigate. 
>> Based on this >> experience, need to notice about this may going to happen if using MAAS >> to deploy >> the OS. >> >> -Eddie >> >> Eddie Yen 於 2020年2月4日 週二 下午9:45寫道: >> >>> Hi Erik, >>> >>> I'm already checked NIC link and no issue found. Pinging the nodes each >>> other on each interfaces is OK. >>> And I'm not check docker logs about rabbitmq sbecause it works normally. >>> I'll check that out later. >>> >>> -Eddie >>> >>> Erik McCormick 於 2020年2月4日 週二 下午9:19寫道: >>> >>>> ⁹ >>>> >>>> On Tue, Feb 4, 2020, 7:20 AM Eddie Yen wrote: >>>> >>>>> Hi everyone, >>>>> >>>>> We have the Kolla Openstack site, which is 3 HCI >>>>> (Controller+Compute) + 3 Storage (Ceph OSD) >>>>> site without internet. We did the shutdown few days ago since CNY >>>>> holidays. >>>>> >>>>> Today we re-launch whole cluster back. First we met the issue that >>>>> MariaDB containers keep >>>>> restarting, and we fixed by using mariadb_recovery command. >>>>> After that we check the status of each services, and found that all >>>>> services shown at >>>>> Admin > System > System Information are DOWN. Strange is no MariaDB, >>>>> AMQP connection, >>>>> or other error found when check the downed service log. >>>>> >>>>> We tried reboot each servers but the situation still a same. Then we >>>>> found the RabbitMQ log not >>>>> updating, the last log still stayed at the date we shutdown. Logged in >>>>> to RabbitMQ container and >>>>> type "rabbitmqctl status" shows connection refused, and tried access >>>>> its web manager from >>>>> :15672 on browser just gave us "503 Service unavailable" message. >>>>> Also no port 5672 >>>>> listening. >>>>> >>>> >>>> >>>> Any chance you have a NIC that didn't come up? What is in the log of >>>> the container itself? (ie. docker log rabbitmq). >>>> >>>> >>>>> I searched this issue on the internet but only few information about >>>>> this. One of solution is delete >>>>> some files in mnesia folder, another is remove rabbitmq container and >>>>> its volume then re-deploy. >>>>> But both are not sure. Does anyone know how to solve it? >>>>> >>>>> >>>>> Many thanks, >>>>> Eddie. >>>>> >>>> >>>> -Erik >>>> >>>>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Thu Feb 6 08:27:35 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Feb 2020 09:27:35 +0100 Subject: [queens]cinder][iscsi] issue In-Reply-To: References: Message-ID: Hello , On centos kvm nodes, setting skip_kpartx no in multipath.conf solved the problem and now os_brick can flush maps Ignazio Il giorno lun 3 feb 2020 alle ore 15:13 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello all we are testing openstack queens cinder driver for Unity iscsi > (driver cinder.volume.drivers.dell_emc.unity.Driver). > > The unity storage is a Unity600 Version 4.5.10.5.001 > > We are facing an issue when we try to detach volume from a virtual machine > with two or more volumes attached (this happens often but not always). > > We also facing the same issue live migrating a virtual machine. > > > Multimaph -f does not work because it returns "map in use" and the volume > is not detached. > > Attached here there hare nova-compute e volume logs in debug mode. > > Could anyone help me ? > > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From geguileo at redhat.com Thu Feb 6 09:15:12 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 6 Feb 2020 10:15:12 +0100 Subject: Active-Active Cinder + RBD driver + Co-ordination In-Reply-To: References: Message-ID: <20200206091512.lh5qh366prup6qop@localhost> On 05/02, Paul Browne wrote: > Hi list, > > I had a quick question about Active-Active in cinder-volume and > cinder-backup stable/stein and RBD driver, if anyone can help. > > Using only the Ceph RBD driver for volume backends, is it required to run > A-A cinder services with clustering configuration so that they form a > cluster? > > And, if so, is an external coordinator (redis/etcd/Consul) necessary, > again only using RBD driver? > Hi, For the time being the cluster concept only applies to the cinder-volume service. The cinder-backup service has 2 modes of operation: only backup from current node (so we can have different backup drivers on each node) deployed as A/P, and backup from any node (then all drivers must be the same) deployed as A/A. When deploying cinder-volume as active-active a coordinator is required to perform the functions of a DLM to respect mutual exclusion sections across the whole cluster. Drivers that can be used for the coordination are those available in Tooz [1] that support locks (afaik they all support them). If you don't form a cluster and deploy all cinder-volume services in A/A sharing the same host you'll end up in a world of pain, among other things, as soon as you add a new node or a cinder-volume service is restarted, as it will disturb all ongoing operations from the other cinder-volume services. I hope this helps. Cheers, Gorka. [1]: https://docs.openstack.org/tooz/latest/user/drivers.html > Best docs I could find on this so far were; > > https://docs.openstack.org/cinder/latest/contributor/high_availability.html > , > > I support more aimed at devs/contributoers than operators, but it's not > 100% clear to me on these questions > > Thanks, > Paul > > -- > ******************* > Paul Browne > Research Computing Platforms > University Information Services > Roger Needham Building > JJ Thompson Avenue > University of Cambridge > Cambridge > United Kingdom > E-Mail: pfb29 at cam.ac.uk > Tel: 0044-1223-746548 > ******************* From chkumar246 at gmail.com Thu Feb 6 09:42:16 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Thu, 6 Feb 2020 15:12:16 +0530 Subject: [tripleo] no rechecks Message-ID: Hello, TripleO jobs are failing with following bug. upstream tripleo jobs using os_tempest failing with ERROR: No matching distribution found for oslo.db===7.0.0 -> https://bugs.launchpad.net/tripleo/+bug/1862134 Fix is in progress: https://review.opendev.org/#/c/706196/ We need this patch to land before it's safe to blindly recheck. We appreciate your patience while we resolve the issues. Thanks, Chandan Kumar From florian at datalounges.com Thu Feb 6 10:01:42 2020 From: florian at datalounges.com (Florian Rommel) Date: Thu, 06 Feb 2020 12:01:42 +0200 Subject: [charms] modular deployment Message-ID: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Hi, we are starting to look into deploying Openstack with Charms ,however there seems to be NO documentation on how to separate ceph osd and mon out of the Openstack-base bundle. We already have a ceph cluster so we would like to reuse that instead of hyperconverging. Is there any way do do this with charms or do we need to look into openstack ansible? 
Thank you and best regards, //F -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Thu Feb 6 10:13:26 2020 From: james.page at canonical.com (James Page) Date: Thu, 6 Feb 2020 10:13:26 +0000 Subject: [charms] modular deployment In-Reply-To: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> References: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Message-ID: Hi Florian On Thu, Feb 6, 2020 at 10:03 AM Florian Rommel wrote: > Hi, we are starting to look into deploying Openstack with Charms ,however > there seems to be NO documentation on how to separate ceph osd and mon out > of the Openstack-base bundle. We already have a ceph cluster so we would > like to reuse that instead of hyperconverging. > > > > Is there any way do do this with charms or do we need to look into > openstack ansible? > This is not a typical deployment but is something that is possible - you can use the ceph-proxy to wire your existing ceph deployment into charm deployed OpenStack: https://jaas.ai/ceph-proxy Basically it acts as an intermediary between the OpenStack Charms and the existing Ceph deployment. Cheers James > -------------- next part -------------- An HTML attachment was scrubbed... URL: From florian at datalounges.com Thu Feb 6 10:15:38 2020 From: florian at datalounges.com (Florian Rommel) Date: Thu, 06 Feb 2020 12:15:38 +0200 Subject: [charms] modular deployment In-Reply-To: References: <1A1FB32D-7D00-48FF-BC9D-D2556976897D@datalounges.com> Message-ID: Hi James, thanks for that.. I will check this out, this might do the trick from briefly checking it over.. Best regards, //F From: James Page Date: Thursday 6. February 2020 at 12.13 To: Florian Rommel Cc: "openstack-discuss at lists.openstack.org" Subject: Re: [charms] modular deployment Hi Florian On Thu, Feb 6, 2020 at 10:03 AM Florian Rommel wrote: Hi, we are starting to look into deploying Openstack with Charms ,however there seems to be NO documentation on how to separate ceph osd and mon out of the Openstack-base bundle. We already have a ceph cluster so we would like to reuse that instead of hyperconverging. Is there any way do do this with charms or do we need to look into openstack ansible? This is not a typical deployment but is something that is possible - you can use the ceph-proxy to wire your existing ceph deployment into charm deployed OpenStack: https://jaas.ai/ceph-proxy Basically it acts as an intermediary between the OpenStack Charms and the existing Ceph deployment. Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 6 10:35:59 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 06 Feb 2020 11:35:59 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1416517604.208171.1580972979203@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> Message-ID: <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> Hi, The logs you provided doesn't seem to indicate any issues. Please provide logs of kuryr-daemon (kuryr-cni pod). Thanks, Michał On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > Hi, > I am trying to run kubelet in arm64 platform > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > 2. Generated kuryr-cni-arm64 container. > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > My master node in x86 installed successfully using devstack > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > COntroller logs: http://paste.openstack.org/show/789209/ > > Please help me to fix the issue > > Veera. > > > > > > > > > Regards, > Veera. From veeraready at yahoo.co.in Thu Feb 6 10:46:02 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 6 Feb 2020 10:46:02 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> Message-ID: <541855276.335476.1580985962108@mail.yahoo.com> Hi mdulko,Please find kuryr-cni logshttp://paste.openstack.org/show/789209/ Regards, Veera. On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: Hi, The logs you provided doesn't seem to indicate any issues. Please provide logs of kuryr-daemon (kuryr-cni pod). Thanks, Michał On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > Hi, > I am trying to run kubelet in arm64 platform > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > 2.    Generated kuryr-cni-arm64 container. > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > My master node in x86 installed successfully using devstack > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > COntroller logs: http://paste.openstack.org/show/789209/ > > Please help me to fix the issue > > Veera. > > > > > > > > > Regards, > Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Feb 6 11:07:24 2020 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 6 Feb 2020 12:07:24 +0100 Subject: [ops][cinder] Moving volume to new type In-Reply-To: <20200130163623.alxwbl3jt5w2bldw@csail.mit.edu> References: <20200130163623.alxwbl3jt5w2bldw@csail.mit.edu> Message-ID: <20200206110724.mj36cproak2t67az@localhost> On 30/01, Jonathan Proulx wrote: > Hi All, > > I'm currently languishing on Mitaka so perhaps further back than help > can reach but...if anyone can tell me if this is something dumb I'm > doing or a know bug in mitaka that's preventing me for movign volumes > from one type to anyother it'd be a big help. > > In the further past I did a cinder backend migration by creating a new > volume type then changing all the existign volume sto the new type. > This is how we got from iSCSI to RBD (probably in Grizzly or Havana). > > Currently I'm starting to move from one RBD pool to an other and seems > like this should work in the same way. 
Both pools and types exist and > I can create volumes in either but when I run: > > openstack volume set --type ssd test-vol > > it rather silently fails to do anything (CLI returns 0), looking into > schedulerlogs I see: > > # yup 2 "hosts" to check > DEBUG cinder.scheduler.base_filter Starting with 2 host(s) get_filtered_objects > DEBUG cinder.scheduler.base_filter Filter AvailabilityZoneFilter returned 2 host(s) get_filtered_objects > DEBUG cinder.scheduler.filters.capacity_filter Space information for volume creation on host nimbus-1 at ssdrbd#ssdrbd (requested / avail): 8/47527.78 host_passes > DEBUG cinder.scheduler.base_filter Filter CapacityFilter returned 2 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/base > DEBUG cinder.scheduler.filters.capabilities_filter extra_spec requirement 'ssdrbd' does not match 'rbd' _satisfies_extra_specs /usr/lib/python2.7/dist- > DEBUG cinder.scheduler.filters.capabilities_filter host 'nimbus-1 at rbd#rbd': free_capacity_gb: 71127.03, pools: None fails resource_type extra_specs req > DEBUG cinder.scheduler.base_filter Filter CapabilitiesFilter returned 1 host(s) get_filtered_objects /usr/lib/python2.7/dist-packages/cinder/scheduler/ > > # after filtering we have one > DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1 at ssdrbd#ssdrbd': free_capacity_gb: 47527.78, pools: None] _get_weighted_candidates > > # but it fails? > ERROR cinder.scheduler.manager Could not find a host for volume 49299c0b-8bcf-4cdb-a0e1-dec055b0e78c with type bc2bc9ad-b0ad-43d2-93db-456d750f194d. Hi, This looks like you didn't say that it was OK to migrate volumes on the retype. Did you set the migration policy on the request to "on-demand": cinder retype --migration-policy on-demand test-vol ssd Cheers, Gorka. > > > successfully creating a volume in ssdrbd is identical to that point, except rather than ERROR on the last line it goes to: > > # Actually chooses 'nimbus-1 at ssdrbd#ssdrbd' as top host > > DEBUG cinder.scheduler.filter_scheduler Filtered [host 'nimbus-1 at ssdrbd#ssdrbd': free_capacity_gb: 47527.8, pools: None] _get_weighted_candidates > DEBUG cinder.scheduler.filter_scheduler Choosing nimbus-1 at ssdrbd#ssdrbd _choose_top_host > > # then goes and makes volume > > DEBUG oslo_messaging._drivers.amqpdriver CAST unique_id: 1b7a9d88402a41f8b889b88a2e2a198d exchange 'openstack' topic 'cinder-volume' _send > DEBUG cinder.scheduler.manager Task 'cinder.scheduler.flows.create_volume.ScheduleCreateVolumeTask;volume:create' (e70dcc3f-7d88-4542-abff-f1a1293e90fb) transitioned into state 'SUCCESS' from state 'RUNNING' with result 'None' _task_receiver > > > Anyone recognize this situation? > > Since I'm retiring the old spinning disks I can also "solve" this on > the Ceph side by changing the crush map such that the old rbd pool > just picks all ssds. So this isn't critical, but in the transitional > period until I have enough SSD capacity to really throw *everything* > over, there are some hot spot volumes it would be really nice to move > this way. 
> > Thanks, > -Jon > From mdulko at redhat.com Thu Feb 6 11:45:49 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 06 Feb 2020 12:45:49 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <541855276.335476.1580985962108@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> Message-ID: Hm, nothing too troubling there too, besides Kubernetes not answering on /healthz endpoint. Are those full logs, including the moment you tried spawning a container there? It seems like you only pasted the fragments with tracebacks regarding failures to read /healthz endpoint of kube-apiserver. That is another problem you should investigate - that causes Kuryr pods to restart. At first I'd disable the healthchecks (remove readinessProbe and livenessProbe from Kuryr pod definitions) and try to get fresh set of logs. On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > Hi mdulko, > Please find kuryr-cni logs > http://paste.openstack.org/show/789209/ > > > Regards, > Veera. > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > Hi, > > The logs you provided doesn't seem to indicate any issues. Please > provide logs of kuryr-daemon (kuryr-cni pod). > > Thanks, > Michał > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > Hi, > > I am trying to run kubelet in arm64 platform > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > 2. Generated kuryr-cni-arm64 container. > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > My master node in x86 installed successfully using devstack > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > Please help me to fix the issue > > > > Veera. > > > > > > > > > > > > > > > > > > Regards, > > Veera. > > From rico.lin.guanyu at gmail.com Thu Feb 6 15:22:24 2020 From: rico.lin.guanyu at gmail.com (Rico Lin) Date: Thu, 6 Feb 2020 23:22:24 +0800 Subject: [tc] February meeting agenda In-Reply-To: References: Message-ID: Hi all The meeting logs are available here: http://eavesdrop.openstack.org/meetings/tc/2020/tc.2020-02-06-14.00.html Notice that some discussion continues at office hour time. Please check. Thank you everyone! On Wed, Feb 5, 2020 at 4:07 PM Rico Lin wrote: > Hello everyone, > > Our next meeting is happening this Thursday (the 6th), and the > agenda is, as usual, on the wiki! 
> > Here is a primer of the agenda for this month: > - Report on large scale sig > - Report on tc/uc merge > - report on the post for the analysis of the survey > - Report on the convo Telemetry > - Report on multi-arch SIG > - report on infra liaison and static hosting > - report on stable branch policy work > - report on the oslo metrics project > - report on the community goals for U and V, py2 drop > - report on release naming > - report on the ideas repo > - report on charter change > - Report on whether SIG guidelines worked > - volunteers to represent OpenStack at the OpenDev advisory board > - Report on the OSF board initiatives > - Dropping side projects: using golden signals > > See you all in meeting:) > > > -- > May The Force of OpenStack Be With You, > > *Rico Lin*irc: ricolin > > -- May The Force of OpenStack Be With You, *Rico Lin*irc: ricolin -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Feb 6 16:04:14 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 6 Feb 2020 10:04:14 -0600 Subject: [nova] Ussuri spec push (and scrub) Message-ID: Hi all. Ahead of spec freeze next Thursday (Feb 13) we've agreed to do a review day next Tuesday, Feb 11th. If you own a blueprint [1] that isn't already Design:Approved (there are 20 as of this writing) and are still interested in landing it in Ussuri, please make sure your spec is review-ready, and plan to be present in #openstack-nova on Tuesday so we can work through any issues quickly. If you are a nova maintainer, please plan as much time as possible on Tuesday to review open specs [2] and discuss them in IRC. As a reminder, soon after spec freeze I would like to scrub the list of Design:Approved blueprints ("if we're going to do this, here's how") to decide which should be Direction:Approved ("we're going to do this in Ussuri") and which should be deferred. I will send a separate email about this after the milestone. Thanks, efried [1] https://blueprints.launchpad.net/nova/ussuri [2] https://review.opendev.org/#/q/project:openstack/nova-specs+path:%255Especs/ussuri/approved/.*+status:open From openstack at fried.cc Thu Feb 6 16:04:35 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 6 Feb 2020 10:04:35 -0600 Subject: [nova][ptg] Vancouver Planning Message-ID: <07f5fe7b-8539-315b-e30f-cf0920ea0ccd@fried.cc> Nova- I've started an etherpad for the Victoria PTG [1]. I've been asked to let the organizers know how much space and time we're going to need within the next few weeks, so please register your attendance and add any topics you know of as soon as possible. Thanks, efried [1] https://etherpad.openstack.org/p/nova-victoria-ptg From mthode at mthode.org Thu Feb 6 16:40:31 2020 From: mthode at mthode.org (Matthew Thode) Date: Thu, 6 Feb 2020 10:40:31 -0600 Subject: [all][requirements][monasca][gnocchi][kolla] ujson, not maintained for over 4 years, has compiler issues In-Reply-To: References: <9f0e9019-7934-caaa-1f06-48b3437609c5@suse.com> <20200205164630.xidg42ego5kvg4ia@mthode.org> <20200205213826.GA898532@zeong> <20200205222143.xortzgo3zgk7d2im@mthode.org> <20200205230338.GB898532@zeong> Message-ID: <20200206164031.6wsm2li34equapno@mthode.org> On 20-02-06 08:35:03, Radosław Piliszek wrote: > Alrighty, folks. > > The summary is: orjson is bad due to requirement of PyO3 to run rust > nightly on non-x86_64. > > Then what about the other contestant, rapidjson? [1] > It's a wrapper over a popular C++ json library. 
> Both wrapper and lib look alive enough to be considered serious. > > [1] https://pypi.org/project/python-rapidjson/ > > -yoctozepto > Yep, rapidjson looks fine to me -- Matthew Thode -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: not available URL: From ignaziocassano at gmail.com Thu Feb 6 19:58:56 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 6 Feb 2020 20:58:56 +0100 Subject: [Neutron][segments] Message-ID: Hello, I am reading about openstack neutron routed provider networks. If I understood well, using segments I can create more subnets within the same provider net vlan id. Correct? In documentation seems I must use different physical network for each segment. I am using only one physical network ...in other words I created an openvswitch with a bridge (br-ex) with an interface on which I receive all my vlans (trunk). Why must I use different physical net? Sorry, but my net skill is poor. Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Thu Feb 6 20:40:59 2020 From: zbitter at redhat.com (Zane Bitter) Date: Thu, 6 Feb 2020 15:40:59 -0500 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? In-Reply-To: References: Message-ID: <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> Hi Tushar, Great question. On 4/02/20 2:53 am, Patil, Tushar wrote: > Hi All, > > In tacker project, we are using heat API to create stack. > Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. OK, so the scaled unit is a stack containing two servers and two ports. > So internally heat will create two nested stacks and add following resources to it:- > > child stack1 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port > > child stack2 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port In fact, Heat will create 3 nested stacks - one child stack for the AutoScalingGroup that contains two Template resources, which each have a grandchild stack (the ones you list above). I'm sure you know this, but I mention it because it makes what I'm about to say below clearer. > Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. > > Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. That's entirely reasonable. > My question is after the stack is created for the first time, will it ever change the nested child stack id? Short answer: no. Long answer: yes ;) In general normal updates and such will never result in the (grand)child stack ID changing. Even if a resource inside the stack fails (so the stack gets left in UPDATE_FAILED state), the next update will just try to update it again in-place. Obviously if you scale down the AutoScalingGroup and then scale it back up again, you'll end up with the different grandchild stack there. The only other time it gets replaced is if you use the "mark unhealthy" command on the template resource in the child stack (i.e. 
the autoscaling group stack), or on the AutoScalingGroup resource itself in the parent stack. If you do this a whole new replacement (grand)child stack will get replaced. Marking only the resources within the grandchild stack (e.g. VDU1) will *not* cause the stack to be replaced, so you should be OK. In code: https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/stack_resource.py#L106-L135 Hope that helps. Feel free to ask if you need more clarification. cheers, Zane. From skaplons at redhat.com Thu Feb 6 20:55:52 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 6 Feb 2020 21:55:52 +0100 Subject: [neutron][ptg] Vancouver attendance and planning Message-ID: Hi Neutrinos, As You probably know, Victoria PTG will be in Vancouver in June. I have been asked by organisers how much space and time we will need for Neutron so I started etherpad [1]. Please put there Your name if You are planning to go to Vancouver. Even if it’s not confirmed yet. I need to have number of people before Sunday, March 2nd. Also if You have any ideas about topics which we should discuss there, please write them in this etherpad too. [1] https://etherpad.openstack.org/p/neutron-victoria-ptg — Slawek Kaplonski Senior software engineer Red Hat From kennelson11 at gmail.com Thu Feb 6 21:00:12 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 6 Feb 2020 13:00:12 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #1 In-Reply-To: References: Message-ID: Woohoo! Looks awesome Trinh! Thanks for getting things started! -Kendall (diablo_rojo) On Wed, Feb 5, 2020 at 5:25 PM Trinh Nguyen wrote: > Thank Kendall for the effort. Searchlight has landed the doc on master :D > [1][2] > > [1] https://review.opendev.org/#/c/705968/ > [2] > https://docs.openstack.org/searchlight/latest/contributor/contributing.html > > > > On Wed, Feb 5, 2020 at 4:20 AM Kendall Nelson > wrote: > >> >> Hello All! >> >> At last a dedicated update solely to the Contrib & PTL Docs community >> goal! Get excited :) >> >> At this point the goal has been accepted[1], and the template[2] has been >> created and merged! >> >> So, the next step is for all projects to use the cookiecutter template[2] >> and fill in the extra details after you generate the .rst that should auto >> populate some of the information. >> >> As you are doing that, please assign yourself the associated tasks in the >> story I created[3]. >> >> If you have any questions or concerns, please let me know! >> >> -Kendall Nelson (diablo_rojo) >> >> >> [1] Goal: >> https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html >> [2] Docs Template: >> https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst >> [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 >> >> >> > > -- > *Trinh Nguyen* > *www.edlab.xyz * > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gmann at ghanshyammann.com Fri Feb 7 00:23:51 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 06 Feb 2020 18:23:51 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> Message-ID: <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- > ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > > I am writing at top now for easy ready. > > > > * Gate status: > > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for > - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > > the tempest tox env with master u-c needs to be fixed[2]. > > While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for > stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems > to cap the upper-constraint like it was proposed by chandan[1]. > > NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain > such cap only for Tempest & its plugins till stable/rocky EOL. Updates: ====== Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. New bug: ======= Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True try to find plugins on py3 and fail. The solution is to replace the 'all-plugin' tox env to 'all'. I am fixing it for designate. But due to grenade job nature, we need to reverse the fixes here, first, merge the stable/stein and then stable/train. Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. -gmann > > Background on this issue: > -------------------------------- > > Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on > stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with > stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. 
> > I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. > ------------------------------------------- > > [1] https://review.opendev.org/#/c/705685/ > [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 > [3] https://review.opendev.org/#/c/705089 > [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml > > -gmann > > > > - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > > - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > > > [1] https://review.opendev.org/#/c/705089/ > > [2] https://review.opendev.org/#/c/705870/ > > > > -gmann > > > > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > > Hello Everyone, > > > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > > of 'EOLing python2 drama' in subject :). > > > > > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 > > > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > > > to install the latest neutron-lib and failing. > > > > > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > > > from master and kepe testing py2 on stable bracnhes. > > > > > > > > We have two way to fix this: > > > > > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > > > I am trying this first[2] and testing[3]. > > > > > > > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > > > > > Tried option#2: > > > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > > > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > > > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > > > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > > > or distro-specific job like centos7 etc where we have < py3.6. > > > > > > Overall this option did not work well as this need lot of hacks depends on the distro. 
I am dropping this option for our > > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > > Testing your cloud with the latest Tempest is the best possible way. > > > > > > Going with option#1: > > > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > > > for all possible distro/py version. > > > > > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > > > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > > > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > > > on >py3.6 env(venv or separate node). > > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > > > 2.Modify Tempest tox env basepython to py3 > > > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > > > like fedora or future distro > > > *Patch is up- https://review.opendev.org/#/c/704688/2 > > > > > > 3. Use compatible Tempest & its plugin tag for distro having > > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > > > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > > > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > > > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > > > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > > in-tree plugins for their stable branch testing. > > > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > > > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > > > in neutron-vpnaas case. This will be easy for future maintenance also. > > > > > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. 
> > > > > > [1] https://review.opendev.org/#/c/703476/ > > > [2] https://review.opendev.org/#/c/703011/ > > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > > > -gmann > > > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > > [2] https://review.opendev.org/#/c/703011/ > > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From rosmaita.fossdev at gmail.com Fri Feb 7 02:40:02 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 6 Feb 2020 21:40:02 -0500 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting Message-ID: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> Liang Fang's volume-local-cache spec has gotten stuck because the Cinder team doesn't want to approve something that won't get approved on the Nova side and vice-versa. We discussed the spec at this week's Cinder meeting and there's sufficient interest to continue with it. In order to keep things moving along, we'd like to have a cross-project video conference next week instead of waiting for the PTG. I think we can get everything covered in a half-hour. I've put together a doodle poll to find a reasonable day and time: https://doodle.com/poll/w8ttitucgyqwe5yc Please take the poll before 17:00 UTC on Saturday 8 February. I'll send out an announcement shortly after that letting you know what time's been selected. I'll send out connection info once the meeting's been set. The meeting will be recorded and interested parties who can't make it can always leave notes on the discussion etherpad. Info: - cinder spec: https://review.opendev.org/#/c/684556/ - nova spec: https://review.opendev.org/#/c/689070/ - etherpad: https://etherpad.openstack.org/p/volume-local-cache cheers, brian From chkumar246 at gmail.com Fri Feb 7 04:46:00 2020 From: chkumar246 at gmail.com (Chandan kumar) Date: Fri, 7 Feb 2020 10:16:00 +0530 Subject: [tripleo] no rechecks In-Reply-To: References: Message-ID: On Thu, Feb 6, 2020 at 3:12 PM Chandan kumar wrote: > > Hello, > > TripleO jobs are failing with following bug. > > upstream tripleo jobs using os_tempest failing with ERROR: No matching > distribution found for oslo.db===7.0.0 -> > https://bugs.launchpad.net/tripleo/+bug/1862134 > > Fix is in progress: https://review.opendev.org/#/c/706196/ > > We need this patch to land before it's safe to blindly recheck. > We appreciate your patience while we resolve the issues. > Gate is fixed now. Please go ahead with rechecks. Thanks, Chandan Kumar From agarwalvishakha18 at gmail.com Fri Feb 7 07:20:33 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Fri, 7 Feb 2020 12:50:33 +0530 Subject: [keystone] Keystone Team Update - Week of 3 February 2020 Message-ID: # Keystone Team Update - Week of 3 February 2020 ## News ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [1] [1] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 13 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 27 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. 
### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli * Special Requests https://review.opendev.org/#/c/705887/ Drop py3.5 from tempest plugins ## Bugs This week we opened 2 new bugs and closed 3. Bugs opened (2) Bug #1861695 (keystoneauth:Undecided): Programming error choosing an endpoint. - Opened by Raphaël Droz https://bugs.launchpad.net/keystoneauth/+bug/1861695 Bug #1862035 (keystoneauth:Undecided): Make possible to pass TCP_USER_TIMEOUT in keystoneauth1 session - Opened by Anton Kurbatov https://bugs.launchpad.net/keystoneauth/+bug/1862035 Bugs closed (3) Bug #1756190 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1756190 Bug #1853839 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1853839 Bug #1856286 (python-keystoneclient:Undecided) https://bugs.launchpad.net/python-keystoneclient/+bug/1856286 ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html Next week is spec freeze. One month from now is feature proposal freeze, by which point all code for new features should be proposed and ready for review - no WIPs. This will give us 4 weeks to review. ## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From katonalala at gmail.com Fri Feb 7 08:22:12 2020 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 7 Feb 2020 09:22:12 +0100 Subject: [Neutron][segments] In-Reply-To: References: Message-ID: Hi, I assume you refer to this documentation (just to be sure that others have the same base): https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html Based on that (and my limited knowledge of the usecases of this feature): To avoid large l2 tenant networks and small tenant networks which the user has to know and select, routed provider nets "provide" the option to use a provider network (characterized by a segment: physical network - segmentation type - segmentation id) and you can add more segments (as in the example for example with new VLAN id, but on the same physnet on other compute) and add extra subnet range for it which is specific for that segment (i.e.: a group of computes). With this the user will have 1 network to boot VMs on, but openstack (neutron - nova - placement) will decide where to put the VM which segment and subnet to use. I mentioned placement: The feature should work in a way that based on information provided by neutron (during segment & subnet creation) placement knows how many free IP addresses are available on a segment, and based on this places VMs to hosts, that is not working as I know, see: https://review.opendev.org/656885 & https://review.opendev.org/665155 Regards Lajos Ignazio Cassano ezt írta (időpont: 2020. febr. 6., Cs, 21:04): > Hello, > I am reading about openstack neutron routed provider networks. > If I understood well, using segments I can create more subnets within the > same provider net vlan id. > Correct? 
> In documentation seems I must use different physical network for each > segment. > I am using only one physical network ...in other words I created an > openvswitch with a bridge (br-ex) with an interface on which I receive all > my vlans (trunk). > Why must I use different physical net? > Sorry, but my net skill is poor. > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From balazs.gibizer at est.tech Fri Feb 7 08:46:28 2020 From: balazs.gibizer at est.tech (=?iso-8859-1?Q?Bal=E1zs_Gibizer?=) Date: Fri, 7 Feb 2020 08:46:28 +0000 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting In-Reply-To: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> References: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> Message-ID: <1581065185.279185.6@est.tech> On Thu, Feb 6, 2020 at 21:40, Brian Rosmaita wrote: > Liang Fang's volume-local-cache spec has gotten stuck because the > Cinder team doesn't want to approve something that won't get approved > on the Nova side and vice-versa. > > We discussed the spec at this week's Cinder meeting and there's > sufficient interest to continue with it. In order to keep things > moving along, we'd like to have a cross-project video conference next > week instead of waiting for the PTG. > > I think we can get everything covered in a half-hour. I've put > together a doodle poll to find a reasonable day and time: > https://doodle.com/poll/w8ttitucgyqwe5yc Something is wrong with the poll. I'm sitting in UTC+1 and it is offering choices like Monday 8:30 am - 9:00 am for me. But the description text says 13:30-14:00 UTC Monday-Thursday. T Monday, Wednesday, Thursday, 13:30 - 14:00 UTC works for me. The 02:30-03:00 UTC slot is just in the middle of my night. Cheers, gibi > > Please take the poll before 17:00 UTC on Saturday 8 February. I'll > send out an announcement shortly after that letting you know what > time's been selected. > > I'll send out connection info once the meeting's been set. The > meeting will be recorded and interested parties who can't make it can > always leave notes on the discussion etherpad. > > Info: > - cinder spec: https://review.opendev.org/#/c/684556/ > - nova spec: https://review.opendev.org/#/c/689070/ > - etherpad: https://etherpad.openstack.org/p/volume-local-cache > > cheers, > brian > From ignaziocassano at gmail.com Fri Feb 7 10:32:05 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 7 Feb 2020 11:32:05 +0100 Subject: [Neutron][segments] In-Reply-To: References: Message-ID: Many thanks, Layos Il giorno ven 7 feb 2020 alle ore 09:22 Lajos Katona ha scritto: > Hi, > > I assume you refer to this documentation (just to be sure that others have > the same base): > https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html > > Based on that (and my limited knowledge of the usecases of this feature): > To avoid large l2 tenant networks and small tenant networks which the user > has to know and select, > routed provider nets "provide" the option to use a provider network > (characterized by a segment: physical network - segmentation type - > segmentation id) > and you can add more segments (as in the example for example with new VLAN > id, but on the same physnet on other compute) and add extra subnet range > for it > which is specific for that segment (i.e.: a group of computes). 
> With this the user will have 1 network to boot VMs on, but openstack > (neutron - nova - placement) will decide where to put the VM which segment > and subnet to use. > > I mentioned placement: The feature should work in a way that based on > information provided by neutron (during segment & subnet creation) > placement knows > how many free IP addresses are available on a segment, and based on this > places VMs to hosts, that is not working as I know, see: > https://review.opendev.org/656885 & https://review.opendev.org/665155 > > Regards > Lajos > > Ignazio Cassano ezt írta (időpont: 2020. febr. > 6., Cs, 21:04): > >> Hello, >> I am reading about openstack neutron routed provider networks. >> If I understood well, using segments I can create more subnets within the >> same provider net vlan id. >> Correct? >> In documentation seems I must use different physical network for each >> segment. >> I am using only one physical network ...in other words I created an >> openvswitch with a bridge (br-ex) with an interface on which I receive all >> my vlans (trunk). >> Why must I use different physical net? >> Sorry, but my net skill is poor. >> Ignazio >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 7 12:19:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 06:19:04 -0600 Subject: [all] Nominations for the "W" release name In-Reply-To: References: Message-ID: <9c93db20-1912-cc6d-39da-1afb2feb4f52@gmx.com> We're down to the last few hours for naming the W release. Please add any ideas before the deadline. We will then have some time to discuss anything with the names and get the poll set up for election the following week. Thanks! Sean On 1/21/20 9:48 AM, Sean McGinnis wrote: > Hello all, > > We get to be a little proactive this time around and get the release > name chosen for the "W" release. Time to start thinking of good names > again! > > Process Changes > --------------- > > There are a couple of changes to be aware of with our naming process. In > the past, we had always based our naming criteria on something > geographically local to the Summit location. With the event changes to > no longer have two large Summit-type events per year, we have tweaked > our process to open things up and make it hopefully a little easier to > pick a good name that the community likes. > > There are a couple of significant changes. First, names can now be > proposed for anything that starts with the appropriate letter. It is no > longer tied to a specific geographic region. Second, in order to > simplify the process, the electorate for the poll will be the OpenStack > Technical Committee. Full details of the release naming process can be > found here: > > https://governance.openstack.org/tc/reference/release-naming.html > > Name Selection > -------------- > > With that, the nomination period for the "W" release name is now open. > Please add suitable names to: > > https://wiki.openstack.org/wiki/Release_Naming/W_Proposals > > We will accept nominations until February 7, 2020 at 23:59:59 UTC. We > will then have a brief period for any necessary discussions and to get > the poll set up, with the TC electorate voting starting by February 17, > 2020 and going no longer than February 23, 2020. > > Based on past timing with trademark and copyright reviews, we will > likely have an official release name by mid to late March. > > Happy naming! 
> > Sean > > From sean.mcginnis at gmx.com Fri Feb 7 13:50:17 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 07:50:17 -0600 Subject: [release] Release countdown for week R-13, February 10-14 Message-ID: <20200207135017.GA1488525@sm-workstation> Development Focus ----------------- The Ussuri-2 milestone is next week, on February 13! Ussuri-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/ussuri/schedule.html for details. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTL's and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed. Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. Next week is also the deadline to freeze the contents of the final release. All new 'Ussuri' deliverables need to have a deliverable file (https://opendev.org/openstack/releases/src/branch/master/deliverables/ussuri ) and need to have done a release by milestone-2. The following new deliverables have not had a release yet, and will not be included in Ussuri unless a release is requested for them in the coming week: - adjutant, adjutant-ui, python-adjutantclient (Adjutant) - barbican-ui (Barbican) - js-openstack-lib (OpenStackSDK) - ovn-octavia-provider (Neutron) - sushy-cli (Ironic) - tripleo-operator-ansible (TripleO) Changes proposing those deliverables for inclusion in Ussuri have been posted, please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Ussuri, or -1 if you need one more cycle to be ready. Upcoming Deadlines & Dates -------------------------- Ussuri-2 Milestone: February 13 (R-13 week) From sean.mcginnis at gmx.com Fri Feb 7 13:52:14 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 7 Feb 2020 07:52:14 -0600 Subject: [Release-job-failures] Release of openstack/mox3 for ref refs/tags/1.0.0 failed In-Reply-To: References: Message-ID: <20200207135214.GB1488525@sm-workstation> On Fri, Feb 07, 2020 at 01:16:38PM +0000, zuul at openstack.org wrote: > Build failed. > > - release-openstack-python https://zuul.opendev.org/t/openstack/build/720dc1833f464e2db76b90d1c8437883 : SUCCESS in 3m 48s > - announce-release https://zuul.opendev.org/t/openstack/build/d85b18ed6e2a47f98e96c1ad168ae98a : FAILURE in 4m 00s > - propose-update-constraints https://zuul.opendev.org/t/openstack/build/c2abc022840846c7a87b5804580492ee : SUCCESS in 3m 51s > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures There was a temporary SMTP failure in sending the release announcement for mox3. 
All other release activities were successful, so I think this error can safely be ignored. Sean From emilien at redhat.com Fri Feb 7 15:18:24 2020 From: emilien at redhat.com (Emilien Macchi) Date: Fri, 7 Feb 2020 10:18:24 -0500 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Thanks for joining, and the great questions, I hope you learned something, and that we can do it again soon. Here is the recording: https://bluejeans.com/s/vTSAY Slides: https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: > Of course it'll be recorded and the link will be available for everyone. > > On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, > wrote: > >> Hi folks, >> >> On Friday I'll do a deep-dive on where we are with container tools. >> It's basically an update on the removal of Paunch, what will change etc. >> >> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >> questions or give feedback. >> >> https://bluejeans.com/6007759543 >> Link of the slides: >> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >> -- >> Emilien Macchi >> > -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From ccamacho at redhat.com Fri Feb 7 15:23:24 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Fri, 7 Feb 2020 16:23:24 +0100 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Thanks Emilien for the session, I wasn't able to be present but I'll proceed to edit/publish it in the TripleO youtube channel. Thanks! On Fri, Feb 7, 2020 at 4:21 PM Emilien Macchi wrote: > Thanks for joining, and the great questions, I hope you learned something, > and that we can do it again soon. > > Here is the recording: > https://bluejeans.com/s/vTSAY > Slides: > https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit > > > > On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: > >> Of course it'll be recorded and the link will be available for everyone. >> >> On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, >> wrote: >> >>> Hi folks, >>> >>> On Friday I'll do a deep-dive on where we are with container tools. >>> It's basically an update on the removal of Paunch, what will change etc. >>> >>> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >>> questions or give feedback. >>> >>> https://bluejeans.com/6007759543 >>> Link of the slides: >>> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >>> -- >>> Emilien Macchi >>> >> > > -- > Emilien Macchi > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gryf73 at gmail.com Fri Feb 7 15:25:02 2020 From: gryf73 at gmail.com (Roman Dobosz) Date: Fri, 7 Feb 2020 16:25:02 +0100 Subject: [kuryr] macvlan driver looking for an owner Message-ID: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> Hi, Recently we migrated most of the drivers and handler to OpenStackSDK, instead of neutron-client[1], and have a plan to drop neutron-client usage. One of the drivers - MACVLAN based interfaces for nested containers - wasn't migrated due to lack of confidence backed up with sufficient tempest tests. Therefore we are looking for a maintainer, who will take care about both - migration to the openstacksdk (which I can help with) and to provide appropriate environment and tests. 
In case there is no interest on continuing support for this driver we will deprecate it and remove from the source tree possibly in this release. [1] https://review.opendev.org/#/q/topic:bp/switch-to-openstacksdk -- Cheers, Roman Dobosz gryf at freenode From Albert.Braden at synopsys.com Fri Feb 7 19:15:58 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 7 Feb 2020 19:15:58 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' Message-ID: If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see "Loaded 2 Fernet keys" every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Fri Feb 7 20:13:57 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Fri, 7 Feb 2020 12:13:57 -0800 Subject: [manila][ptg] Victoria Cycle PTG in Vancouver - Planning Message-ID: Hello Zorillas and other interested Stackers, We've requested space to gather at the Open Infrastructure PTG event in Vancouver, BC between June 8th-11th 2020. This event gives us the opportunity to catch up in person (and via telepresence) and discuss the next set of goals for manila and its ecosystem of projects. As has been the norm, there's now a planning etherpad for us to collect topics and vote upon [1]. Depending on how much space/time we are budgeted, we will select and schedule these topics. Please take a look and add your topics to the etherpad. Thanks, Goutham [1] https://etherpad.openstack.org/p/vancouver-ptg-manila-planning From Albert.Braden at synopsys.com Fri Feb 7 22:25:46 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Fri, 7 Feb 2020 22:25:46 +0000 Subject: Virtio memory balloon driver In-Reply-To: <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: I opened a bug: https://bugs.launchpad.net/nova/+bug/1862425 -----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. 
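For reference, mem_stats_period_seconds lives in the [libvirt] section of nova.conf on the compute host; a minimal sketch (it controls the balloon device's memory statistics polling period, and per this thread setting it to 0 may not remove the device itself):

    [libvirt]
    # 0 disables memory usage statistics polling for the virtio memory balloon
    mem_stats_period_seconds = 0
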
Spawning one giant VM that uses all the resources on the host is not a typical use case; in general people move to ironic when they need a VM that large. I unfortunately don't have time to look into this right now, but we can likely add a way to disable the balloon device, and if you remind me in a day or two I can try and see why mem_stats_period_seconds = 0 is not working for you. Looking at https://opendev.org/openstack/nova/src/branch/master/nova/virt/libvirt/driver.py#L5842-L5852 it should work, but libvirt adds extra elements to the XML after we generate it and fills in some fields. It's possible that libvirt is adding it, and when we don't want the device we need to explicitly disable it in some way. If that is the case we could track this as a bug and potentially backport it. > > Console log: https://f.perl.bot/p/njvgbm > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable.
> Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From ignaziocassano at gmail.com Sat Feb 8 14:55:21 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 15:55:21 +0100 Subject: [Queens][neutron] multiple networks with same vlan id Message-ID: Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. I tried with success on vcenter and it works. On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. Any workaround ? I am also trying with segments but they do not seem to fit my case. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 14:59:03 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 15:59:03 +0100 Subject: [quuens] neutron vxlan to vxlan Message-ID: Hello, I have an openstack installation and a vcenter installation with nsxv. Vcenter is not under openstack Any solution for vxlan to vxlan communication between them? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From donny at fortnebula.com Sat Feb 8 16:21:53 2020 From: donny at fortnebula.com (Donny Davis) Date: Sat, 8 Feb 2020 11:21:53 -0500 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: References: Message-ID: Are you trying to create a new network each time? There is probably some different terminology in openstack vs vmware A network in openstack defines the underlying L2 switching device/ vlan/ vxlan / gre .. etc You can add as many subnets to a network as you would like to, and host routes can be added for each subnet. 
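For illustration, a rough sketch of that with the CLI (the network name, physnet label, VLAN ID, CIDRs and host route below are made up, not taken from your setup):

    openstack network create net-vlan100 --provider-network-type vlan \
        --provider-physical-network physnet1 --provider-segment 100
    openstack subnet create subnet-a --network net-vlan100 \
        --subnet-range 10.10.1.0/24 \
        --host-route destination=192.168.100.0/24,gateway=10.10.1.254
    openstack subnet create subnet-b --network net-vlan100 \
        --subnet-range 10.10.2.0/24
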
On Sat, Feb 8, 2020 at 10:00 AM Ignazio Cassano wrote: > > Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. > I tried with success on vcenter and it works. > On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. > Any workaround ? > I am also trying with segments but they do not seem to fit my case. > Regards > Ignazio > > -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From romain.chanu at univ-lyon1.fr Sat Feb 8 17:11:31 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Sat, 8 Feb 2020 17:11:31 +0000 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: References: Message-ID: <1581181900656.35220@univ-lyon1.fr> Hello, In the case of Openstack the vlan driver permits to isolate each project's layer 2. This way each project's L3 can overlaps without any problem. You can create many L3 segments within only one VLAN ID. I do not use vcenter but for old ESXi it was "possible to create" many vswitches whom contains same vlan ID, but this is very different and vmware misuses term of switch. Best regards, Romain . ________________________________ From: Ignazio Cassano Sent: Saturday, February 8, 2020 3:55 PM To: openstack-discuss Subject: [Queens][neutron] multiple networks with same vlan id Hello, I trying to create multiple networks with the same vlan id and different address but it is impossibile because neutron returns vlan id is already present. I tried with success on vcenter and it works. On openstack I can create more subnets with different adresses under the same vlan id, but instances receive static routes from dhcp for all subnets. Any workaround ? I am also trying with segments but they do not seem to fit my case. Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From romain.chanu at univ-lyon1.fr Sat Feb 8 17:29:42 2020 From: romain.chanu at univ-lyon1.fr (CHANU ROMAIN) Date: Sat, 8 Feb 2020 17:29:42 +0000 Subject: [quuens] neutron vxlan to vxlan In-Reply-To: References: Message-ID: <1581182992368.92191@univ-lyon1.fr> Hello, You could do it if you use linuxbridge driver, arp_responder has to be set to false. VMware's VNI has to be set in multicast mode, then if OS et NSX use same multicast IP VM should be able to communicate. If your ESX et OS's hypervisors are not conencted to same network, you have to enable PIM in your router. Anyway this configuration is a bit dirty and you should look a proper way to make it works over L3. A VPN inside each subnets with static routes is cleaner and less risky. Best regards, Romain ________________________________ From: Ignazio Cassano Sent: Saturday, February 8, 2020 3:59 PM To: openstack-discuss Subject: [quuens] neutron vxlan to vxlan Hello, I have an openstack installation and a vcenter installation with nsxv. Vcenter is not under openstack Any solution for vxlan to vxlan communication between them? Regards Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:48:45 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:48:45 +0100 Subject: [quuens] neutron vxlan to vxlan In-Reply-To: <1581182992368.92191@univ-lyon1.fr> References: <1581182992368.92191@univ-lyon1.fr> Message-ID: Thanks Roman. 
I think secondo way you suggested is better than first. Ignazio Il Sab 8 Feb 2020, 18:29 CHANU ROMAIN ha scritto: > Hello, > > > You could do it if you use linuxbridge driver, arp_responder has to be set > to false. VMware's VNI has to be set in multicast mode, then if OS et NSX > use same multicast IP VM should be able to communicate. > > > If your ESX et OS's hypervisors are not conencted to same network, you > have to enable PIM in your router. > > > Anyway this configuration is a bit dirty and you should look a proper way > to make it works over L3. A VPN inside each subnets with static routes is > cleaner and less risky. > > > Best regards, > > Romain > > > > ------------------------------ > *From:* Ignazio Cassano > *Sent:* Saturday, February 8, 2020 3:59 PM > *To:* openstack-discuss > *Subject:* [quuens] neutron vxlan to vxlan > > Hello, I have an openstack installation and a vcenter installation with > nsxv. > Vcenter is not under openstack > Any solution for vxlan to vxlan communication between them? > Regards > Ignazio > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:53:54 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:53:54 +0100 Subject: [Queens][neutron] multiple networks with same vlan id. In-Reply-To: References: Message-ID: Hello Donny, I do not want to see long routing tables on instances. An instance in a network with many subnets, receives routing tables for all subnets. Ignazio Il Sab 8 Feb 2020, 17:22 Donny Davis ha scritto: > Are you trying to create a new network each time? > There is probably some different terminology in openstack vs vmware > > A network in openstack defines the underlying L2 switching device/ > vlan/ vxlan / gre .. etc > > You can add as many subnets to a network as you would like to, and > host routes can be added for each subnet. > > On Sat, Feb 8, 2020 at 10:00 AM Ignazio Cassano > wrote: > > > > Hello, I trying to create multiple networks with the same vlan id and > different address but it is impossibile because neutron returns vlan id is > already present. > > I tried with success on vcenter and it works. > > On openstack I can create more subnets with different adresses under the > same vlan id, but instances receive static routes from dhcp for all subnets. > > Any workaround ? > > I am also trying with segments but they do not seem to fit my case. > > Regards > > Ignazio > > > > > > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 8 17:56:31 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 8 Feb 2020 18:56:31 +0100 Subject: [Queens][neutron] multiple networks with same vlan id In-Reply-To: <1581181900656.35220@univ-lyon1.fr> References: <1581181900656.35220@univ-lyon1.fr> Message-ID: Many thanks. Ignazio Il Sab 8 Feb 2020, 18:11 CHANU ROMAIN ha scritto: > Hello, > > > In the case of Openstack the vlan driver permits to isolate each project's > layer 2. This way each project's L3 can overlaps without any problem. You > can create many L3 segments within only one VLAN ID. > > > I do not use vcenter but for old ESXi it was "possible to create" many > vswitches whom contains same vlan ID, but this is very different and vmware > misuses term of switch. > > > Best regards, > > Romain > . 
> ------------------------------ > *From:* Ignazio Cassano > *Sent:* Saturday, February 8, 2020 3:55 PM > *To:* openstack-discuss > *Subject:* [Queens][neutron] multiple networks with same vlan id > > Hello, I trying to create multiple networks with the same vlan id and > different address but it is impossibile because neutron returns vlan id is > already present. > I tried with success on vcenter and it works. > On openstack I can create more subnets with different adresses under the > same vlan id, but instances receive static routes from dhcp for all subnets. > Any workaround ? > I am also trying with segments but they do not seem to fit my case. > Regards > Ignazio > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Feb 8 18:00:01 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 8 Feb 2020 18:00:01 +0000 Subject: [Queens][neutron] multiple networks with same vlan id. In-Reply-To: References: Message-ID: <20200208180001.gjegpvkw3b6mxg3h@yuggoth.org> On 2020-02-08 18:53:54 +0100 (+0100), Ignazio Cassano wrote: [...] > I do not want to see long routing tables on instances. An instance > in a network with many subnets, receives routing tables for all > subnets. [...] And as we've discussed recently on this list, if you're relying on DHCP there's a hard limit to the size of the route list it can provide in leases anyway, so the shorter the routing table per Neutron network, the better. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rosmaita.fossdev at gmail.com Sat Feb 8 18:28:22 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 8 Feb 2020 13:28:22 -0500 Subject: [cinder][nova] volume-local-cache proposal cross-project meeting In-Reply-To: <1581065185.279185.6@est.tech> References: <8d222a38-71f3-1603-36cc-83dedec9f705@gmail.com> <1581065185.279185.6@est.tech> Message-ID: On 2/7/20 3:46 AM, Balázs Gibizer wrote: > > > On Thu, Feb 6, 2020 at 21:40, Brian Rosmaita > wrote: >> Liang Fang's volume-local-cache spec has gotten stuck because the >> Cinder team doesn't want to approve something that won't get approved >> on the Nova side and vice-versa. >> >> We discussed the spec at this week's Cinder meeting and there's >> sufficient interest to continue with it. In order to keep things >> moving along, we'd like to have a cross-project video conference next >> week instead of waiting for the PTG. >> >> I think we can get everything covered in a half-hour. I've put >> together a doodle poll to find a reasonable day and time: >> https://doodle.com/poll/w8ttitucgyqwe5yc > > Something is wrong with the poll. I'm sitting in UTC+1 and it is > offering choices like Monday 8:30 am - 9:00 am for me. But the > description text says 13:30-14:00 UTC Monday-Thursday. > T > Monday, Wednesday, Thursday, 13:30 - 14:00 UTC works for me. The > 02:30-03:00 UTC slot is just in the middle of my night. Thanks for letting me know. I get inconsistent TZ results from the interface, too. > > Cheers, > gibi > > >> >> Please take the poll before 17:00 UTC on Saturday 8 February. I'll >> send out an announcement shortly after that letting you know what >> time's been selected. >> >> I'll send out connection info once the meeting's been set. The >> meeting will be recorded and interested parties who can't make it can >> always leave notes on the discussion etherpad. 
>> >> Info: >> - cinder spec: https://review.opendev.org/#/c/684556/ >> - nova spec: https://review.opendev.org/#/c/689070/ >> - etherpad: https://etherpad.openstack.org/p/volume-local-cache >> >> cheers, >> brian >> > > From rosmaita.fossdev at gmail.com Sat Feb 8 18:46:23 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Sat, 8 Feb 2020 13:46:23 -0500 Subject: [cinder][nova] volume-local-cache meeting day/time set Message-ID: I picked the day/time with the most votes (9/10, sorry Rajat), but as gibi pointed out in the other thread, it may not have been obvious what you actually voted for. The good news is that the selected day is late in the week, so if you look at this time and say WTF?, let me know and we can scramble to try to find another day/time (though hopefully not--we're trying to find a time that works stretching from Minneapolis to Shanghai). Meeting time: 13:30-14:00 UTC Thursday 13 February 2020 Location: https://bluejeans.com/3228528973 Topic: volume-local-cache specs: cinder: https://review.opendev.org/#/c/684556/ nova: https://review.opendev.org/#/c/689070/ Etherpad: https://etherpad.openstack.org/p/volume-local-cache The meeting will be recorded. Feel free to leave comments on the etherpad if you can't make it but want something addressed. cheers, brian From rajatdhasmana at gmail.com Sat Feb 8 19:15:59 2020 From: rajatdhasmana at gmail.com (Rajat Dhasmana) Date: Sun, 9 Feb 2020 00:45:59 +0530 Subject: [cinder][nova] volume-local-cache meeting day/time set In-Reply-To: References: Message-ID: Yes, I was suspicious about the options as timings showed one as AM and the other PM. This timing certainly suits me. And thanks for the concern. Regards Rajat Dhasmana On Sun, Feb 9, 2020, 12:20 AM Brian Rosmaita wrote: > I picked the day/time with the most votes (9/10, sorry Rajat), but as > gibi pointed out in the other thread, it may not have been obvious what > you actually voted for. The good news is that the selected day is late > in the week, so if you look at this time and say WTF?, let me know and > we can scramble to try to find another day/time (though hopefully > not--we're trying to find a time that works stretching from Minneapolis > to Shanghai). > > Meeting time: 13:30-14:00 UTC Thursday 13 February 2020 > Location: https://bluejeans.com/3228528973 > Topic: volume-local-cache specs: > cinder: https://review.opendev.org/#/c/684556/ > nova: https://review.opendev.org/#/c/689070/ > Etherpad: https://etherpad.openstack.org/p/volume-local-cache > > The meeting will be recorded. Feel free to leave comments on the > etherpad if you can't make it but want something addressed. > > cheers, > brian > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From harishkumarivaturi at gmail.com Sat Feb 8 19:16:06 2020 From: harishkumarivaturi at gmail.com (HARISH KUMAR Ivaturi) Date: Sat, 8 Feb 2020 20:16:06 +0100 Subject: Regarding OpenStack Message-ID: Hi I am Harish Kumar, Master Student at BTH, Karlskrona, Sweden. I have started my Master thesis at BTH and my thesis topic is Performance evaluation of OpenStack with HTTP/3. My solutions will take some time (few months) and i would like to request you that could you add me as a contributor to your github repository , so after completing my thesis i could push my codes in that repository. You can contact me for further details. Thanks and Regards Harish Kumar -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mnaser at vexxhost.com Sat Feb 8 20:27:28 2020 From: mnaser at vexxhost.com (Mohammed Naser) Date: Sat, 8 Feb 2020 21:27:28 +0100 Subject: Regarding OpenStack In-Reply-To: References: Message-ID: Hi Harish, Thanks for your interest, we use GitHub inside OpenStack simply to host a read-only mirror of our code, we use Gerrit to manage contributions, you can read more here: https://docs.openstack.org/contributors/ Regards, Mohammed On Sat, Feb 8, 2020 at 8:33 PM HARISH KUMAR Ivaturi wrote: > > Hi > I am Harish Kumar, Master Student at BTH, Karlskrona, Sweden. I have started my Master thesis at BTH and my thesis topic is Performance evaluation of OpenStack with HTTP/3. > My solutions will take some time (few months) and i would like to request you that could you add me as a contributor to your github repository , so after completing my thesis i could push my codes in that repository. > > You can contact me for further details. > > Thanks and Regards > Harish Kumar -- Mohammed Naser — vexxhost ----------------------------------------------------- D. 514-316-8872 D. 800-910-1726 ext. 200 E. mnaser at vexxhost.com W. https://vexxhost.com From mesutaygn at gmail.com Sat Feb 8 22:09:50 2020 From: mesutaygn at gmail.com (=?UTF-8?B?bWVzdXQgYXlnw7xu?=) Date: Sun, 9 Feb 2020 01:09:50 +0300 Subject: openstack-dpdk installation Message-ID: Hey, I am working over dpdk installation with openstack ocata release. How can I use the neutron-openvswitch-agent-dpdk or anything else instead of neutron-openvswitch-agent? Which can I choose the correct packages with os dpdk versions? I couldn't use the devstack versions!! Best regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Sun Feb 9 00:39:42 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 08 Feb 2020 18:39:42 -0600 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> Message-ID: <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> ---- On Thu, 06 Feb 2020 18:23:51 -0600 Ghanshyam Mann wrote ---- > ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- > > ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- > > > I am writing at top now for easy ready. > > > > > > * Gate status: > > > - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for > > - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. > > > - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. > > > - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates > > > the tempest tox env with master u-c needs to be fixed[2]. > > > > While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for > > stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. 
Only option seems > > to cap the upper-constraint like it was proposed by chandan[1]. > > > > NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain > > such cap only for Tempest & its plugins till stable/rocky EOL. > > Updates: > ====== > Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition > and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches > to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. All the fixes are merged now and good to recheck on stable branch jobs. I am working on bug#1862240 now. - https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) -gmann > > New bug: > ======= > Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 > > Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. > plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True > try to find plugins on py3 and fail. > The solution is to replace the 'all-plugin' tox env to 'all'. I am fixing it for designate. But due to grenade job nature, we need to > reverse the fixes here, first, merge the stable/stein and then stable/train. > > Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. > > -gmann > > > > > Background on this issue: > > -------------------------------- > > > > Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on > > stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with > > stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. > > > > I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. > > ------------------------------------------- > > > > [1] https://review.opendev.org/#/c/705685/ > > [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 > > [3] https://review.opendev.org/#/c/705089 > > [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml > > > > -gmann > > > > > > > - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. > > > > > > * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED > > > - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. > > > > > > [1] https://review.opendev.org/#/c/705089/ > > > [2] https://review.opendev.org/#/c/705870/ > > > > > > -gmann > > > > > > ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- > > > > ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- > > > > > Hello Everyone, > > > > > > > > > > This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement > > > > > of 'EOLing python2 drama' in subject :). > > > > > > > > > > neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. 
neutron-lib 2.0.0 > > > > > is py3 only and so does u-c on the master has been updated to 2.0.0. > > > > > > > > > > All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin > > > > > is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying > > > > > to install the latest neutron-lib and failing. > > > > > > > > > > This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the > > > > > py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop > > > > > from master and kepe testing py2 on stable bracnhes. > > > > > > > > > > We have two way to fix this: > > > > > > > > > > 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. > > > > > For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and > > > > > use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. > > > > > > > > > > 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. > > > > > I am trying this first[2] and testing[3]. > > > > > > > > > > > > > I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. > > > > > > > > Tried option#2: > > > > We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the > > > > bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro > > > > job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 > > > > and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial > > > > or distro-specific job like centos7 etc where we have < py3.6. > > > > > > > > Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our > > > > CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. > > > > Testing your cloud with the latest Tempest is the best possible way. > > > > > > > > Going with option#1: > > > > IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working > > > > for all possible distro/py version. > > > > > > > > 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). > > > > * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. > > > > * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest > > > > on >py3.6 env(venv or separate node). > > > > * Patch is up - https://review.opendev.org/#/c/704840/ > > > > > > > > 2.Modify Tempest tox env basepython to py3 > > > > * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 > > > > like fedora or future distro > > > > *Patch is up- https://review.opendev.org/#/c/704688/2 > > > > > > > > 3. 
Use compatible Tempest & its plugin tag for distro having > > > * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. > > > > * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to > > > > that branch can be used. For example Tempest 19.0.0 for rocky[3]. > > > > * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ > > > > > > > > 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): > > > > We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought > > > > this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky > > > > jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which > > > > moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using > > > > in-tree plugins for their stable branch testing. > > > > We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these > > > > stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins > > > > in neutron-vpnaas case. This will be easy for future maintenance also. > > > > > > > > Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. > > > > > > > > [1] https://review.opendev.org/#/c/703476/ > > > > [2] https://review.opendev.org/#/c/703011/ > > > > [3] https://releases.openstack.org/rocky/#rocky-tempest > > > > > > > > -gmann > > > > > > > > > [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 > > > > > [2] https://review.opendev.org/#/c/703011/ > > > > > [3] https://review.opendev.org/#/c/703012/ > > > > > > > > > > > > > > > -gmanne > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From gmann at ghanshyammann.com Sun Feb 9 00:39:50 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sat, 08 Feb 2020 18:39:50 -0600 Subject: [all] Gate status: Stable/ocata|pike|queens|rocky is broken: Avoid recheck In-Reply-To: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> References: <17015ec15d8.113d69dac313692.3141092730262956226@ghanshyammann.com> Message-ID: <170276364b7.ec3c9d0f427418.6540164577624982390@ghanshyammann.com> All the stable branch gate is up now, you can recheck. Keep reporting the bug on #openstack-qa if you find new one about py2 drop things. -gmann ---- On Wed, 05 Feb 2020 09:15:58 -0600 Ghanshyam Mann wrote ---- > Hello Everyone, > > Stable/ocata, pike, queens, rocky gate is broken now due to Temepst dependency require >=py3.6. I summarized the situation in ML[1]. > > Do not recheck on failed patches of those branches until the job is explicitly disabling Tempest. Fixes are in progress, I will update the status here once fixes are merged. 
> > [1]http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012371.html > > -gmann > From skaplons at redhat.com Sun Feb 9 09:16:04 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Sun, 9 Feb 2020 10:16:04 +0100 Subject: [qa][stable][tempest-plugins]: Tempest & plugins py2 jobs failure for stable branches (1860033: the EOLing python2 drama) In-Reply-To: <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> References: <16fb1aa4aae.10e957b6324515.5822370422740200537@ghanshyammann.com> <16ff38609c1.c06eebfb73294.1918371195388980302@ghanshyammann.com> <17012cca945.b113db24280518.3646733150502017843@ghanshyammann.com> <170170a3018.1280c6f73325272.3271584581395571123@ghanshyammann.com> <1701d080d83.b9502bd2375409.2945175823684568351@ghanshyammann.com> <1702763455f.f46f54ac427417.3931200914462905314@ghanshyammann.com> Message-ID: <23DBE6DD-F6DA-4DEC-9C64-BF98558965BA@redhat.com> Hi, > On 9 Feb 2020, at 01:39, Ghanshyam Mann wrote: > > ---- On Thu, 06 Feb 2020 18:23:51 -0600 Ghanshyam Mann wrote ---- >> ---- On Wed, 05 Feb 2020 14:28:28 -0600 Ghanshyam Mann wrote ---- >>> ---- On Tue, 04 Feb 2020 18:42:47 -0600 Ghanshyam Mann wrote ---- >>>> I am writing at top now for easy ready. >>>> >>>> * Gate status: >>>> - All the stable branch gate till stable/rocky is blocked due to Tempest dependency (oslo today) dropping support for >>> - Stable/stein and stable/train gates are all good as they are on Bionic with py3.6. >>>> - The tempest master gate is also blocked due to stable/rocky jobs with the same reason mentioned above. >>>> - I am working on fixes. Devstack side installation of Tempest is working[1] with stable u-c but "run-tempest" role which recreates >>>> the tempest tox env with master u-c needs to be fixed[2]. >>> >>> While testing the Tempest fix, I realized that it will fix the future Tempest release, not the old tags which are used for >>> stable branch testing. I gave many thoughts on this but cannot find the best way to solve this. Only option seems >>> to cap the upper-constraint like it was proposed by chandan[1]. >>> >>> NOTE: we need to do those cap for all such dependencies of Tempest & its plugins require py>=3.6. We need to maintain >>> such cap only for Tempest & its plugins till stable/rocky EOL. >> >> Updates: >> ====== >> Instead of capping requirement, Tempest role run-tempest fix on master branch work fine because of Zuul pickup the job definition >> and playbooks from master branch. But there is another issue occurring here. grenade job fails and blocks the fixes on stable branches >> to merge because old branch has to be fixed first. So we need to merge the stable/ocata fix first and then stab;e/pike and so on. > > All the fixes are merged now and good to recheck on stable branch jobs. I am working on bug#1862240 now. > > - https://review.opendev.org/#/q/topic:fix-stable-gate+(status:open+OR+status:merged) Thx. I see that neutron-tempest-plugin’s rocky jobs now works fine: https://review.opendev.org/#/c/706451/2 finally \o/ > > -gmann > >> >> New bug: >> ======= >> Another bug on py2 jobs even on stable/train etc. - https://bugs.launchpad.net/tempest/+bug/1862240 >> >> Tempest tox env moved to py3 fail the py2 jobs on stable branch jobs who are still using deprecated tox env 'all-plugin'. >> plugins are installed on py2 in py2 jobs and tempest create tox env with py3 and so does 'all-plugin' which is sitepackage=True >> try to find plugins on py3 and fail. >> The solution is to replace the 'all-plugin' tox env to 'all'. 
I am fixing it for designate. But due to grenade job nature, we need to >> reverse the fixes here, first, merge the stable/stein and then stable/train. >> >> Stay tuned for updates and more bugs. The stable branches gate is still not green, I will update the status on ML about gate staus. >> >> -gmann >> >>> >>> Background on this issue: >>> -------------------------------- >>> >>> Devstack install the Tempest and its plugins in venv using master u-c from tox.ini[2], With Tempest pin on >>> stable branch, devstack can use the stable branch u-c[3] which is all set and Tempest is installed in venv with >>> stable branch u-c. But while running the tests, Tempest roles run-tempest recreate the tox env and using master u-c. >>> >>> I am fixing that in run-tempest roles[4] but that cannot be fixed for Tempest old tags so stable branch testing still broken. >>> ------------------------------------------- >>> >>> [1] https://review.opendev.org/#/c/705685/ >>> [2] https://opendev.org/openstack/tempest/src/commit/bc9fe8eca801f54915ff3eafa418e6e18ac2df63/tox.ini#L14 >>> [3] https://review.opendev.org/#/c/705089 >>> [4] https://review.opendev.org/#/c/705870/22/roles/run-tempest/tasks/main.yaml >>> >>> -gmann >>> >>> >>>> - Once stable/rocky is green, I will push the fixes on stable/queens|pike|ocata. >>>> >>>> * https://bugs.launchpad.net/tempest/+bug/1861308 - FIXED >>>> - Tempest fix for >py3.6 is merged and if your 3rd party CI or a distro with >py3.6 should work fine now. You can re-run such jobs. >>>> >>>> [1] https://review.opendev.org/#/c/705089/ >>>> [2] https://review.opendev.org/#/c/705870/ >>>> >>>> -gmann >>>> >>>> ---- On Wed, 29 Jan 2020 16:57:25 -0600 Ghanshyam Mann wrote ---- >>>>> ---- On Thu, 16 Jan 2020 22:02:05 -0600 Ghanshyam Mann wrote ---- >>>>>> Hello Everyone, >>>>>> >>>>>> This is regarding bug: https://bugs.launchpad.net/tempest/+bug/1860033. Using Radosław's fancy statement >>>>>> of 'EOLing python2 drama' in subject :). >>>>>> >>>>>> neutron tempest plugin job on stable/rocky started failing as neutron-lib dropped the py2. neutron-lib 2.0.0 >>>>>> is py3 only and so does u-c on the master has been updated to 2.0.0. >>>>>> >>>>>> All tempest and its plugin uses the master u-c for stable branch testing which is the valid way because of master Tempest & plugin >>>>>> is being used to test the stable branches which need u-c from master itself. These failed jobs also used master u-c[1] which is trying >>>>>> to install the latest neutron-lib and failing. >>>>>> >>>>>> This is not just neutron tempest plugin issue but for all Tempest plugins jobs. Any lib used by Tempest or plugins can drop the >>>>>> py2 now and leads to this failure. Its just neutron-lib raised the flag first before I plan to hack on Tempest & plugins jobs for py2 drop >>>>>> from master and kepe testing py2 on stable bracnhes. >>>>>> >>>>>> We have two way to fix this: >>>>>> >>>>>> 1. Separate out the testing of python2 jobs with python2 supported version of Tempest plugins and with respective u-c. >>>>>> For example, test all python2 job with tempest plugin train version (or any latest version if any which support py2) and >>>>>> use u-c from stable/train. This will cap the Tempest & plugins with respective u-c for stable branches testing. >>>>>> >>>>>> 2. Second option is to install the tempest and plugins in py3 env on py2 jobs also. This should be an easy and preferred way. >>>>>> I am trying this first[2] and testing[3]. 
>>>>>> >>>>> >>>>> I am summarizing what Tempest and its plugins should be doing/done for these incompatible issues. >>>>> >>>>> Tried option#2: >>>>> We tried to install the py3.6 (from ppa which is not the best solution) in Tempest venv on ubuntu Xenail to fix the >>>>> bug like 1860033 [1]. This needs Tempest to bump the py version for tox env t 3.6[2]. But that broke the distro >>>>> job where py > 3.6 was available like fedora (Bug 1861308). This can be fixed by making basepython as pythion3 >>>>> and more hack for example to set the python alias on such distro. It can be stable common jobs running on Xenial >>>>> or distro-specific job like centos7 etc where we have < py3.6. >>>>> >>>>> Overall this option did not work well as this need lot of hacks depends on the distro. I am dropping this option for our >>>>> CI/CD. But you can try this on your production cloud testing where you do not need to handle multiple distro cases. >>>>> Testing your cloud with the latest Tempest is the best possible way. >>>>> >>>>> Going with option#1: >>>>> IMO, this is a workable option with the current situation. Below is plan to make Tempest and its plugins working >>>>> for all possible distro/py version. >>>>> >>>>> 1. Drop py3.5 from Tempest (also from its plugins if anyone officially supports). >>>>> * Tempest and its plugin's dependencies are becoming python-requires >=3.6 so Tempest and plugins itself cannot support py3.5. >>>>> * 'Tempest cannot support py3.5' means cannot run Tempest/plugins on py3.5 env. But still, you can test py3.5 cloud from Tempest >>>>> on >py3.6 env(venv or separate node). >>>>> * Patch is up - https://review.opendev.org/#/c/704840/ >>>>> >>>>> 2.Modify Tempest tox env basepython to py3 >>>>> * Let's not pin Tempest for py3.6. Any python version >=py3.6 should be working fine for distro does not have py3.6 >>>>> like fedora or future distro >>>>> *Patch is up- https://review.opendev.org/#/c/704688/2 >>>>> >>>>> 3. Use compatible Tempest & its plugin tag for distro having >>>> * Tempest 23.0.0 is the last version to support py2 or py3.5. This tag can be used to test py2 or ppy3.5 jobs. >>>>> * If 23.0.0 is not compatible with stable branch u-c or any tempest plugins tag then Tempest tag corresponding to >>>>> that branch can be used. For example Tempest 19.0.0 for rocky[3]. >>>>> * We have used gerrit style way to pin Tempest in past but we are trying tag name now - https://review.opendev.org/#/c/704899/ >>>>> >>>>> 4. Stable jobs using in-tree tempest plugins (neutron-vpnaas case): >>>>> We have few cases like neutron-vpnaas stable/rocky where in-tree plugin is used for stable testing. amotoki brought >>>>> this yesterday. neutron-vpnaas tempest plugin has been moved to neutron-tempest-plugin now but stable/rocky >>>>> jobs still use in-tree plugin which is causing issues due to incompatible py version on devstack and Tempest tox env(which >>>>> moved to py3). These jobs use tox -e all-plugins for in-tree plugins. This issue is not just neutron-vpnaas but any project still using >>>>> in-tree plugins for their stable branch testing. >>>>> We can solve this by pinning Tempest also + few more hack (which I am sure will be required). But best and easy way to fix these >>>>> stable branch jobs are to migrate them to use tox ''all' env with separate-repo plugins. For example neutron-tempest-plugins >>>>> in neutron-vpnaas case. This will be easy for future maintenance also. 
>>>>> >>>>> Anything stable/stein onwards is all good till now so we will keep using master Tempest/Plugins for their testing. >>>>> >>>>> [1] https://review.opendev.org/#/c/703476/ >>>>> [2] https://review.opendev.org/#/c/703011/ >>>>> [3] https://releases.openstack.org/rocky/#rocky-tempest >>>>> >>>>> -gmann >>>>> >>>>>> [1] https://zuul.opendev.org/t/openstack/build/fb8a928ed3614e09a9a3cf4637f2f6c2/log/job-output.txt#33040 >>>>>> [2] https://review.opendev.org/#/c/703011/ >>>>>> [3] https://review.opendev.org/#/c/703012/ >>>>>> >>>>>> >>>>>> -gmanne — Slawek Kaplonski Senior software engineer Red Hat From smooney at redhat.com Sun Feb 9 16:56:35 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 09 Feb 2020 16:56:35 +0000 Subject: openstack-dpdk installation In-Reply-To: References: Message-ID: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > Hey, > > I am working over dpdk installation with openstack ocata release. > How can I use the neutron-openvswitch-agent-dpdk or anything else instead > of neutron-openvswitch-agent? in ocata you can use the standard neutron openvswitch agent with ovs-dpdk. the only config option you need to set is [ovs]/datapath_type=netdev in the ml2_conf.ini on each of the compute nodes. https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 additionally, when installing ovs you need to configure it for dpdk. most distros now compile dpdk support into ovs, so all you have to do to use it is configure it rather than recompile ovs or install an additional package. there are some upstream docs on how to do that here http://docs.openvswitch.org/en/latest/intro/install/dpdk/ the short version is: you need to allocate hugepages on the host for ovs-dpdk to use and additional hugepages for your vms. then you need to bind the nic you intend to use with dpdk to the vfio-pci driver. next you need to define some config options to tell ovs about that and what cores dpdk should use. sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk-init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ other_config:dpdk-mem-channels=4 other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage-dir=$OVS_HUGEPAGE_MOUNT \ other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist " [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait set Open_vSwitch . other_config:vhost-sock-dir=$OVS_VHOST_USER_SOCKET_DIR finally, when you create the ovs bridges you need to set them to use the netdev datapath. although neutron will also do this, it's a good habit to do it when first creating them. once done you can add the physical nics to the br-ex or your provider bridge as type dpdk. ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic type=dpdk options:dpdk-devargs=$addr with all that said, that is how you install this manually if you are compiling from source. i would recommend looking at your openstack installer of choice to see if it has support, or your distro vendor's documentation, as many installers have native support for ovs-dpdk and will automate it for you. > > Which can I choose the correct packages with os dpdk versions? > > > > I couldn't use the devstack versions!! 
> > Best regards From gmann at ghanshyammann.com Sun Feb 9 17:56:12 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Feb 2020 11:56:12 -0600 Subject: [nova] nova stable pike|queens|rocky is still failing on Tempest u-c: Do not recheck Message-ID: <1702b18377c.1077afa08433529.9050049755587344932@ghanshyammann.com> After fixing the stable/pike|queens|rocky integrated jobs[1], on using the stable u-c for the pinned Tempest run, nova-live-migration fails on these stable branches of nova. I have proposed the fixes[2]; please hold off on rechecks until those are merged. [1] https://review.opendev.org/#/q/topic:fix-stable-gate+status:merged [2] https://review.opendev.org/#/q/topic:fix-stable-gate+status:open+projects:openstack/nova -gmann From mesutaygn at gmail.com Sun Feb 9 18:47:20 2020 From: mesutaygn at gmail.com (=?UTF-8?B?bWVzdXQgYXlnw7xu?=) Date: Sun, 9 Feb 2020 21:47:20 +0300 Subject: openstack-dpdk installation In-Reply-To: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> References: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> Message-ID: Hi Sean I chose not to use a vendor distro. I installed the native ocata release from scratch and configured it myself. I downloaded the dpdk release from source, compiled it, and bound the interface to dpdk through ovs. While installing the neutron-ovs-agent, I couldn't start the ovs db with neutron-ovs. The neutron ovs config files are not working with the compiled ovs. How can I fix that? Thank you for helping On Sun, 9 Feb 2020, 19:56 Sean Mooney, wrote: > On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > > Hey, > > > > I am working over dpdk installation with openstack ocata release. > > How can I use the neutron-openvswitch-agent-dpdk or anything else > instead > > of neutron-openvswitch-agent? > in octa you can use the standared neutorn openvswitch agent with ovs-dpdk > the only config option you need to set is [ovs]/datapath_type=netdev in > the ml2_conf.ini on each of the compute nodes. > > > > https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 > > addtionally when installing ovs you need to configure it for dpdk. > most distros now compile in support for dpdk into ovs so all you have to > do to use it is configure it rahter then > recompile ovs or install an addtional package. > > there is some upstream docs on how to do that here > http://docs.openvswitch.org/en/latest/intro/install/dpdk/ > > the clip note version is you need to allcoated hugepages on the host for > ovs-dpdk to use and addtional hugepages for > your vms. then you need to bind the nic you intend to use with dpdk to the > vfio-pci driver. > next you need to define some config options to test ovs about that and > what cores dpdk should use. > > sudo ovs-vsctl --no-wait set Open_vSwitch . > other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk- > init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ > other_config:dpdk-mem-channels=4 > other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage- > dir=$OVS_HUGEPAGE_MOUNT \ > other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist > " > [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait > set Open_vSwitch . 
other_config:vhost-sock- > dir=$OVS_VHOST_USER_SOCKET_DIR > > finally when you create the ovs bridges you need to set them to ues the > netdev datapath although neutron will also do > this its just a good habit to do it when first creating them once done you > can add the physical nics to the br-ex or > your provider bridge as type dpdk. > > ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic > type=dpdk options:dpdk-devargs=$addr > > with all that said that is how you install this manually if you are > compiling form source. > i would recomend looking at your openstack installer of choice to see if > they have support or your disto vendors > documentation as many installers have native support for ovs-dpdk and will > automate it for you. > > > > > > Which can I choose the correct packages with os dpdk versions? > > > > > > > > I couldn't use the devstack versions!! > > > > Best regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Sun Feb 9 19:24:12 2020 From: smooney at redhat.com (Sean Mooney) Date: Sun, 09 Feb 2020 19:24:12 +0000 Subject: openstack-dpdk installation In-Reply-To: References: <96353e482f3dfb7134a8143657784c8664927ecc.camel@redhat.com> Message-ID: On Sun, 2020-02-09 at 21:47 +0300, mesut aygün wrote: > Hi Sean > > > > I did not prefer the vendor distro. I installed the native ocata release > from scrath which i configured on myself. > > > I downloaded dpdk relase from source and compiled it and also binded the > interface with dpdk through ovs. > > While i was installing the neutron-ovs-agent, i couldnt start ovs db with > neutron-ovs. Neutron ovs config files is not working with compiled ovs. How > can i do that? > i would look at https://opendev.org/x/networking-ovs-dpdk for reference, specifically the devstack plugin. i have not had time to update it properly for a year or two so it still has some legacy hacks, like running ovs-dpdk under screen to mimic how devstack used to run services. i need to rewrite that at some point to use systemd, but that plugin automates the process of compiling and installing ovs-dpdk and the relevant ovsdb configuration and neutron/nova config file updates. if you are having an issue starting the ovs-db itself, it is unrelated to openstack. but if you meant to say you are having an issue starting the ovs neutron agent because it could not connect to the ovs db, then it is likely that neutron is trying to connect over tcp and you have not exposed the ovs db over tcp, as the default is to use a unix socket; or vice versa, the neutron agent can be configured to use ovs-vsctl instead of the python binding and connect via the unix socket, and you may have only exposed it via tcp. i won't really have much time to debug this with you unfortunately, but hopefully networking-ovs-dpdk will help you figure out what you missed. i would focus on ensuring ovs-dpdk is working properly first, before trying to start the neutron agent. > Thank you for helping > > > > > On Sun, 9 Feb 2020, 19:56 Sean Mooney, wrote: > > > On Sun, 2020-02-09 at 01:09 +0300, mesut aygün wrote: > > > Hey, > > > > > > I am working over dpdk installation with openstack ocata release. > > > How can I use the neutron-openvswitch-agent-dpdk or anything else > > > > instead > > > of neutron-openvswitch-agent? > > > > in octa you can use the standared neutorn openvswitch agent with ovs-dpdk > > the only config option you need to set is [ovs]/datapath_type=netdev in > > the ml2_conf.ini on each of the compute nodes. 
> > > > > > > > https://github.com/openstack/neutron/blob/9aa9a097e6c9f765a59eb572a5a816169d83f2cd/neutron/conf/plugins/ml2/drivers/ovs_conf.py#L107-L112 > > > > addtionally when installing ovs you need to configure it for dpdk. > > most distros now compile in support for dpdk into ovs so all you have to > > do to use it is configure it rahter then > > recompile ovs or install an addtional package. > > > > there is some upstream docs on how to do that here > > http://docs.openvswitch.org/en/latest/intro/install/dpdk/ > > > > the clip note version is you need to allcoated hugepages on the host for > > ovs-dpdk to use and addtional hugepages for > > your vms. then you need to bind the nic you intend to use with dpdk to the > > vfio-pci driver. > > next you need to define some config options to test ovs about that and > > what cores dpdk should use. > > > > sudo ovs-vsctl --no-wait set Open_vSwitch . > > other_config:pmd-cpu-mask=$OVS_PMD_CORE_MASK other_config:dpdk- > > init=True other_config:dpdk-lcore-mask=$OVS_CORE_MASK \ > > other_config:dpdk-mem-channels=4 > > other_config:dpdk-socket-mem=$OVS_SOCKET_MEM other_config:dpdk-hugepage- > > dir=$OVS_HUGEPAGE_MOUNT \ > > other_config:dpdk-extra=" --proc-type primary $pciAddressWhitelist > > " > > [ -n "$OVS_VHOST_USER_SOCKET_DIR" ] && sudo ovs-vsctl --no-wait > > set Open_vSwitch . other_config:vhost-sock- > > dir=$OVS_VHOST_USER_SOCKET_DIR > > > > finally when you create the ovs bridges you need to set them to ues the > > netdev datapath although neutron will also do > > this its just a good habit to do it when first creating them once done you > > can add the physical nics to the br-ex or > > your provider bridge as type dpdk. > > > > ovs-vsctl --may-exist add-port $bridge $nic -- set Interface $nic > > type=dpdk options:dpdk-devargs=$addr > > > > with all that said that is how you install this manually if you are > > compiling form source. > > i would recomend looking at your openstack installer of choice to see if > > they have support or your disto vendors > > documentation as many installers have native support for ovs-dpdk and will > > automate it for you. > > > > > > > > Which can I choose the correct packages with os dpdk versions? > > > > > > > > > > > > I couldn't use the devstack versions!! > > > > > > Best regards > > > > From gmann at ghanshyammann.com Sun Feb 9 22:55:47 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 09 Feb 2020 16:55:47 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-14 Update (1 week left to complete) Message-ID: <1702c2a80ef.12943337f436559.6195463107891828702@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-14 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * 1 week left to finish the work. * This week was very dramatic and full of failure as expected or more than that :). ** With that, I am expecting more issues to occur, dropping py2 from every repo asap is suggested so that we can stabilize the things well before m-3. * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. * Oslo dropped the py2, which caused multiple failures on 1. tempest testing on stable branches 2. projects still running py2 jobs on the master. 
** we did not opt to cap the oslo lib for References: <86b5b5b7-8f0c-9bc7-6275-cce1c353cd48@linaro.org> <449b1a03-2066-bea1-0a53-91dc59a3d58c@linaro.org> Message-ID: On Wed, Feb 5, 2020 at 5:36 AM Mark Goddard wrote: > > On Sun, 2 Feb 2020 at 21:06, Neal Gompa wrote: > > > > On Wed, Jan 29, 2020 at 9:37 AM Mark Goddard wrote: > > > > > > On Wed, 29 Jan 2020 at 11:31, Alfredo Moralejo Alonso > > > wrote: > > > > > > > > > > > > > > > > On Tue, Jan 28, 2020 at 5:53 PM Mark Goddard wrote: > > > >> > > > >> On Tue, 28 Jan 2020 at 15:18, Mark Goddard wrote: > > > >> > > > > >> > On Mon, 27 Jan 2020 at 09:18, Radosław Piliszek > > > >> > wrote: > > > >> > > > > > >> > > I know it was for masakari. > > > >> > > Gaëtan had to grab crmsh from opensuse: > > > >> > > http://download.opensuse.org/repositories/network:/ha-clustering:/Stable/CentOS_CentOS-7/ > > > >> > > > > > >> > > -yoctozepto > > > >> > > > > >> > Thanks Wes for getting this discussion going. I've been looking at > > > >> > CentOS 8 today and trying to assess where we are. I created an > > > >> > Etherpad to track status: > > > >> > https://etherpad.openstack.org/p/kolla-centos8 > > > >> > > > > > > > > uwsgi and etcd are now available in rdo dependencies repo. Let me know if you find some issue with it. > > > > > > I found them, thanks. > > > > > > > > > > >> > > > >> We are seeing an odd DNF error sometimes. DNF exits 141 with no error > > > >> code when installing packages. It often happens on the rabbitmq and > > > >> grafana images. There is a prompt about importing GPG keys prior to > > > >> the error. > > > >> > > > >> Example: https://4eff4bb69c321960be39-770d619687de1bce0976465c40e4e9ca.ssl.cf2.rackcdn.com/693544/33/check/kolla-ansible-centos8-source-mariadb/93a8351/primary/logs/build/000_FAILED_kolla-toolbox.log > > > >> > > > >> Related bug report? https://github.com/containers/libpod/issues/4431 > > > >> > > > >> Anyone familiar with it? > > > >> > > > > > > > > Didn't know about this issue. > > > > > > > > BTW, there is rabbitmq-server in RDO dependencies repo if you are interested in using it from there instead of rabbit repo. > > > > > > It seems to be due to the use of a GPG check on the repo (as opposed > > > to packages). DNF doesn't use keys imported via rpm --import for this > > > (I'm not sure what it uses), and prompts to add the key. This breaks > > > without a terminal. More explanation here: > > > https://review.opendev.org/#/c/704782. > > > > > > > librepo has its own keyring for repo signature verification. > > Thanks Neal. Any pointers on how to add keys to it? > There's no direct way other than to make sure that your repo files include the GPG public key in the gpgkey= entry, and that repo_gpgcheck=1 is enabled. DNF will automatically tell librepo to do the right thing here. Ideally, in the future, all of it will use the rpm keyring, but it doesn't for now... -- 真実はいつも一つ!/ Always, there's only one truth! From Tushar.Patil at nttdata.com Mon Feb 10 08:40:39 2020 From: Tushar.Patil at nttdata.com (Patil, Tushar) Date: Mon, 10 Feb 2020 08:40:39 +0000 Subject: [heat][tacker] After Stack is Created, will it change nested stack Id? In-Reply-To: <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> References: , <72fa408d-072a-7f72-91b1-caba3a7d1d6a@redhat.com> Message-ID: Hi Zane, Thank you very much for the detailed explanation. >> Obviously if you scale down the AutoScalingGroup and then scale it back >> up again, you'll end up with the different grandchild stack there. Understood. 
In such case, tacker handles the scaling up/down alarms so it can query parent stack to get nested child stack ids along with it's resources. >>The only other time it gets replaced is if you use the "mark unhealthy" >> command on the template resource in the child stack (i.e. the >> autoscaling group stack), or on the AutoScalingGroup resource itself in >> the parent stack. That's not the case as of now. So I'm not worried about it at the moment. Regards, tpatil ________________________________________ From: Zane Bitter Sent: Friday, February 7, 2020 5:40 AM To: openstack-discuss at lists.openstack.org Subject: Re: [heat][tacker] After Stack is Created, will it change nested stack Id? Hi Tushar, Great question. On 4/02/20 2:53 am, Patil, Tushar wrote: > Hi All, > > In tacker project, we are using heat API to create stack. > Consider a case where we want to add OS::Heat::AutoScalingGroup in which there are two servers and the desired capacity is set to 2. OK, so the scaled unit is a stack containing two servers and two ports. > So internally heat will create two nested stacks and add following resources to it:- > > child stack1 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port > > child stack2 > VDU1 - OS::Nova::Server > CP1 - OS::Neutron::Port > VDU2 - OS::Nova::Server > CP2- OS::Neutron::Port In fact, Heat will create 3 nested stacks - one child stack for the AutoScalingGroup that contains two Template resources, which each have a grandchild stack (the ones you list above). I'm sure you know this, but I mention it because it makes what I'm about to say below clearer. > Now, as part of tacker heal API, we want to heal VDU1 from child stack2. To do this, we will mark the status of the resources from "child stack2" as unhealthy and then update "child stack2" stack. > > Since VDU1 resource is present in the two nested stacks, I want to keep the nested stack id information in tacker so that after the stack is updated, I can pull physical resource id of the resources from the nested child stack directly. That's entirely reasonable. > My question is after the stack is created for the first time, will it ever change the nested child stack id? Short answer: no. Long answer: yes ;) In general normal updates and such will never result in the (grand)child stack ID changing. Even if a resource inside the stack fails (so the stack gets left in UPDATE_FAILED state), the next update will just try to update it again in-place. Obviously if you scale down the AutoScalingGroup and then scale it back up again, you'll end up with the different grandchild stack there. The only other time it gets replaced is if you use the "mark unhealthy" command on the template resource in the child stack (i.e. the autoscaling group stack), or on the AutoScalingGroup resource itself in the parent stack. If you do this a whole new replacement (grand)child stack will get replaced. Marking only the resources within the grandchild stack (e.g. VDU1) will *not* cause the stack to be replaced, so you should be OK. In code: https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/stack_resource.py#L106-L135 Hope that helps. Feel free to ask if you need more clarification. cheers, Zane. Disclaimer: This email and any attachments are sent in strictest confidence for the sole use of the addressee and may contain legally privileged, confidential, and proprietary data. 
If you are not the intended recipient, please advise the sender by replying promptly to this email and then delete and destroy this email and any attachments without any further use, copying or forwarding. From ileixe at gmail.com Mon Feb 10 09:36:08 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Mon, 10 Feb 2020 18:36:08 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? Message-ID: Hi stacker, I'm trying to use ironic routed network from stein. I do not sure how the normal workflow works in nova/neutron/ironic side for routed network, so I sent this mail to get more information. >From what I understand for ironic routed network, what I did - Enable segment plugin in Neutron - Add ironic 'node' - Add ironic port with physical network - network-baremetal plugin reports neutron from all ironic nodes. - It sent 'physical_network' and 'ironic node uuid' to Neutron - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') - Add segment for the subnet. At last step, I encountered the strange. In detail, - Neutron subnet update callback call nova inventory registration - Neutron ask placement to create resource provider for segment - Neutron ask nova to create aggregate for segment - Neutron request placement to associate nova aggregate to resource provider. - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny to register the host to aggregate emitting the exception like below Returning 404 to user: Compute host 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ What's strange for me is why neutron ask 'ironic node uuid' for host when nova aggregate only look for host from HostMapping which came from 'host' in compute_nodes. I could not find the code related to how 'ironic node uuid' can be registered in nova aggregate. Please someone who knows what's going on shed light on me. Thanks. From thierry at openstack.org Mon Feb 10 10:05:17 2020 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 10 Feb 2020 11:05:17 +0100 Subject: [largescale-sig] Next meeting: Feb 12, 9utc Message-ID: Hi everyone, The Large Scale SIG will have a meeting this week on Wednesday, Feb 12 at 9 UTC[1] in #openstack-meeting on IRC: [1] https://www.timeanddate.com/worldclock/fixedtime.html?iso=20200212T09 As always, the agenda for our meeting is available at: https://etherpad.openstack.org/p/large-scale-sig-meeting Feel free to add topics to it. We'll start with a show-and-tell by Stig (oneswig), followed by a status update on the various TODOs we had from last meeting, in particular: - Reviewing oslo.metrics draft at https://review.opendev.org/#/c/704733/ and comment so that we can iterate on it - Reading page on golden signals at https://landing.google.com/sre/sre-book/chapters/monitoring-distributed-systems/#xref_monitoring_golden-signals Talk to you all on Wednesday, -- Thierry Carrez From skaplons at redhat.com Mon Feb 10 11:15:19 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Mon, 10 Feb 2020 12:15:19 +0100 Subject: [neutron] 11.02.2020 team meeting cancelled Message-ID: Hi, I will not be able to chair our tomorrow’s team meeting. So lets cancel it and see You all on Monday 17.02.2020. 
— Slawek Kaplonski Senior software engineer Red Hat From katonalala at gmail.com Mon Feb 10 12:10:12 2020 From: katonalala at gmail.com (Lajos Katona) Date: Mon, 10 Feb 2020 13:10:12 +0100 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Hi, Actually I know only the "ironicless" usecases of routed provider networks. As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a bug with perhaps Ironic usecases: https://review.opendev.org/696600 the bug: https://bugs.launchpad.net/neutron/+bug/1853840 The solution was to introduce a new config option called resource_provider_hypervisors ( https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) Without having experience with ironic based on your description your problem with routed provider nets is similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. Regards Lajos 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): > Hi stacker, > > I'm trying to use ironic routed network from stein. > > I do not sure how the normal workflow works in nova/neutron/ironic > side for routed network, so I sent this mail to get more information. > From what I understand for ironic routed network, what I did > > - Enable segment plugin in Neutron > - Add ironic 'node' > - Add ironic port with physical network > - network-baremetal plugin reports neutron from all ironic nodes. > - It sent 'physical_network' and 'ironic node uuid' to Neutron > - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') > - Add segment for the subnet. > > At last step, I encountered the strange. In detail, > - Neutron subnet update callback call nova inventory registration > - Neutron ask placement to create resource provider for segment > - Neutron ask nova to create aggregate for segment > - Neutron request placement to associate nova aggregate to resource > provider. > - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova > aggregate > > Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny > to register the host to aggregate emitting the exception like below > > Returning 404 to user: Compute host > 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ > > What's strange for me is why neutron ask 'ironic node uuid' for host > when nova aggregate only look for host from HostMapping which came > from 'host' in compute_nodes. > > I could not find the code related to how 'ironic node uuid' can be > registered in nova aggregate. > > > Please someone who knows what's going on shed light on me. > > Thanks. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmail.com Mon Feb 10 15:41:02 2020 From: sean.mcginnis at gmail.com (Sean McGinnis) Date: Mon, 10 Feb 2020 09:41:02 -0600 Subject: [Release-job-failures] release-post job for openstack/releases for ref refs/heads/master failed In-Reply-To: References: Message-ID: <80d27a9d-3a38-99c8-6d51-ceeacba58b55@gmail.com> On 2/10/20 9:08 AM, zuul at openstack.org wrote: > Build failed. 
> > - tag-releases https://zuul.opendev.org/t/openstack/build/8796e6c046a84688b3dd9dc756f4a15d : FAILURE in 6m 13s > - publish-tox-docs-static https://zuul.opendev.org/t/openstack/build/None : SKIPPED > > _______________________________________________ > Release-job-failures mailing list > Release-job-failures at lists.openstack.org > http://lists.openstack.org/cgi-bin/mailman/listinfo/release-job-failures This failure was actually on the removal of a deliverable file for something that is no longer active. This can be safely ignored. Sean From nate.johnston at redhat.com Mon Feb 10 16:23:53 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Mon, 10 Feb 2020 11:23:53 -0500 Subject: [neutron] Bug deputy report Feb 3-10 Message-ID: <20200210162353.fm22k6t5c2byztn5@firewall> Nate bug deputy notes - 2020-02-03 to 2020-02-10 ------------------------------------------------ It was a very quiet week - almost half of these were filed today, this report was looking very short as recently as yesterday. Thanks to amotoki for the next shift as bug deputy. Untriaged: - "Neutron try to register invalid host to nova aggregate for ironic routed network" * URL: https://bugs.launchpad.net/bugs/1862611 * Version: stable/stein * lajoskatona responded to the bug but I was not able to reproduce it; I don't have an ironic setup to verify the bug report. Critical: - "Neutron incorrectly selects subnet" * URL: https://bugs.launchpad.net/bugs/1862374 * Marked Incomplete by haleyb but there is still a significant chance there is a real issue here, it might just be in the SDK as opposed to neutron. Waiting for reported to respond. High: - "Fullstack tests failing due to "hang" neutron-server process" (slaweq) * URL: https://bugs.launchpad.net/bugs/1862178 * Unassigned - "Fullstack tests failing due to problem with connection to the fake placement service" * URL: https://bugs.launchpad.net/bugs/1862177 * Assigned: lajoskatona * Fix proposed: https://review.opendev.org/706500 - "Sometimes VMs can't get IP when spawned concurrently" * URL: https://bugs.launchpad.net/bugs/1862315 * Version: stable/stein * Assigned: obondarev - "[OVN] Reduce the number of tables watched by MetadataProxyHandler" * URL: https://bugs.launchpad.net/bugs/1862648 * Assigned: lucasgomes Medium: - "[OVN] functional test test_virtual_port_delete_parents is unstable" * URL: https://bugs.launchpad.net/bugs/1862618 * Assigned: sapana45 Invalid: - "placement in neutron_lib could not process keystone exceptions" * URL: https://bugs.launchpad.net/bugs/1862565 * Marked as duplicate of https://bugs.launchpad.net/neutron/+bug/1828543 (lajoskatona) RFE: - "Add RBAC for subnet pools" * URL: https://bugs.launchpad.net/bugs/1862032 * Cross-project with the nova team, nova has a patch up for their part. 
Last week's report by tidwellr: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012332.html From sshnaidm at redhat.com Mon Feb 10 16:47:45 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Mon, 10 Feb 2020 18:47:45 +0200 Subject: [ansible-sig][openstack-ansible][tripleo] A new meeting time and place for Openstack Ansible modules discussions Message-ID: Hi, all according to our poll about meeting time[1] the winner is: Tuesday 15.00 - 16.00 UTC (3.00 PM - 4.00 PM UTC) Please be aware that we meet in different IRC channel - #openstack-ansible-sig Thanks for voting and waiting for you tomorrow in #openstack-ansible-sig at 15.00 UTC [1] https://xoyondo.com/dp/ITMGRZSvaZaONcz -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 10 16:52:10 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Feb 2020 10:52:10 -0600 Subject: [oslo] PTG Attendance Message-ID: <5db3510a-eac4-9567-b80b-86a19634537e@nemebean.com> Hi, As discussed in the meeting this morning, we need to decide if we want to request a room at the Vancouver PTG. Since I'm not expecting to be there this time someone else would need to organize it. I realize travel plans are not finalized for most people, but if you expect to be there and would like to have formal Oslo discussions please let me know ASAP. Thanks. -Ben From Albert.Braden at synopsys.com Mon Feb 10 17:01:10 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Mon, 10 Feb 2020 17:01:10 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: References: Message-ID: Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see "Loaded 2 Fernet keys" every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Mon Feb 10 17:07:43 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 10 Feb 2020 18:07:43 +0100 Subject: [kuryr] macvlan driver looking for an owner In-Reply-To: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> References: <20200207162502.c62db919797e3826c3fe3f98@gmail.com> Message-ID: <53fd84f1c90c5deac82340662f9b284257ba798f.camel@redhat.com> On Fri, 2020-02-07 at 16:25 +0100, Roman Dobosz wrote: > Hi, > > Recently we migrated most of the drivers and handler to OpenStackSDK, > instead of neutron-client[1], and have a plan to drop neutron-client > usage. > > One of the drivers - MACVLAN based interfaces for nested containers - > wasn't migrated due to lack of confidence backed up with sufficient > tempest tests. 
Therefore we are looking for a maintainer, who will > take care about both - migration to the openstacksdk (which I can > help with) and to provide appropriate environment and tests. > > In case there is no interest on continuing support for this driver we > will deprecate it and remove from the source tree possibly in this > release. I think I'll give a shot and will try converting it to openstacksdk. We depend on revisions [2] feature and seems like openstacksdk lacks support for it, but maybe I'll be able to solve this. If I'll fail, I guess the driver will be on it's way out. [2] https://docs.openstack.org/api-ref/network/v2/#revisions > [1] https://review.opendev.org/#/q/topic:bp/switch-to-openstacksdk > From mdulko at redhat.com Mon Feb 10 17:12:55 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Mon, 10 Feb 2020 18:12:55 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: References: Message-ID: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> A little bit overdue, but I now added Maysa to the group in Gerrit. Congratulations! On Mon, 2020-02-03 at 13:15 +0100, Luis Tomas Bolivar wrote: > Truly deserved! +2!! > > She has been doing an amazing work both implementing new features as well as chasing down bugs. > > On Mon, Feb 3, 2020 at 12:58 PM wrote: > > Hi, > > > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > > project. > > > > Maysa shown numerous examples of diligent and valuable work in terms of > > code contribution (e.g. in network policy support), project maintenance > > and reviews [1]. > > > > Please express support or objections by replying to this email. > > Assuming that there will be no pushback, I'll proceed with granting > > Maysa core powers by the end of this week. > > > > Thanks, > > Michał > > > > [1] https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > > > > From mdemaced at redhat.com Mon Feb 10 17:22:51 2020 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Mon, 10 Feb 2020 18:22:51 +0100 Subject: Nominating Maysa De Macedo Souza for kuryr-kubernetes core In-Reply-To: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> References: <1be6e0a2cb18fcd473663c0f5ed93b800f046d45.camel@redhat.com> Message-ID: Thank you all. I'll do my best!! Maysa. On Mon, Feb 10, 2020 at 6:17 PM wrote: > A little bit overdue, but I now added Maysa to the group in Gerrit. > Congratulations! > > On Mon, 2020-02-03 at 13:15 +0100, Luis Tomas Bolivar wrote: > > Truly deserved! +2!! > > > > She has been doing an amazing work both implementing new features as > well as chasing down bugs. > > > > On Mon, Feb 3, 2020 at 12:58 PM wrote: > > > Hi, > > > > > > I'd like to nominate Maysa to be core reviewer in Kuryr-Kubernetes > > > project. > > > > > > Maysa shown numerous examples of diligent and valuable work in terms of > > > code contribution (e.g. in network policy support), project maintenance > > > and reviews [1]. > > > > > > Please express support or objections by replying to this email. > > > Assuming that there will be no pushback, I'll proceed with granting > > > Maysa core powers by the end of this week. > > > > > > Thanks, > > > Michał > > > > > > [1] > https://www.stackalytics.com/?module=kuryr-kubernetes&release=ussuri > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From eandersson at blizzard.com Mon Feb 10 17:54:22 2020 From: eandersson at blizzard.com (Erik Olof Gunnar Andersson) Date: Mon, 10 Feb 2020 17:54:22 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: References: , Message-ID: <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> Database Connections are kept open for one hour. Timeouts on haproxy needs to reflect that. Set timeout to at least 60 minutes. Sent from my iPhone On Feb 10, 2020, at 9:04 AM, Albert Braden wrote:  Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see “Loaded 2 Fernet keys” every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Feb 10 19:16:22 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 19:16:22 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance Message-ID: Hi all, networking-calico is the code that integrates Project Calico [1] with Neutron. It has been an OpenStack project for several years, but we, i.e. its developers [2], would like now to remove it from OpenStack governance and instead manage it like the other Project Calico projects under https://github.com/projectcalico/. In case anyone has any concerns about this, please let me know very soon! Please could infra folk confirm that https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project is the correct procedure for winding up the OpenStack side of this transition? Many thanks, Neil [1] https://www.projectcalico.org/ [2] That means mostly me, with some help from my colleagues here at Tigera. We do have some other contributors but I will take care that they are also on board with this change. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Feb 10 19:41:17 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Feb 2020 19:41:17 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200210194116.kmwlqww52lp6pi72@yuggoth.org> On 2020-02-10 19:16:22 +0000 (+0000), Neil Jerram wrote: [...] > Please could infra folk confirm that > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > is the correct procedure for winding up the OpenStack side of this > transition? [...] Yes, that's our recommended procedure for closing down development on a repository in the OpenDev infrastructure. Make sure your README.rst in step 3 includes a reference to the new location for the benefit of folks who arrive at the old copy. 
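As a very rough sketch (the wording and the final URL are entirely up to you, this is not an official template), the retired repository's README.rst could be cut down to a short notice along these lines:

  This project is no longer maintained in OpenDev.

  Development of networking-calico continues alongside the other
  Project Calico repositories under https://github.com/projectcalico/
  so please look there for the current code and for reporting issues.

That keeps the old copy useful as a signpost without implying it is still developed here.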
-- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mihalis68 at gmail.com Mon Feb 10 19:54:40 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Mon, 10 Feb 2020 14:54:40 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: what impact on users of this code do you foresee? What are the reasons to remove it from openstack governance? On Mon, Feb 10, 2020 at 2:24 PM Neil Jerram wrote: > Hi all, > > networking-calico is the code that integrates Project Calico [1] with > Neutron. It has been an OpenStack project for several years, but we, i.e. > its developers [2], would like now to remove it from OpenStack governance > and instead manage it like the other Project Calico projects under > https://github.com/projectcalico/. > > In case anyone has any concerns about this, please let me know very soon! > > Please could infra folk confirm that > https://docs.openstack.org/infra/manual/drivers.html#retiring-a-project > is the correct procedure for winding up the OpenStack side of this > transition? > > Many thanks, > Neil > > [1] https://www.projectcalico.org/ > [2] That means mostly me, with some help from my colleagues here at > Tigera. We do have some other contributors but I will take care that they > are also on board with this change. > -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Mon Feb 10 20:00:41 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 20:00:41 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: On Mon, Feb 10, 2020 at 7:54 PM Chris Morgan wrote: > what impact on users of this code do you foresee? What are the reasons to > remove it from openstack governance? > Thanks for asking. I would say somewhat easier maintenance and feature velocity (when needed), as the development processes will be aligned with those of other Calico/Tigera components. But please be assured that networking-calico and Calico for OpenStack will continue to be maintained and developed. Just with a different set of (still open) processes. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Feb 10 20:10:41 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 10 Feb 2020 20:10:41 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Actually, it looks like it's been officially removed from Neutron since Ocata, when https://review.openstack.org/399320 merged (so nearly 3.5 years already). As such it's just the code hosting which needs to be retired, and there doesn't seem to be any governance follow-up needed. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From neil at tigera.io Mon Feb 10 21:32:17 2020 From: neil at tigera.io (Neil Jerram) Date: Mon, 10 Feb 2020 21:32:17 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> References: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Message-ID: On Mon, Feb 10, 2020 at 8:11 PM Jeremy Stanley wrote: > Actually, it looks like it's been officially removed from Neutron > since Ocata, when https://review.openstack.org/399320 merged (so > nearly 3.5 years already). As such it's just the code hosting which > needs to be retired, and there doesn't seem to be any governance > follow-up needed. > Thanks Jeremy, yes, that's correct, networking-calico has been a 'big tent' project but not 'Neutron stadium' for the last few years. I'll take care to leave a good pointer in the README.rst, as you advised. Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 10 22:52:11 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 10 Feb 2020 16:52:11 -0600 Subject: [oslo] Meeting Time Poll Message-ID: <7be71b4b-7c5f-3d99-dcff-65cb629b77a7@nemebean.com> Hello again, We have a few regular attendees of the Oslo meeting who have conflicts with the current meeting time. As a result, we would like to find a new time to hold the meeting. I've created a Doodle poll[0] for everyone to give their input on times. It's mostly limited to times that reasonably overlap the working day in the US and Europe since that's where most of our attendees are located (yes, I know, that's a self-fulfilling prophecy). If you attend the Oslo meeting, please fill out the poll so we can hopefully find a time that works better for everyone. Thanks! -Ben /me finally checks this one off the action items for next week :-) 0: https://doodle.com/poll/zmyhrhewtes6x9ty From amy at demarco.com Mon Feb 10 23:03:17 2020 From: amy at demarco.com (Amy Marrich) Date: Mon, 10 Feb 2020 17:03:17 -0600 Subject: UC Elections Reminder Message-ID: Hey All! Just a reminder that the nomination period for the upcoming User Committee elections is open. See the original email below! Thanks, Amy (spotz) The nomination period for the February User Committee elections is now open. Any individual member of the Foundation who is an Active User Contributor (AUC) can propose their candidacy (except the two sitting UC members elected in the previous election). Self-nomination is common; no third party nomination is required. Nominations can be made by sending an email to the user-committee at lists.openstack.org mailing-list[0], with the subject: “UC candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. The email can include a description of the candidate platform. The candidacy is then confirmed by one of the election officials, after verification of the electorate status of the candidate. Criteria for AUC status can be found at https://superuser.openstack.org/articles/auc-community/. If you are still not sure of your status and would like to verify in advance please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) as we are serving as the Election Officials. Thanks, Amy Marrich (spotz) 0 - Please make sure you are subscribed to this list before sending in your nomination. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gfidente at redhat.com Tue Feb 11 00:22:18 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Tue, 11 Feb 2020 01:22:18 +0100 Subject: [tripleo] missing centos-8 rpms for kolla builds In-Reply-To: References: Message-ID: On 1/24/20 7:20 PM, Wesley Hayutin wrote: > Greetings, > > I know the ceph repo is in progress. the ceph [1] package and storage8-centos-nautilus-* repo [2] are now done not installable yet because we can't build the centos-release-ceph-nautilus package for el8 yet, but we're trying to solve that in centos-devel [3] 1. https://cbs.centos.org/koji/buildinfo?buildID=28564 2. https://cbs.centos.org/koji/builds?tagID=1891 3. https://lists.centos.org/pipermail/centos-devel/2020-February/036544.html -- Giulio Fidente GPG KEY: 08D733BA From ileixe at gmail.com Tue Feb 11 02:13:12 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Tue, 11 Feb 2020 11:13:12 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: The spec you sent is similar to my problem in that agent sent the 'host' which does not compatible with nova 'host'. But one more things which confuse me is for routed network (ironicless / + ironic) where the resource providers "IPV4_ADDRESS" used in nova side? I could not find any code in nova/placement and from the past conversation (https://review.opendev.org/#/c/656885/), now I suspect it does not implemented. How then nova choose right compute node in a segment? Am i missing something or 'resource_provider_hypervisor' you mentioned are now used for general routed network? Thanks 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: > > Hi, > > Actually I know only the "ironicless" usecases of routed provider networks. > As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). > For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a > bug with perhaps Ironic usecases: > https://review.opendev.org/696600 > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 > > The solution was to introduce a new config option called resource_provider_hypervisors > (https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) > Without having experience with ironic based on your description your problem with routed provider nets is > similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. > > Regards > Lajos > > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): >> >> Hi stacker, >> >> I'm trying to use ironic routed network from stein. >> >> I do not sure how the normal workflow works in nova/neutron/ironic >> side for routed network, so I sent this mail to get more information. >> From what I understand for ironic routed network, what I did >> >> - Enable segment plugin in Neutron >> - Add ironic 'node' >> - Add ironic port with physical network >> - network-baremetal plugin reports neutron from all ironic nodes. >> - It sent 'physical_network' and 'ironic node uuid' to Neutron >> - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') >> - Add segment for the subnet. >> >> At last step, I encountered the strange. 
In detail, >> - Neutron subnet update callback call nova inventory registration >> - Neutron ask placement to create resource provider for segment >> - Neutron ask nova to create aggregate for segment >> - Neutron request placement to associate nova aggregate to resource provider. >> - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate >> >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny >> to register the host to aggregate emitting the exception like below >> >> Returning 404 to user: Compute host >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ >> >> What's strange for me is why neutron ask 'ironic node uuid' for host >> when nova aggregate only look for host from HostMapping which came >> from 'host' in compute_nodes. >> >> I could not find the code related to how 'ironic node uuid' can be >> registered in nova aggregate. >> >> >> Please someone who knows what's going on shed light on me. >> >> Thanks. >> From whayutin at redhat.com Tue Feb 11 05:24:25 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 Feb 2020 22:24:25 -0700 Subject: [tripleo] CI is RED py27 related Message-ID: Greetings, Most of the jobs went RED a few minutes ago. Again it's related to python27. Nothing is going to pass CI until this is fixed. See: https://etherpad.openstack.org/p/ruckroversprint21 We'll update the list when we have the required patches in. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Mon Feb 10 21:24:11 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Mon, 10 Feb 2020 14:24:11 -0700 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> References: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Message-ID: Top post... OK.. so I'm going to propose we experiment with the following and if we don't like it, we can change it. I'm breaking the squads into two groups, active squads and moderately active squads. *My expectation is that the active squads are posting at the very least reviews that need attention.* #topic Active Squad status ci #link https://hackmd.io/IhMCTNMBSF6xtqiEd9Z0Kw?both validations #link https://etherpad.openstack.org/p/tripleo-validations-squad-status ceph-integration #link https://etherpad.openstack.org/p/tripleo-integration-squad-status transformation #link https://etherpad.openstack.org/p/tripleo-ansible-agenda mistral-to-ansible #link https://etherpad.openstack.org/p/tripleo-mistral-to-ansible I've added ironic integration to moderately active as it's a new request. I have no expectations for the moderately active bunch :) #topic Moderately Active Squads Ironic-integration https://etherpad.openstack.org/p/tripleo-ironic-integration-squad-status upgrade #link https://etherpad.openstack.org/p/tripleo-upgrade-squad-status edge #link https://etherpad.openstack.org/p/tripleo-edge-squad-status networking #link https://etherpad.openstack.org/p/tripleo-networking-squad-status Let's see how this plays out.. Thanks all!! On Wed, Feb 5, 2020 at 9:58 AM wrote: > Is there a need for Ironic integration one? > > > > *From:* Alan Bishop > *Sent:* Tuesday, February 4, 2020 2:20 PM > *To:* Francesco Pantano > *Cc:* John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks > *Subject:* Re: [tripleo] rework of triple squads and the tripleo mtg. 
> > > > [EXTERNAL EMAIL] > > > > On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: > > > > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > > > Gulio? Francesco? Alan? > > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > > > > "Ceph integration" makes the most sense to me, but I'm fine with just > naming it "Ceph" as we all know what that means. > > > > Alan > > > > > > > John > > > 6. others?? > > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! > > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > > -- > > Francesco Pantano > GPG KEY: F41BD75C > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Tue Feb 11 08:32:11 2020 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 11 Feb 2020 09:32:11 +0100 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Hi, For routed prov. nets I am not sure how it was originally designed, and what worked at that time, but I have some insight to min guaranteed bandwidth feature. For that neutron (based on agent config) creates placement stuff like resource providers and inventories (there the available bandwidth is the thing saved to placement RP inventories). 
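(To make that a bit more concrete, with made-up numbers: reading such a provider back from the placement API, e.g. GET /resource_providers/{uuid}/inventories, returns roughly

  {
      "resource_provider_generation": 1,
      "inventories": {
          "NET_BW_EGR_KILOBIT_PER_SEC": {
              "total": 1000000,
              "reserved": 0,
              "min_unit": 1,
              "max_unit": 1000000,
              "step_size": 1,
              "allocation_ratio": 1.0
          }
      }
  }

in the bandwidth case; for routed provider networks the segment's resource provider carries an IPV4_ADDRESS inventory instead.)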
When a port is created with QoS minimum bandwidth rule the port will have an extra field (see: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=show-port-details-detail#show-port-details resource_request field) When nova fetch the port for booting a VM read this field and when asking placement for hosts which are available, in the request this information will be included and placement will do the search with this extra resource need. So this is how it should work, but as I know now routed provider nets doesn't have this in place, so nova can't do the scheduling based on ipv4_address needs. The info is there in placement, but during boot the neutron-nova pair doesn't know how to use it. Sorry for the kitchen language :-) Regards Lajos 양유석 ezt írta (időpont: 2020. febr. 11., K, 3:13): > The spec you sent is similar to my problem in that agent sent the > 'host' which does not compatible with nova 'host'. > > But one more things which confuse me is for routed network (ironicless > / + ironic) where the resource providers "IPV4_ADDRESS" used in nova > side? > I could not find any code in nova/placement and from the past > conversation (https://review.opendev.org/#/c/656885/), now I suspect > it does not implemented. > > How then nova choose right compute node in a segment? Am i missing > something or 'resource_provider_hypervisor' you mentioned are now used > for general routed network? > > Thanks > > 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: > > > > Hi, > > > > Actually I know only the "ironicless" usecases of routed provider > networks. > > As I know neutron has the hostname from the agent (even from config or > if not defined from socket.gethostname()). > > For a similar feature (which uses placement guaranteed minimum > bandwidth) there was a change that come from a > > bug with perhaps Ironic usecases: > > https://review.opendev.org/696600 > > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 > > > > The solution was to introduce a new config option called > resource_provider_hypervisors > > ( > https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py > ) > > Without having experience with ironic based on your description your > problem with routed provider nets is > > similar and the config option should be used to make segments plugin use > that for resource provider / host aggregate creation. > > > > Regards > > Lajos > > > > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): > >> > >> Hi stacker, > >> > >> I'm trying to use ironic routed network from stein. > >> > >> I do not sure how the normal workflow works in nova/neutron/ironic > >> side for routed network, so I sent this mail to get more information. > >> From what I understand for ironic routed network, what I did > >> > >> - Enable segment plugin in Neutron > >> - Add ironic 'node' > >> - Add ironic port with physical network > >> - network-baremetal plugin reports neutron from all ironic nodes. > >> - It sent 'physical_network' and 'ironic node uuid' to Neutron > >> - It make 'segmenthostmapping' entry with ('node uuid', > 'segment_id') > >> - Add segment for the subnet. > >> > >> At last step, I encountered the strange. In detail, > >> - Neutron subnet update callback call nova inventory registration > >> - Neutron ask placement to create resource provider for segment > >> - Neutron ask nova to create aggregate for segment > >> - Neutron request placement to associate nova aggregate to resource > provider. > >> - (Bug?) 
Neutron add hosts came from 'segmenthostmapping' to nova > aggregate > >> > >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny > >> to register the host to aggregate emitting the exception like below > >> > >> Returning 404 to user: Compute host > >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ > >> > >> What's strange for me is why neutron ask 'ironic node uuid' for host > >> when nova aggregate only look for host from HostMapping which came > >> from 'host' in compute_nodes. > >> > >> I could not find the code related to how 'ironic node uuid' can be > >> registered in nova aggregate. > >> > >> > >> Please someone who knows what's going on shed light on me. > >> > >> Thanks. > >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From artem.goncharov at gmail.com Tue Feb 11 08:41:44 2020 From: artem.goncharov at gmail.com (Artem Goncharov) Date: Tue, 11 Feb 2020 09:41:44 +0100 Subject: [ospurge] looking for project owners / considering adoption In-Reply-To: <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> References: <342983ed-1d22-8f3a-3335-f153512ec2b2@catalyst.net.nz> <576E74EB-ED80-497F-9706-482FE0433208@gmail.com> <2ca832bb-4b71-b775-160a-e1868dcb21d2@citynetwork.eu> <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> Message-ID: <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> Hi, I am thinking to submit (not found possibility so far) a forum-like session for that again in Vancouver, where I would present current status of implementation and we can plan further steps. Unfortunately I have still no confirmation from my employer, that I will be allowed to go. Any ideas/objections? Regards, Artem > On 3. Nov 2019, at 02:34, Tobias Rydberg wrote: > > Hi, > > Sounds really good Artem! Will you be at the session at the Summit? If not, I will bring the information from you to the session... > > Cheers, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > On 2019-11-02 16:26, Artem Goncharov wrote: >> Hi Tobby, >> >> As I mentioned, if Monty does not start work, I will start it in few weeks latest (mid November). I need this now in my project, therefore I will be definitely able to spend time on implementation in both SDK and OSC. >> >> P.S. mailing this to you, since I will not be on the Summit. >> >> Regards, >> Artem >> >>> On 2. Nov 2019, at 09:19, Tobias Rydberg > wrote: >>> >>> Hi, >>> >>> A Forum session is planned for this topic, Monday 11:40. Suites perfect to continue the discussions there as well. >>> >>> https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24407/project-resource-cleanup-followup >>> BR, >>> Tobias >>> >>> Tobias Rydberg >>> Senior Developer >>> Twitter & IRC: tobberydberg >>> >>> www.citynetwork.eu | www.citycloud.com >>> >>> INNOVATION THROUGH OPEN IT INFRASTRUCTURE >>> ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED >>> On 2019-10-30 15:43, Artem Goncharov wrote: >>>> Hi Adam, >>>> >>>> Since I need this now as well I will start working on implementation how it was agreed (in SDK and in OSC) during last summit by mid of November. There is no need for discussing this further, it just need to be implemented. Sad that we got no progress in half a year. >>>> >>>> Regards, >>>> Artem (gtema). >>>> >>>>> On 30. 
Oct 2019, at 14:26, Adam Harwell > wrote: >>>>> >>>>> That's too bad that you won't be at the summit, but I think there may still be some discussion planned about this topic. >>>>> >>>>> Yeah, I understand completely about priorities and such internally. Same for me... It just happens that this IS priority work for us right now. :) >>>>> >>>>> >>>>> On Tue, Oct 29, 2019, 07:48 Adrian Turjak > wrote: >>>>> My apologies I missed this email. >>>>> >>>>> Sadly I won't be at the summit this time around. There may be some public cloud focused discussions, and some of those often have this topic come up. Also if Monty from the SDK team is around, I'd suggest finding him and having a chat. >>>>> >>>>> I'll help if I can but we are swamped with internal work and I can't dedicate much time to do upstream work that isn't urgent. :( >>>>> >>>>> On 17/10/19 8:48 am, Adam Harwell wrote: >>>>>> That's interesting -- we have already started working to add features and improve ospurge, and it seems like a plenty useful tool for our needs, but I think I agree that it would be nice to have that functionality built into the sdk. I might be able to help with both, since one is immediately useful and we (like everyone) have deadlines to meet, and the other makes sense to me as a possible future direction that could be more widely supported. >>>>>> >>>>>> Will you or someone else be hosting and discussion about this at the Shanghai summit? I'll be there and would be happy to join and discuss. >>>>>> >>>>>> --Adam >>>>>> >>>>>> On Tue, Oct 15, 2019, 22:04 Adrian Turjak > wrote: >>>>>> I tried to get a community goal to do project deletion per project, but >>>>>> we ended up deciding that a community goal wasn't ideal unless we did >>>>>> build a bulk delete API in each service: >>>>>> https://review.opendev.org/#/c/639010/ >>>>>> https://etherpad.openstack.org/p/community-goal-project-deletion >>>>>> https://etherpad.openstack.org/p/DEN-Deletion-of-resources >>>>>> https://etherpad.openstack.org/p/DEN-Train-PublicCloudWG-brainstorming >>>>>> >>>>>> What we decided on, but didn't get a chance to work on, was building >>>>>> into the OpenstackSDK OS-purge like functionality, as well as reporting >>>>>> functionality (of all project resources to be deleted). That way we >>>>>> could have per project per resource deletion logic, and all of that >>>>>> defined in the SDK. >>>>>> >>>>>> I was up for doing some of the work, but ended up swamped with internal >>>>>> work and just didn't drive or push for the deletion work upstream. >>>>>> >>>>>> If you want to do something useful, don't pursue OS-Purge, help us add >>>>>> that official functionality to the SDK, and then we can push for bulk >>>>>> deletion APIs in each project to make resource deletion more pleasant. >>>>>> >>>>>> I'd be happy to help with the work, and Monty on the SDK team will most >>>>>> likely be happy to as well. :) >>>>>> >>>>>> Cheers, >>>>>> Adrian >>>>>> >>>>>> On 1/10/19 11:48 am, Adam Harwell wrote: >>>>>> > I haven't seen much activity on this project in a while, and it's been >>>>>> > moved to opendev/x since the opendev migration... Who is the current >>>>>> > owner of this project? Is there anyone who actually is maintaining it, >>>>>> > or would mind if others wanted to adopt the project to move it forward? >>>>>> > >>>>>> > Thanks, >>>>>> > --Adam Harwell >>>> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From neil at tigera.io Tue Feb 11 09:47:42 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 09:47:42 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: <20200210201040.3psgsqnd6wretqh5@yuggoth.org> Message-ID: On Mon, Feb 10, 2020 at 9:32 PM Neil Jerram wrote: > On Mon, Feb 10, 2020 at 8:11 PM Jeremy Stanley wrote: > >> Actually, it looks like it's been officially removed from Neutron >> since Ocata, when https://review.openstack.org/399320 merged (so >> nearly 3.5 years already). As such it's just the code hosting which >> needs to be retired, and there doesn't seem to be any governance >> follow-up needed. >> > > Thanks Jeremy, yes, that's correct, networking-calico has been a 'big > tent' project but not 'Neutron stadium' for the last few years. > > I'll take care to leave a good pointer in the README.rst, as you advised. > FYI the project-config change is up at https://review.opendev.org/#/c/707086/. (There isn't any mention of "networking-calico" in the requirements repo, so that step was a no-op.) Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Tue Feb 11 09:56:41 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Tue, 11 Feb 2020 10:56:41 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <97420918.593071.1581052477180@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> Message-ID: <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> Hi, So from this run you need the kuryr-controller logs. Apparently the pod never got annotated with an information about the VIF. Thanks, Michał On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks for your support. > > As you mention i removed readinessProbe and > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > Attached kubelet and kuryr-cni logs. > > > > Regards, > Veera. > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > Hm, nothing too troubling there too, besides Kubernetes not answering > on /healthz endpoint. Are those full logs, including the moment you > tried spawning a container there? It seems like you only pasted the > fragments with tracebacks regarding failures to read /healthz endpoint > of kube-apiserver. That is another problem you should investigate - > that causes Kuryr pods to restart. > > At first I'd disable the healthchecks (remove readinessProbe and > livenessProbe from Kuryr pod definitions) and try to get fresh set of > logs. > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > Hi mdulko, > > Please find kuryr-cni logs > > http://paste.openstack.org/show/789209/ > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > Hi, > > > > The logs you provided doesn't seem to indicate any issues. Please > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > Thanks, > > Michał > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > Hi, > > > I am trying to run kubelet in arm64 platform > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > 2. Generated kuryr-cni-arm64 container. > > > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > My master node in x86 installed successfully using devstack > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > Please help me to fix the issue > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > Veera. > > > > > > From dtantsur at redhat.com Tue Feb 11 10:13:59 2020 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Tue, 11 Feb 2020 11:13:59 +0100 Subject: [tripleo] rework of triple squads and the tripleo mtg. In-Reply-To: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> References: <5eced1107d4641fd9605c7b3bae9789e@AUSX13MPS308.AMER.DELL.COM> Message-ID: Late to the party, but still: what's the proposed goal of the ironic integration squad? On Wed, Feb 5, 2020 at 6:00 PM wrote: > Is there a need for Ironic integration one? > > > > *From:* Alan Bishop > *Sent:* Tuesday, February 4, 2020 2:20 PM > *To:* Francesco Pantano > *Cc:* John Fulton; Wesley Hayutin; OpenStack Discuss; Phil Weeks > *Subject:* Re: [tripleo] rework of triple squads and the tripleo mtg. > > > > [EXTERNAL EMAIL] > > > > On Tue, Feb 4, 2020 at 11:22 AM Francesco Pantano > wrote: > > > > > > On Tue, Feb 4, 2020 at 5:45 PM John Fulton wrote: > > On Tue, Feb 4, 2020 at 11:13 AM Wesley Hayutin > wrote: > > > > Greetings, > > > > As mentioned at the previous tripleo meeting [1], we're going to revisit > the current tripleo squads and the expectations for those squads at the > tripleo meeting. > > > > Currently we have the following squads.. > > 1. upgrades > > 2. edge > > 3. integration > > 4. validations > > 5. networking > > 6. transformation > > 7. ci > > > > A reasonable update could include the following.. > > > > 1. validations > > 2. transformation > > 3. mistral-to-ansible > > 4. CI > > 5. Ceph / Integration?? maybe just Ceph? > > I'm fine with "Ceph". The original intent of going from "Ceph > Integration" to the more generic "Integration" was that it could > include anyone using external-deploy-steps to deploy non-openstack > projects with TripleO (k8s, skydive, etc). Though those things > happened, we didn't really get anyone else to join the squad or update > our etherpad so I'm fine with renaming it to Ceph. We're still active > but our etherpad was getting old. I updated it just now. > > > Gulio? Francesco? Alan? > > > Agree here and I also updated the etherpad as well [1] w/ > our current status and the open topics we still have on > ceph side. > Not sure if we want to use "Integration" since the topics > couldn't be only ceph related but can involve other storage > components. > > Giulio, Alan, wdyt? > > > > "Ceph integration" makes the most sense to me, but I'm fine with just > naming it "Ceph" as we all know what that means. > > > > Alan > > > > > > > John > > > 6. others?? > > > > The squads should reflect major current efforts by the TripleO team IMHO. > > > > For the meetings, I would propose we use this time and space to give > context to current reviews in progress and solicit feedback. It's also a > good time and space to discuss any upstream blockers for those reviews. > > > > Let's give this one week for comments etc.. Next week we'll update the > etherpad list and squads. The etherpad list will be a decent way to > communicate which reviews need attention. > > > > Thanks all!!! 
> > > > [1] > http://eavesdrop.openstack.org/meetings/tripleo/2020/tripleo.2020-01-07-14.00.log.html > > > > > > [1] https://etherpad.openstack.org/p/tripleo-integration-squad-status > > > > -- > > Francesco Pantano > GPG KEY: F41BD75C > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ileixe at gmail.com Tue Feb 11 12:05:23 2020 From: ileixe at gmail.com (=?UTF-8?B?7JaR7Jyg7ISd?=) Date: Tue, 11 Feb 2020 21:05:23 +0900 Subject: [neutron][ironic][nova][routed_network] what's normal workflow when ironic host added to nova from neutron? In-Reply-To: References: Message-ID: Thanks, from my understanding about QoS feature it looks like using pre-created port which enable nova scheduling. Now, I do not avoid thinking routed network does not implemented yet since nova does not know the necessary information. Comment from a guy in nova irc channel make my think more stronger. It was so confused since neutron doc says it works well though.. 2020년 2월 11일 (화) 오후 5:32, Lajos Katona 님이 작성: > > Hi, > > For routed prov. nets I am not sure how it was originally designed, and what worked at that time, > but I have some insight to min guaranteed bandwidth feature. > > For that neutron (based on agent config) creates placement stuff like resource providers and inventories > (there the available bandwidth is the thing saved to placement RP inventories). > When a port is created with QoS minimum bandwidth rule the port will have an extra field > (see: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=show-port-details-detail#show-port-details resource_request field) > When nova fetch the port for booting a VM read this field and when asking placement for hosts which are > available, in the request this information will be included and placement will do the search with this extra resource need. > So this is how it should work, but as I know now routed provider nets doesn't have this in place, so nova can't do the scheduling > based on ipv4_address needs. > The info is there in placement, but during boot the neutron-nova pair doesn't know how to use it. > > Sorry for the kitchen language :-) > > Regards > Lajos > > 양유석 ezt írta (időpont: 2020. febr. 11., K, 3:13): >> >> The spec you sent is similar to my problem in that agent sent the >> 'host' which does not compatible with nova 'host'. >> >> But one more things which confuse me is for routed network (ironicless >> / + ironic) where the resource providers "IPV4_ADDRESS" used in nova >> side? >> I could not find any code in nova/placement and from the past >> conversation (https://review.opendev.org/#/c/656885/), now I suspect >> it does not implemented. >> >> How then nova choose right compute node in a segment? Am i missing >> something or 'resource_provider_hypervisor' you mentioned are now used >> for general routed network? >> >> Thanks >> >> 2020년 2월 10일 (월) 오후 9:10, Lajos Katona 님이 작성: >> > >> > Hi, >> > >> > Actually I know only the "ironicless" usecases of routed provider networks. >> > As I know neutron has the hostname from the agent (even from config or if not defined from socket.gethostname()). 
>> > For a similar feature (which uses placement guaranteed minimum bandwidth) there was a change that come from a >> > bug with perhaps Ironic usecases: >> > https://review.opendev.org/696600 >> > the bug: https://bugs.launchpad.net/neutron/+bug/1853840 >> > >> > The solution was to introduce a new config option called resource_provider_hypervisors >> > (https://review.opendev.org/#/c/696600/3/neutron/conf/plugins/ml2/drivers/mech_sriov/agent_common.py ) >> > Without having experience with ironic based on your description your problem with routed provider nets is >> > similar and the config option should be used to make segments plugin use that for resource provider / host aggregate creation. >> > >> > Regards >> > Lajos >> > >> > 양유석 ezt írta (időpont: 2020. febr. 10., H, 10:42): >> >> >> >> Hi stacker, >> >> >> >> I'm trying to use ironic routed network from stein. >> >> >> >> I do not sure how the normal workflow works in nova/neutron/ironic >> >> side for routed network, so I sent this mail to get more information. >> >> From what I understand for ironic routed network, what I did >> >> >> >> - Enable segment plugin in Neutron >> >> - Add ironic 'node' >> >> - Add ironic port with physical network >> >> - network-baremetal plugin reports neutron from all ironic nodes. >> >> - It sent 'physical_network' and 'ironic node uuid' to Neutron >> >> - It make 'segmenthostmapping' entry with ('node uuid', 'segment_id') >> >> - Add segment for the subnet. >> >> >> >> At last step, I encountered the strange. In detail, >> >> - Neutron subnet update callback call nova inventory registration >> >> - Neutron ask placement to create resource provider for segment >> >> - Neutron ask nova to create aggregate for segment >> >> - Neutron request placement to associate nova aggregate to resource provider. >> >> - (Bug?) Neutron add hosts came from 'segmenthostmapping' to nova aggregate >> >> >> >> Since 'segmenthostmapping' has 'ironic node uuid' for host, nova deny >> >> to register the host to aggregate emitting the exception like below >> >> >> >> Returning 404 to user: Compute host >> >> 27004f76-2606-4e4a-980e-a385a01f04de could not be found. __call__ >> >> >> >> What's strange for me is why neutron ask 'ironic node uuid' for host >> >> when nova aggregate only look for host from HostMapping which came >> >> from 'host' in compute_nodes. >> >> >> >> I could not find the code related to how 'ironic node uuid' can be >> >> registered in nova aggregate. >> >> >> >> >> >> Please someone who knows what's going on shed light on me. >> >> >> >> Thanks. >> >> From trang.le at berkeley.edu Tue Feb 11 10:24:36 2020 From: trang.le at berkeley.edu (Trang Le) Date: Tue, 11 Feb 2020 02:24:36 -0800 Subject: [UX] Contributing to OpenStack's User Interface Message-ID: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. 
All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdopiera at redhat.com Tue Feb 11 14:51:10 2020 From: rdopiera at redhat.com (Radek Dopieralski) Date: Tue, 11 Feb 2020 15:51:10 +0100 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Hello Trang Le, I assume that by OpenStack User Interface you mean the Horizon Dashboard. In that case, this documentation should get you started: https://docs.openstack.org/horizon/latest/contributor/index.html Regards, Radomir Dopieralski On Tue, Feb 11, 2020 at 3:04 PM Trang Le wrote: > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Tue Feb 11 16:29:19 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Tue, 11 Feb 2020 10:29:19 -0600 Subject: [TC] W release naming Message-ID: Hello TC (and other interested parties), This is a reminder that we have this week set aside for the 'W' release naming for any necessary discussion, campaigning, or other activities before the official polling starts. https://governance.openstack.org/tc/reference/release-naming.html#polls We have a set of names collected from the community. There may be some trademark concerns, but I think we can leave that for the Foundation review after the election polling completes unless anyone has a strong reason to exclude any now. If so, please state so here before I create the poll for next week. https://wiki.openstack.org/wiki/Release_Naming/W_Proposals As a reminder for everyone, this naming poll is the first that follows our new process of having the electorate being members of the Technical Committee. More details can be found in the governance documentation for release naming: https://governance.openstack.org/tc/reference/release-naming.html#release-naming-process I will prepare the poll to send out next Monday. Since we have a limited electorate this time, if we collect all votes ahead of the published deadline I will check in with the TC if there is any need to wait, and if not close the poll early and get the results ready to publish. We can then move ahead with the legal review that is required before we can officially declare a winner. Thanks! 
Sean From Albert.Braden at synopsys.com Tue Feb 11 16:30:45 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 11 Feb 2020 16:30:45 +0000 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Hi Trang, This document has been useful for me as I work on becoming an Openstack contributor. https://docs.openstack.org/infra/manual/developers.html From: Trang Le Sent: Tuesday, February 11, 2020 2:25 AM To: openstack-discuss at lists.openstack.org Subject: [UX] Contributing to OpenStack's User Interface Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 11 16:39:50 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 11 Feb 2020 17:39:50 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: Sean McGinnis wrote: > [...] > We have a set of names collected from the community. There may be some > trademark concerns, but I think we can leave that for the Foundation > review after the election polling completes unless anyone has a strong > reason to exclude any now. If so, please state so here before I create > the poll for next week. > > https://wiki.openstack.org/wiki/Release_Naming/W_Proposals > [...] That's a lot of good names, and before voting I'd like to get wide feedback from the community... So if there is any name you strongly like or dislike, please follow up here ! The TC is also supposed to discuss potential cultural sensibility and try to avoid those names, so if you see anything that could be considered culturally offensive for some human groups, let me know or reply on this thread. Personally I could see how 'Wodewick' could be perceived as a joke on speech-impaired people, and 'Whiskey'/'Whisky' could be seen as promoting the alcohol-drinking culture in open source events. Also 'Wuhan' is likely to not be neutral -- either seen as a positive supportive move for our friends in China struggling with the virus, or as a bit of a weird choice, but I'm not sure which. In summary: please voice concerns and preferences here, before the vote starts! -- Thierry Carrez (ttx) From whayutin at redhat.com Tue Feb 11 17:07:14 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Tue, 11 Feb 2020 10:07:14 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: Message-ID: On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin wrote: > Greetings, > > Most of the jobs went RED a few minutes ago. > Again it's related to python27. Nothing is going to pass CI until this is > fixed. > > See: https://etherpad.openstack.org/p/ruckroversprint21 > > We'll update the list when we have the required patches in. > Thanks > Patches are up.. 
- https://review.opendev.org/707204 - https://review.opendev.org/707054 - https://review.opendev.org/#/c/707062/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Feb 11 17:16:08 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 11 Feb 2020 18:16:08 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: I agree with your opinion on those 3 names, Thierry. All other are fine. -yoctozepto wt., 11 lut 2020 o 17:50 Thierry Carrez napisał(a): > > Personally I could see how 'Wodewick' could be perceived as a joke on > speech-impaired people, and 'Whiskey'/'Whisky' could be seen as > promoting the alcohol-drinking culture in open source events. Also > 'Wuhan' is likely to not be neutral -- either seen as a positive > supportive move for our friends in China struggling with the virus, or > as a bit of a weird choice, but I'm not sure which. > > In summary: please voice concerns and preferences here, before the vote > starts! > > -- > Thierry Carrez (ttx) > From gmann at ghanshyammann.com Tue Feb 11 17:23:21 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 11 Feb 2020 11:23:21 -0600 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: Message-ID: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin wrote ---- > > > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin wrote: > > Patches are up..https://review.opendev.org/707204 > https://review.opendev.org/707054 > https://review.opendev.org/#/c/707062/ Do we still have py27 jobs on tripleo CI ? If so I think it is time to drop it completely as the deadline is 13th Feb. Because there might be more incompatible dependencies as py2 drop from them is speedup. NOTE: stable branch testing till rocky use the stable u-c now to avoid these issues. -gmann > > Greetings, > > Most of the jobs went RED a few minutes ago.Again it's related to python27. Nothing is going to pass CI until this is fixed. > See: https://etherpad.openstack.org/p/ruckroversprint21 > We'll update the list when we have the required patches in.Thanks From david.comay at gmail.com Tue Feb 11 18:36:27 2020 From: david.comay at gmail.com (David Comay) Date: Tue, 11 Feb 2020 13:36:27 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance Message-ID: Neil, > networking-calico is the code that integrates Project Calico [1] with > Neutron. It has been an OpenStack project for several years, but we, i.e. > its developers [2], would like now to remove it from OpenStack governance > and instead manage it like the other Project Calico projects under > https://github.com/projectcalico/. My primary concern which isn't really governance would be around making sure the components in `networking-calico` are kept in-sync with the parent classes it inherits from Neutron itself. Is there a plan to keep these in-sync together going forward? -------------- next part -------------- An HTML attachment was scrubbed... URL: From colleen at gazlene.net Tue Feb 11 18:42:10 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 11 Feb 2020 10:42:10 -0800 Subject: [keystone] YVR PTG planning Message-ID: Hi all, It's time to start planning the next PTG. Unlike Shanghai, I'm tentatively assuming that we will be able to hold this one in person. 
I've created an etherpad to start tracking it: https://etherpad.openstack.org/p/yvr-ptg-keystone Please add your name to the etherpad if you have a hunch you'll be attending and participating in keystone-related discussions, and please also add your topic ideas to the list. Colleen From colleen at gazlene.net Tue Feb 11 18:51:05 2020 From: colleen at gazlene.net (Colleen Murphy) Date: Tue, 11 Feb 2020 10:51:05 -0800 Subject: [ptl][keystone] cmurphy afk week of February 17 Message-ID: <4f8d428d-249b-4376-9347-a2c2be711965@www.fastmail.com> I will be on vacation the week of February 17 and will not be computering. Kristi Nikolla (knikolla) has graciously agreed to act as stand-in PTL for keystone during that time and will chair next week's meeting. If necessary I will be reachable by email or IRC but with some delay (hours or days). Colleen From Albert.Braden at synopsys.com Tue Feb 11 18:54:34 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 11 Feb 2020 18:54:34 +0000 Subject: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' In-Reply-To: <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> References: , <55B274D3-3D46-4BB4-97DC-7E3996D5189B@blizzard.com> Message-ID: Erik, thank you! I did this, and made some other changes based on your other email, and my errors are gone. I’m still working on isolating the specific changes that fixed the problem, but it definitely was one of your ideas. Many thanks! From: Erik Olof Gunnar Andersson Sent: Monday, February 10, 2020 9:54 AM To: Albert Braden Cc: openstack-discuss at lists.openstack.org Subject: Re: Galera and haproxy fail; keystone error 'Lost connection to MySQL server during query' Database Connections are kept open for one hour. Timeouts on haproxy needs to reflect that. Set timeout to at least 60 minutes. Sent from my iPhone On Feb 10, 2020, at 9:04 AM, Albert Braden > wrote:  Can anyone help with this galera issue? If I run galera without haproxy and point everything to 1 server, it seems to work fine, but we want to load-balance it with haproxy. After setting up haproxy we had lots of errors in nova, neutron and keystone, but I got rid of those by setting haproxy timeout values to 10 minutes. The remaining errors are in keystone. In /var/log/keystone/keystone-wsgi-public.log I see “Loaded 2 Fernet keys” every 5 minutes and this is frequently accompanied by a mysql error 'Lost connection to MySQL server during query' I tried changing various config items in haproxy, galera and keystone, but nothing seems to help. How can I fix these errors? Errors: https://f.perl.bot/p/pbq26k https://f.perl.bot/p/1tnh78 https://f.perl.bot/p/fuxmwo Haproxy config: https://f.perl.bot/p/gu2lil Mysql timeout values: https://f.perl.bot/p/i6l7tn Keystone config (minus commented lines): https://f.perl.bot/p/o6fdht -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Feb 11 19:07:57 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 11 Feb 2020 19:07:57 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> On Tue, 2020-02-11 at 13:36 -0500, David Comay wrote: > Neil, > > > networking-calico is the code that integrates Project Calico [1] with > > Neutron. It has been an OpenStack project for several years, but we, i.e. 
> > its developers [2], would like now to remove it from OpenStack governance > > and instead manage it like the other Project Calico projects under > > https://github.com/projectcalico/. > > My primary concern which isn't really governance would be around making > sure the components in `networking-calico` are kept in-sync with the parent > classes it inherits from Neutron itself. Is there a plan to keep these > in-sync together going forward? networking-calico should not be inheriting form neutron. netuon-lib is fine but the networking-* project should not import form neturon directly. From neil at tigera.io Tue Feb 11 19:31:28 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:31:28 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: Hi David, On Tue, Feb 11, 2020 at 6:43 PM David Comay wrote: > Neil, > > > networking-calico is the code that integrates Project Calico [1] with > > Neutron. It has been an OpenStack project for several years, but we, > i.e. > > its developers [2], would like now to remove it from OpenStack governance > > and instead manage it like the other Project Calico projects under > > https://github.com/projectcalico/. > > My primary concern which isn't really governance would be around making > sure the components in `networking-calico` are kept in-sync with the parent > classes it inherits from Neutron itself. Is there a plan to keep these > in-sync together going forward? > Thanks for this question. I think the answer is that it will be a planned effort, from now on, for us to support new OpenStack versions. From Kilo through to Rocky we have aimed (and managed, so far as I know) to maintain a unified networking-calico codebase that works with all of those versions. However our code does not support Python 3, and OpenStack master now requires Python 3, so we have to invest work in order to have even the possibility of working with Train and later. More generally, it has been frustrating, over the last 2 years or so, to track OpenStack master as the CI requires, because breaking changes (in other OpenStack code) are made frequently and we get hit by them when trying to fix or enhance something (typically unrelated) in networking-calico. With that in mind, my plan from now on is: - Continue to stay in touch with our users and customers, so we know what OpenStack versions they want us to support. - As we fix and enhance Calico-specific things, continue CI against the versions that we say we test with. (Currently that means Queens and Rocky - https://docs.projectcalico.org/getting-started/openstack/requirements) - As and when needed, work to support new versions. (Where the first package of work here will be Python 3 support.) WDYT? Does that sounds sensible? Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Feb 11 19:36:53 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 11 Feb 2020 19:36:53 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> On 2020-02-11 19:31:28 +0000 (+0000), Neil Jerram wrote: > Thanks for this question. I think the answer is that it will be a planned > effort, from now on, for us to support new OpenStack versions. 
From Kilo > through to Rocky we have aimed (and managed, so far as I know) to maintain > a unified networking-calico codebase that works with all of those > versions. However our code does not support Python 3, and OpenStack master > now requires Python 3, so we have to invest work in order to have even the > possibility of working with Train and later. [...] It's probably known to all involved in the conversation here, but just for clarity, the two releases immediately following Rocky (that is, Stein and Train) are still supposed to support Python 2.7. Only as of the Ussuri release (due out in a few more months) will OpenStack be Python3-only. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From neil at tigera.io Tue Feb 11 19:37:53 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:37:53 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> References: <7d4b95227ae3378c6ff04548927837e8dabf7792.camel@redhat.com> Message-ID: On Tue, Feb 11, 2020 at 7:08 PM Sean Mooney wrote: > On Tue, 2020-02-11 at 13:36 -0500, David Comay wrote: > > Neil, > > > > > networking-calico is the code that integrates Project Calico [1] with > > > Neutron. It has been an OpenStack project for several years, but we, > i.e. > > > its developers [2], would like now to remove it from OpenStack > governance > > > and instead manage it like the other Project Calico projects under > > > https://github.com/projectcalico/. > > > > My primary concern which isn't really governance would be around making > > sure the components in `networking-calico` are kept in-sync with the > parent > > classes it inherits from Neutron itself. Is there a plan to keep these > > in-sync together going forward? > networking-calico should not be inheriting form neutron. > netuon-lib is fine but the networking-* project should not import form > neturon directly. Right, mostly. I think we still inherit from some DHCP agent code that hasn't been lib-ified yet, but otherwise I think it's neutron-lib as you say. (It's difficult to be sure because our code is also written to work with older versions when there was more neutron and less neutron-lib, but that's an orthogonal point.) Best wishes, Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at tigera.io Tue Feb 11 19:43:38 2020 From: neil at tigera.io (Neil Jerram) Date: Tue, 11 Feb 2020 19:43:38 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> References: <20200211193652.bhzd4uxb64ggufpa@yuggoth.org> Message-ID: On Tue, Feb 11, 2020 at 7:37 PM Jeremy Stanley wrote: > On 2020-02-11 19:31:28 +0000 (+0000), Neil Jerram wrote: > > Thanks for this question. I think the answer is that it will be a > planned > > effort, from now on, for us to support new OpenStack versions. From Kilo > > through to Rocky we have aimed (and managed, so far as I know) to > maintain > > a unified networking-calico codebase that works with all of those > > versions. However our code does not support Python 3, and OpenStack > master > > now requires Python 3, so we have to invest work in order to have even > the > > possibility of working with Train and later. > [...] 
> > It's probably known to all involved in the conversation here, but > just for clarity, the two releases immediately following Rocky (that > is, Stein and Train) are still supposed to support Python 2.7. Only > as of the Ussuri release (due out in a few more months) will > OpenStack be Python3-only. > Sorry; thanks for this correction. (So I should have said "... even the possibility of working with Ussuri and later.") Neil -------------- next part -------------- An HTML attachment was scrubbed... URL: From flux.adam at gmail.com Tue Feb 11 21:17:45 2020 From: flux.adam at gmail.com (Adam Harwell) Date: Tue, 11 Feb 2020 13:17:45 -0800 Subject: [ospurge] looking for project owners / considering adoption In-Reply-To: <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> References: <342983ed-1d22-8f3a-3335-f153512ec2b2@catalyst.net.nz> <576E74EB-ED80-497F-9706-482FE0433208@gmail.com> <2ca832bb-4b71-b775-160a-e1868dcb21d2@citynetwork.eu> <533e6243-037e-bd06-5c9b-98c3316a47ab@citynetwork.eu> <2A9BD8D7-0F47-4B85-ABDB-366A32B8577B@gmail.com> Message-ID: Sounds like we can keep the tradition going! But seriously, I'm disappointed that I haven't been able to help look at this at all, since I've had a major priority shift since the summit when I volunteered to help. Such is the way of sponsored Openstack development, I guess. I should be able to attend another meeting in Vancouver if you do schedule one. --Adam On Tue, Feb 11, 2020, 00:41 Artem Goncharov wrote: > Hi, > > I am thinking to submit (not found possibility so far) a forum-like > session for that again in Vancouver, where I would present current status > of implementation and we can plan further steps. > Unfortunately I have still no confirmation from my employer, that I will > be allowed to go. > > Any ideas/objections? > > Regards, > Artem > > > On 3. Nov 2019, at 02:34, Tobias Rydberg > wrote: > > Hi, > > Sounds really good Artem! Will you be at the session at the Summit? If > not, I will bring the information from you to the session... > > Cheers, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > On 2019-11-02 16:26, Artem Goncharov wrote: > > Hi Tobby, > > As I mentioned, if Monty does not start work, I will start it in few weeks > latest (mid November). I need this now in my project, therefore I will be > definitely able to spend time on implementation in both SDK and OSC. > > P.S. mailing this to you, since I will not be on the Summit. > > Regards, > Artem > > On 2. Nov 2019, at 09:19, Tobias Rydberg > wrote: > > Hi, > > A Forum session is planned for this topic, Monday 11:40. Suites perfect to > continue the discussions there as well. > > > https://www.openstack.org/summit/shanghai-2019/summit-schedule/events/24407/project-resource-cleanup-followup > > BR, > Tobias > > Tobias Rydberg > Senior Developer > Twitter & IRC: tobberydberg > www.citynetwork.eu | www.citycloud.com > > INNOVATION THROUGH OPEN IT INFRASTRUCTURE > ISO 9001, 14001, 27001, 27015 & 27018 CERTIFIED > > On 2019-10-30 15:43, Artem Goncharov wrote: > > Hi Adam, > > Since I need this now as well I will start working on implementation how > it was agreed (in SDK and in OSC) during last summit by mid of November. > There is no need for discussing this further, it just need to be > implemented. Sad that we got no progress in half a year. > > Regards, > Artem (gtema). > > On 30. 
Oct 2019, at 14:26, Adam Harwell wrote: > > That's too bad that you won't be at the summit, but I think there may > still be some discussion planned about this topic. > > Yeah, I understand completely about priorities and such internally. Same > for me... It just happens that this IS priority work for us right now. :) > > > On Tue, Oct 29, 2019, 07:48 Adrian Turjak wrote: > >> My apologies I missed this email. >> >> Sadly I won't be at the summit this time around. There may be some public >> cloud focused discussions, and some of those often have this topic come up. >> Also if Monty from the SDK team is around, I'd suggest finding him and >> having a chat. >> >> I'll help if I can but we are swamped with internal work and I can't >> dedicate much time to do upstream work that isn't urgent. :( >> On 17/10/19 8:48 am, Adam Harwell wrote: >> >> That's interesting -- we have already started working to add features and >> improve ospurge, and it seems like a plenty useful tool for our needs, but >> I think I agree that it would be nice to have that functionality built into >> the sdk. I might be able to help with both, since one is immediately useful >> and we (like everyone) have deadlines to meet, and the other makes sense to >> me as a possible future direction that could be more widely supported. >> >> Will you or someone else be hosting and discussion about this at the >> Shanghai summit? I'll be there and would be happy to join and discuss. >> >> --Adam >> >> On Tue, Oct 15, 2019, 22:04 Adrian Turjak >> wrote: >> >>> I tried to get a community goal to do project deletion per project, but >>> we ended up deciding that a community goal wasn't ideal unless we did >>> build a bulk delete API in each service: >>> https://review.opendev.org/#/c/639010/ >>> https://etherpad.openstack.org/p/community-goal-project-deletion >>> https://etherpad.openstack.org/p/DEN-Deletion-of-resources >>> https://etherpad.openstack.org/p/DEN-Train-PublicCloudWG-brainstorming >>> >>> What we decided on, but didn't get a chance to work on, was building >>> into the OpenstackSDK OS-purge like functionality, as well as reporting >>> functionality (of all project resources to be deleted). That way we >>> could have per project per resource deletion logic, and all of that >>> defined in the SDK. >>> >>> I was up for doing some of the work, but ended up swamped with internal >>> work and just didn't drive or push for the deletion work upstream. >>> >>> If you want to do something useful, don't pursue OS-Purge, help us add >>> that official functionality to the SDK, and then we can push for bulk >>> deletion APIs in each project to make resource deletion more pleasant. >>> >>> I'd be happy to help with the work, and Monty on the SDK team will most >>> likely be happy to as well. :) >>> >>> Cheers, >>> Adrian >>> >>> On 1/10/19 11:48 am, Adam Harwell wrote: >>> > I haven't seen much activity on this project in a while, and it's been >>> > moved to opendev/x since the opendev migration... Who is the current >>> > owner of this project? Is there anyone who actually is maintaining it, >>> > or would mind if others wanted to adopt the project to move it forward? >>> > >>> > Thanks, >>> > --Adam Harwell >>> >> > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From satish.txt at gmail.com Tue Feb 11 23:07:22 2020 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 11 Feb 2020 18:07:22 -0500 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> Thank you so much for taking lead on it, openstack is great piece of software but missing some really good UI interface. Sent from my iPhone > On Feb 11, 2020, at 9:00 AM, Trang Le wrote: > > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > -------------- next part -------------- An HTML attachment was scrubbed... URL: From licanwei_cn at 163.com Wed Feb 12 02:31:56 2020 From: licanwei_cn at 163.com (licanwei) Date: Wed, 12 Feb 2020 10:31:56 +0800 (GMT+08:00) Subject: [Watcher]IRC meeting at 8:00 UTC today Message-ID: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> licanwei_cn 邮箱:licanwei_cn at 163.com 签名由 网易邮箱大师 定制 | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 29936982-8cf0-4d64-9d3e-4df171e58e00.jpg Type: image/jpeg Size: 4126 bytes Desc: not available URL: From veeraready at yahoo.co.in Wed Feb 12 09:38:18 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Wed, 12 Feb 2020 09:38:18 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> Message-ID: <2135735478.1625599.1581500298473@mail.yahoo.com> Hi Mdulko,Below are log files: Controller Log:http://paste.openstack.org/show/789457/cni : http://paste.openstack.org/show/789456/kubelet : http://paste.openstack.org/show/789453/ Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) Please let me know the issue Regards, Veera. On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: Hi, So from this run you need the kuryr-controller logs. Apparently the pod never got annotated with an information about the VIF. Thanks, Michał On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks for your support. > > As you mention i removed readinessProbe and > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > Attached kubelet and kuryr-cni logs. > > > > Regards, > Veera. 
> > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > Hm, nothing too troubling there too, besides Kubernetes not answering > on /healthz endpoint. Are those full logs, including the moment you > tried spawning a container there? It seems like you only pasted the > fragments with tracebacks regarding failures to read /healthz endpoint > of kube-apiserver. That is another problem you should investigate - > that causes Kuryr pods to restart. > > At first I'd disable the healthchecks (remove readinessProbe and > livenessProbe from Kuryr pod definitions) and try to get fresh set of > logs. > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > Hi mdulko, > > Please find kuryr-cni logs > > http://paste.openstack.org/show/789209/ > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > Hi, > > > > The logs you provided doesn't seem to indicate any issues. Please > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > Thanks, > > Michał > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > Hi, > > > I am trying to run kubelet in arm64 platform > > > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > 2.    Generated kuryr-cni-arm64 container. > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > My master node in x86 installed successfully using devstack > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > Please help me to fix the issue > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > Veera. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From info at dantalion.nl Wed Feb 12 09:58:50 2020 From: info at dantalion.nl (info at dantalion.nl) Date: Wed, 12 Feb 2020 10:58:50 +0100 Subject: [Watcher]IRC meeting at 8:00 UTC today In-Reply-To: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> References: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> Message-ID: <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> Hello Lican, Sorry but I am unable to attend at such short notice as in my timezone I won't be awake unless it is for the Watcher meeting. To be able to adjust my transit schedule I will have to know a day in advance. Hope to be there next time. Kind regards, Corne Lukken On 2/12/20 3:31 AM, licanwei wrote: > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > From pradeepantil at gmail.com Wed Feb 12 10:36:33 2020 From: pradeepantil at gmail.com (Pradeep Antil) Date: Wed, 12 Feb 2020 16:06:33 +0530 Subject: How to Migrate OpenStack Instance from one Tenant to another Message-ID: Hi Folks, I am using Mitaka OpenStack and VLAN as network type driver for tenant VMs. Initially i have provisioned all the VMs (Including Customer VMs) inside the admin. I want to segregate my internal VMs and Customer VMs, for this to accomplish i have created different tenant names and now i want customer VMs from admin tenant to Customer tenant, On Google , i have a found a script which migrates the VMs but that script doesn't update network and security group details, Can anyone help me and suggest what steps needs to executed to update network , security group and attached Volumes ..? 
Below is the exact script:

#!/bin/bash
for i
do
  if [ "$i" != "$1" ]; then
    echo "moving instance id " $i " to project id" $1;
    mysql -uroot -s -N <

From stig.openstack at telfer.org Wed Feb 12 10:43:24 2020 From: stig.openstack at telfer.org (Stig Telfer) Date: Wed, 12 Feb 2020 10:43:24 +0000 Subject: [scientific-sig] No Scientific SIG meeting today Message-ID: <64733F3E-5484-4E36-955E-20DB62DF484D@telfer.org> Hi All - Unfortunately there will not be a Scientific SIG IRC meeting today, owing to other commitments and availability. Apologies, Stig

From rdopiera at redhat.com Wed Feb 12 11:05:40 2020 From: rdopiera at redhat.com (Radek Dopieralski) Date: Wed, 12 Feb 2020 12:05:40 +0100 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> <5B0D5C6D-E7CD-49C1-A5BA-B9B744499F6F@gmail.com> Message-ID: I would like to note here that everybody is equally welcome to contribute, not just Trang Le. If you have ideas for improving the software, Satish, we would gladly welcome your patches. On Wed, Feb 12, 2020 at 12:14 AM Satish Patel wrote: > Thank you so much for taking lead on it, openstack is great piece of > software but missing some really good UI interface. > > Sent from my iPhone > > On Feb 11, 2020, at 9:00 AM, Trang Le wrote: > > Dear OpenStack Discussion Team, > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > All the best, > Trang > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > -- Radomir Dopieralski -------------- next part -------------- An HTML attachment was scrubbed... URL:

From jean-philippe at evrard.me Wed Feb 12 11:57:41 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Wed, 12 Feb 2020 12:57:41 +0100 Subject: [TC] W release naming In-Reply-To: References: Message-ID: On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: > Personally I could see how 'Wodewick' could be perceived as a joke on > speech-impaired people I agree that at first sight this looks very bad. Which should be taken into consideration. However, an extra fact/trivia for Wodewick (for those who didn't know, like I was, as it's not mentioned on the wiki): Terry Jones (director of the movie) had himself rhotacism. So IMO, the presence of the (bad?) joke in the movie itself is a proof of Jones' openness.
Regards, Jean-Philippe Evrard (evrardjp) From mdulko at redhat.com Wed Feb 12 12:01:17 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Wed, 12 Feb 2020 13:01:17 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <2135735478.1625599.1581500298473@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> Message-ID: The controller logs are from an hour later than the CNI one. The issues seems not to be present. Isn't your controller still restarting? If so try to use -p option on `kubectl logs` to get logs from previous run. On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > Hi Mdulko, > Below are log files: > > Controller Log:http://paste.openstack.org/show/789457/ > cni : http://paste.openstack.org/show/789456/ > kubelet : http://paste.openstack.org/show/789453/ > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > Please let me know the issue > > > > Regards, > Veera. > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > Hi, > > So from this run you need the kuryr-controller logs. Apparently the pod > never got annotated with an information about the VIF. > > Thanks, > Michał > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > Hi mdulko, > > Thanks for your support. > > > > As you mention i removed readinessProbe and > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > on /healthz endpoint. Are those full logs, including the moment you > > tried spawning a container there? It seems like you only pasted the > > fragments with tracebacks regarding failures to read /healthz endpoint > > of kube-apiserver. That is another problem you should investigate - > > that causes Kuryr pods to restart. > > > > At first I'd disable the healthchecks (remove readinessProbe and > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > logs. > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Please find kuryr-cni logs > > > http://paste.openstack.org/show/789209/ > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > Hi, > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > Thanks, > > > Michał > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > Hi, > > > > I am trying to run kubelet in arm64 platform > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > 2. Generated kuryr-cni-arm64 container. > > > > 3. 
my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > Please help me to fix the issue > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > From thierry at openstack.org Wed Feb 12 14:52:30 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 12 Feb 2020 15:52:30 +0100 Subject: [largescale-sig] Meeting summary and next actions Message-ID: Hi everyone, The Large Scale SIG held a meeting today. We had a small attendance, and blame holidays for that. But you can catch up with the summary and logs of the meeting at: http://eavesdrop.openstack.org/meetings/large_scale_sig/2020/large_scale_sig.2020-02-12-09.00.html We started with a presentation by oneswig of the integration of HAProxy telemetry via Monasca on a Kolla-Ansible-based deployment, and how that can help us derive latency information based on the API server called, the HTTP method and the HTTP code result across all HAProxy-fronted servers in a deployment. Slides are available at: http://www.stackhpc.com/resources/HAproxy-telemetry.pdf Regarding progress on our "Documenting large scale operations" goal, some doc links were proposed and all seem relevant to our collection. On the "Scaling within one cluster, and instrumentation of the bottlenecks" goal, the draft of the oslo.metrics spec got a first round of comments. masahito will refresh it based on those initial comments, but it would be good for SIG members to also weigh in at that early stage. In other news, we discussed the SIG's involvement in the "Large-scale Usage of Open Source Infrastructure Software" track at the OpenDev event in Vancouver in June. We plan to have SIG members directly involved and have discussions around our two goals for 2020. TODOs between now and next meeting: - masahito to update spec based on initial feedback - everyone to review and comment on https://review.opendev.org/#/c/704733/ The next meeting will happen on February 26, at 9:00 UTC on #openstack-meeting. Cheers, -- Thierry Carrez (ttx) From rosmaita.fossdev at gmail.com Wed Feb 12 15:38:44 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 12 Feb 2020 10:38:44 -0500 Subject: [cinder][ptg] Victoria (Vancouver 2020) PTG headcount Message-ID: I need to get a rough headcount to submit to the PTG planning committee. Please take a minute to put your name on this etherpad if you think/hope/plan to attend (no commitment--we just want to increase the likelihood that we'll have enough seats for the Cinder team): https://etherpad.openstack.org/p/cinder-victoria-ptg-planning While you're on that etherpad, if you have a topic idea already, feel free to add it. I need the headcount info by 19:00 UTC on 28 February. (But do it now before you forget.) 
thanks, brian From david.comay at gmail.com Wed Feb 12 16:52:40 2020 From: david.comay at gmail.com (David Comay) Date: Wed, 12 Feb 2020 11:52:40 -0500 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: > >> My primary concern which isn't really governance would be around making >> sure the components in `networking-calico` are kept in-sync with the parent >> classes it inherits from Neutron itself. Is there a plan to keep these >> in-sync together going forward? >> > > Thanks for this question. I think the answer is that it will be a planned > effort, from now on, for us to support new OpenStack versions. From Kilo > through to Rocky we have aimed (and managed, so far as I know) to maintain > a unified networking-calico codebase that works with all of those > versions. However our code does not support Python 3, and OpenStack master > now requires Python 3, so we have to invest work in order to have even the > possibility of working with Train and later. More generally, it has been > frustrating, over the last 2 years or so, to track OpenStack master as the > CI requires, because breaking changes (in other OpenStack code) are made > frequently and we get hit by them when trying to fix or enhance something > (typically unrelated) in networking-calico. > I don't know the history here around `calico-dhcp-agent` but has there been previous efforts to propose integrating the changes made to it into `neutron-dhcp-agent`? It seems the best solution would be to make the functionality provided by the former into the latter rather than relying on parent classes from the former. I suspect there are details here on why that might be difficult but it seems solving that would be helpful in the long-term. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zbitter at redhat.com Wed Feb 12 17:07:26 2020 From: zbitter at redhat.com (Zane Bitter) Date: Wed, 12 Feb 2020 12:07:26 -0500 Subject: [TC] W release naming In-Reply-To: References: Message-ID: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> On 12/02/20 6:57 am, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >> Personally I could see how 'Wodewick' could be perceived as a joke on >> speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. Also another fun fact: if you're explaining, you're losing ;) > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. TIL, thanks! Another reason to love the movie, while also not choosing this as the release name. From gmann at ghanshyammann.com Wed Feb 12 17:24:39 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 12 Feb 2020 11:24:39 -0600 Subject: [qa][ptg] QA PTG, Vancouver Planning Message-ID: <1703a6e6b61.108eda5ad575788.8856924692503612142@ghanshyammann.com> Hello Everyone, I have started the Vancouver PTG planning for QA[1]. The very first step is to reserve the space based on number of attendees. For that, please write your name in the etherpad. I know it might not be confirmed yet or the travel process has not started, still, you can write your probability of attending. 
It is not necessary to be 100% in QA but if you are planning to spend some time in QA PTG, still write your name with comment. [1] https://etherpad.openstack.org/p/qa-victoria-ptg -gmann From Albert.Braden at synopsys.com Wed Feb 12 17:52:03 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 12 Feb 2020 17:52:03 +0000 Subject: [TC] W release naming In-Reply-To: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> References: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> Message-ID: A few years ago I would have thought that this was nonsense. I still do, but nonsense is the order of the day. We have to be careful to not give the professionally offended an opportunity. -----Original Message----- From: Zane Bitter Sent: Wednesday, February 12, 2020 9:07 AM To: openstack-discuss at lists.openstack.org Subject: Re: [TC] W release naming On 12/02/20 6:57 am, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >> Personally I could see how 'Wodewick' could be perceived as a joke on >> speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. Also another fun fact: if you're explaining, you're losing ;) > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. TIL, thanks! Another reason to love the movie, while also not choosing this as the release name. From nate.johnston at redhat.com Wed Feb 12 17:56:43 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 12 Feb 2020 12:56:43 -0500 Subject: [TC] W release naming In-Reply-To: References: Message-ID: <20200212175643.fjdsn2lr2tq3lbcx@firewall> On Wed, Feb 12, 2020 at 12:57:41PM +0100, Jean-Philippe Evrard wrote: > On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: > > Personally I could see how 'Wodewick' could be perceived as a joke on > > speech-impaired people > > I agree that at first sight this looks very bad. Which should be taken > into consideration. However, an extra fact/trivia for Wodewick (for > those who didn't know, like I was, as it's not mentioned on the wiki): > Terry Jones (director of the movie) had himself rhotacism. > > So IMO, the presence of the (bad?) joke in the movie itself is a proof > of Jones' openness. One thing that the Internet - social media particularly but not exclusively - does exceptionally well is to present ideas, concepts, or occurrences with all of the surrounding context stripped away. I think that the only way to be successful is to select names that stand solid without any context and judged from multiple cultural perspectives. I personally object to Wuhan for this reason. Even if we were to say that we choose the name to honor the victims of this tragedy, that context would be stripped away in the transmission and there would be plenty of people who would come to the conclusion that we were making light of what is happening, or worse that companies that build products based on OpenStack would be making profits from that name. These are terrible thoughts, for sure, but regrettably we have to look at how people could percieve it, not how we mean it to be. 
Nate From neil at tigera.io Wed Feb 12 18:03:37 2020 From: neil at tigera.io (Neil Jerram) Date: Wed, 12 Feb 2020 18:03:37 +0000 Subject: [infra][neutron] Removing networking-calico from OpenStack governance In-Reply-To: References: Message-ID: On Wed, Feb 12, 2020 at 4:52 PM David Comay wrote: > > >>> My primary concern which isn't really governance would be around making >>> sure the components in `networking-calico` are kept in-sync with the parent >>> classes it inherits from Neutron itself. Is there a plan to keep these >>> in-sync together going forward? >>> >> >> Thanks for this question. I think the answer is that it will be a >> planned effort, from now on, for us to support new OpenStack versions. >> From Kilo through to Rocky we have aimed (and managed, so far as I know) to >> maintain a unified networking-calico codebase that works with all of those >> versions. However our code does not support Python 3, and OpenStack master >> now requires Python 3, so we have to invest work in order to have even the >> possibility of working with Train and later. More generally, it has been >> frustrating, over the last 2 years or so, to track OpenStack master as the >> CI requires, because breaking changes (in other OpenStack code) are made >> frequently and we get hit by them when trying to fix or enhance something >> (typically unrelated) in networking-calico. >> > > I don't know the history here around `calico-dhcp-agent` but has there > been previous efforts to propose integrating the changes made to it into > `neutron-dhcp-agent`? It seems the best solution would be to make the > functionality provided by the former into the latter rather than relying on > parent classes from the former. I suspect there are details here on why > that might be difficult but it seems solving that would be helpful in the > long-term. > No efforts that I know of. The difference is that calico-dhcp-agent is driven by information in the Calico etcd datastore, where neutron-dhcp-agent is driven via a message queue from the Neutron server. I think it has improved since, but when we originated calico-dhcp-agent a few years ago, the message queue wasn't scaling very well to hundreds of nodes. We can certainly keep reintegrating in mind as a possibility. -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Wed Feb 12 18:22:56 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Wed, 12 Feb 2020 10:22:56 -0800 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Also the contributor guide: https://docs.openstack.org/contributors/ And if you have any questions about getting started let me or anyone else in the First Contact SIG[1] know! -Kendall [1] https://wiki.openstack.org/wiki/First_Contact_SIG On Tue, Feb 11, 2020 at 8:31 AM Albert Braden wrote: > Hi Trang, > > > > This document has been useful for me as I work on becoming an Openstack > contributor. > > > > https://docs.openstack.org/infra/manual/developers.html > > > > *From:* Trang Le > *Sent:* Tuesday, February 11, 2020 2:25 AM > *To:* openstack-discuss at lists.openstack.org > *Subject:* [UX] Contributing to OpenStack's User Interface > > > > Dear OpenStack Discussion Team, > > > > I am Trang Le, a student at UC Berkeley Extension interested in > contributing to OpenStack’s UX/UI. 
I am currently pursuing a Professional > Diploma in UX/UI and would love to contribute to an open-source project and > work with experienced engineers. Before, I have also worked at Fujitsu > Vietnam in the open-source team, where I learned a lot about OpenStack > through training. Let me know if my message could be of interest to you, > and I would be happy to discuss further. > > > > All the best, > > Trang > > > > Trang Le > UC Berkeley Extension - Professional Diploma - UX/UI Design > Smith College - Bachelors of Arts - Mathematics and Statistics > Phone: +1 (650) 300 9007 > Github: https://github.com/trangreyle > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Albert.Braden at synopsys.com Wed Feb 12 18:34:31 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Wed, 12 Feb 2020 18:34:31 +0000 Subject: [UX] Contributing to OpenStack's User Interface In-Reply-To: References: <38699C49-3EB0-4B21-B27C-6A46FC8DB8B8@berkeley.edu> Message-ID: Also, when I get stuck somewhere in the signup process, I’ve found help on IRC, on the Freenode network, in #openstack-mentoring From: Kendall Nelson Sent: Wednesday, February 12, 2020 10:23 AM To: Albert Braden Cc: Trang Le ; openstack-discuss at lists.openstack.org Subject: Re: [UX] Contributing to OpenStack's User Interface Also the contributor guide: https://docs.openstack.org/contributors/ And if you have any questions about getting started let me or anyone else in the First Contact SIG[1] know! -Kendall [1] https://wiki.openstack.org/wiki/First_Contact_SIG On Tue, Feb 11, 2020 at 8:31 AM Albert Braden > wrote: Hi Trang, This document has been useful for me as I work on becoming an Openstack contributor. https://docs.openstack.org/infra/manual/developers.html From: Trang Le > Sent: Tuesday, February 11, 2020 2:25 AM To: openstack-discuss at lists.openstack.org Subject: [UX] Contributing to OpenStack's User Interface Dear OpenStack Discussion Team, I am Trang Le, a student at UC Berkeley Extension interested in contributing to OpenStack’s UX/UI. I am currently pursuing a Professional Diploma in UX/UI and would love to contribute to an open-source project and work with experienced engineers. Before, I have also worked at Fujitsu Vietnam in the open-source team, where I learned a lot about OpenStack through training. Let me know if my message could be of interest to you, and I would be happy to discuss further. All the best, Trang Trang Le UC Berkeley Extension - Professional Diploma - UX/UI Design Smith College - Bachelors of Arts - Mathematics and Statistics Phone: +1 (650) 300 9007 Github: https://github.com/trangreyle -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Wed Feb 12 18:50:46 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Wed, 12 Feb 2020 12:50:46 -0600 Subject: [TC] W release naming In-Reply-To: <20200212175643.fjdsn2lr2tq3lbcx@firewall> References: <20200212175643.fjdsn2lr2tq3lbcx@firewall> Message-ID: On 2/12/2020 11:56 AM, Nate Johnston wrote: > On Wed, Feb 12, 2020 at 12:57:41PM +0100, Jean-Philippe Evrard wrote: >> On Tue, 2020-02-11 at 17:39 +0100, Thierry Carrez wrote: >>> Personally I could see how 'Wodewick' could be perceived as a joke on >>> speech-impaired people >> I agree that at first sight this looks very bad. Which should be taken >> into consideration. 
However, an extra fact/trivia for Wodewick (for >> those who didn't know, like I was, as it's not mentioned on the wiki): >> Terry Jones (director of the movie) had himself rhotacism. >> >> So IMO, the presence of the (bad?) joke in the movie itself is a proof >> of Jones' openness. > One thing that the Internet - social media particularly but not exclusively - > does exceptionally well is to present ideas, concepts, or occurrences with all > of the surrounding context stripped away. I think that the only way to be > successful is to select names that stand solid without any context and judged > from multiple cultural perspectives. > > I personally object to Wuhan for this reason. Even if we were to say that we > choose the name to honor the victims of this tragedy, that context would be > stripped away in the transmission and there would be plenty of people who would > come to the conclusion that we were making light of what is happening, or worse > that companies that build products based on OpenStack would be making profits > from that name. These are terrible thoughts, for sure, but regrettably we have > to look at how people could percieve it, not how we mean it to be. > > Nate > Nate, I am in agreement that the name needs to hold up without context and under scrutiny from multiple perspectives.  So, Wuhan is not a good choice.  It also is dangerous to use a name associated with a still evolving situation. All, I think we need to choose a name that doesn't require explanation.  I think that Wodewick fails that test. Jay From whayutin at redhat.com Wed Feb 12 19:39:54 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 12 Feb 2020 12:39:54 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann wrote: > > ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < > whayutin at redhat.com> wrote ---- > > > > > > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin > wrote: > > > > Patches are up..https://review.opendev.org/707204 > > https://review.opendev.org/707054 > > https://review.opendev.org/#/c/707062/ > > Do we still have py27 jobs on tripleo CI ? If so I think it is time to > drop it > completely as the deadline is 13th Feb. Because there might be more > incompatible > dependencies as py2 drop from them is speedup. > > NOTE: stable branch testing till rocky use the stable u-c now to avoid > these issues. > > -gmann > > > > > > Greetings, > > > > Most of the jobs went RED a few minutes ago.Again it's related to > python27. Nothing is going to pass CI until this is fixed. > > See: https://etherpad.openstack.org/p/ruckroversprint21 > > We'll update the list when we have the required patches in.Thanks > > OK.. folks, Thanks for your patience! The final patch to restore sanity [1] is in the gate. Most everything was working since yesterday unless you were patching tripleo-common. This is the final item we've identified. Be safe out there! CI == Green [1] https://review.opendev.org/#/c/707330/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rosmaita.fossdev at gmail.com Wed Feb 12 22:10:56 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 12 Feb 2020 17:10:56 -0500 Subject: [operators][cinder] RSD and Swordfish drivers being abandoned Message-ID: <0eedacda-ae16-b64c-57d7-32f169955c45@gmail.com> Greetings operators, The current maintainers of the Intel Rack Scale Design (RSD) driver announced at this week's Cinder meeting that the driver will be marked as UNSUPPORTED in Ussuri and that the driver will no longer be maintained. The Swordfish driver, which was under active development, but not yet merged to Cinder, is also being abandoned as well as the os-brick modifications to support NVMeoF. Finally, the third-party CI system for the RSD driver is being dismantled. We want you to be aware of these changes, but more optimistically, we are hoping that there would be some interest in the community in picking these up. If you are or know of such a person, please contact us right away so that we can get a technology transfer going before the current development group has been dispersed to other projects. For more information, please contact: Ivens Zambrano (ivens.zambrano at intel.com / IRC:IvensZambrano) David Shaughnessy (david.shaughnessy at intel.com / IRC: davidsha) thanks, brian From whayutin at redhat.com Thu Feb 13 03:51:15 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 12 Feb 2020 20:51:15 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Wed, Feb 12, 2020 at 12:39 PM Wesley Hayutin wrote: > > > On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann > wrote: > >> >> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < >> whayutin at redhat.com> wrote ---- >> > >> > >> > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin >> wrote: >> > >> > Patches are up..https://review.opendev.org/707204 >> > https://review.opendev.org/707054 >> > https://review.opendev.org/#/c/707062/ >> >> Do we still have py27 jobs on tripleo CI ? If so I think it is time to >> drop it >> completely as the deadline is 13th Feb. Because there might be more >> incompatible >> dependencies as py2 drop from them is speedup. >> >> NOTE: stable branch testing till rocky use the stable u-c now to avoid >> these issues. >> >> -gmann >> >> >> > >> > Greetings, >> > >> > Most of the jobs went RED a few minutes ago.Again it's related to >> python27. Nothing is going to pass CI until this is fixed. >> > See: https://etherpad.openstack.org/p/ruckroversprint21 >> > We'll update the list when we have the required patches in.Thanks >> >> > OK.. folks, > Thanks for your patience! > The final patch to restore sanity [1] is in the gate. Most everything was > working since yesterday unless you were patching tripleo-common. This is > the final item we've identified. > > Be safe out there! > CI == Green > > Well.. a new upstream centos-7 image broke the last patch before it merged. So.. now there are two patches that need to merge. https://review.opendev.org/#/c/707525 https://review.opendev.org/#/c/707330/ Thank you lord for bounty of work you have provided. We are blessed. > > [1] https://review.opendev.org/#/c/707330/ > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From veeraready at yahoo.co.in Thu Feb 13 06:52:53 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 13 Feb 2020 06:52:53 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> Message-ID: <1317590151.1993706.1581576773856@mail.yahoo.com> Thanks mdulko,Issue is in openvswitch, iam getting following error in switchd logs./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message Do we need to patch openvswitch to support above flow? My ovs version [root at node-2088 ~]# ovs-vsctl --versionovs-vsctl (Open vSwitch) 2.11.0DB Schema 7.16.1 Regards, Veera. On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: The controller logs are from an hour later than the CNI one. The issues seems not to be present. Isn't your controller still restarting? If so try to use -p option on `kubectl logs` to get logs from previous run. On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > Hi Mdulko, > Below are log files: > > Controller Log:http://paste.openstack.org/show/789457/ > cni : http://paste.openstack.org/show/789456/ > kubelet : http://paste.openstack.org/show/789453/ > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > Please let me know the issue > > > > Regards, > Veera. > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > Hi, > > So from this run you need the kuryr-controller logs. Apparently the pod > never got annotated with an information about the VIF. > > Thanks, > Michał > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > Hi mdulko, > > Thanks for your support. > > > > As you mention i removed readinessProbe and > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > Regards, > > Veera. > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > on /healthz endpoint. Are those full logs, including the moment you > > tried spawning a container there? It seems like you only pasted the > > fragments with tracebacks regarding failures to read /healthz endpoint > > of kube-apiserver. That is another problem you should investigate - > > that causes Kuryr pods to restart. > > > > At first I'd disable the healthchecks (remove readinessProbe and > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > logs. > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Please find kuryr-cni logs > > > http://paste.openstack.org/show/789209/ > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > Hi, > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > Thanks, > > > Michał > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > Hi, > > > > I am trying to run kubelet in arm64 platform > > > > 1. 
   Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > 2.    Generated kuryr-cni-arm64 container. > > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > Please help me to fix the issue > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Thu Feb 13 07:12:42 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Thu, 13 Feb 2020 07:12:42 +0000 (UTC) Subject: [openstack-dev][kuryr] Kubelet error in ARM64 References: <1417617907.1999997.1581577962230.ref@mail.yahoo.com> Message-ID: <1417617907.1999997.1581577962230@mail.yahoo.com> Hi I am getting following error in kubelet in arm64 platform Feb 13 07:16:32 node-2088 kubelet[16776]: E0213 07:16:32.493458   16776 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache/index0/size: no such file or directoryFeb 13 07:16:32 node-2088 kubelet[16776]: I0213 07:16:32.493600   16776 gce.go:44] Error while reading product_name: open /sys/class/dmi/id/product_name: no such file or directory Help me in fixing above issue. Regards, Veera. -------------- next part -------------- An HTML attachment was scrubbed... URL: From guoyongxhzhf at 163.com Thu Feb 13 08:20:28 2020 From: guoyongxhzhf at 163.com (guoyongxhzhf at 163.com) Date: Thu, 13 Feb 2020 16:20:28 +0800 Subject: [keystone][middleware] why can not middle-ware support redis? Message-ID: <289D1C6884E043999D843F93BED4332E@guoyongPC> In my environment, there is a redis. I want the keystone client in nova to cache token and use redis as a cache server. But after reading keystone middle-ware code, now keystone middle-ware just support swift and memcached server. Can I just modify keystone middleware code to use dogpile.cache.redis directly? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 13 09:01:48 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 13 Feb 2020 10:01:48 +0100 Subject: [openstack-dev][kuryr] Kubelet error in ARM64 In-Reply-To: <1417617907.1999997.1581577962230@mail.yahoo.com> References: <1417617907.1999997.1581577962230.ref@mail.yahoo.com> <1417617907.1999997.1581577962230@mail.yahoo.com> Message-ID: <7dcd512afa1e1c879fe66a74908f3a0c7adeeedb.camel@redhat.com> This is clearly a Kubernetes issue, you should try Kubernetes help resources [1]. [1] https://kubernetes.io/docs/tasks/debug-application-cluster/troubleshooting/#help-my-question-isn-t-covered-i-need-help-now On Thu, 2020-02-13 at 07:12 +0000, VeeraReddy wrote: > Hi > I am getting following error in kubelet in arm64 platform > > Feb 13 07:16:32 node-2088 kubelet[16776]: E0213 07:16:32.493458 16776 machine.go:288] failed to get cache information for node 0: open /sys/devices/system/cpu/cpu0/cache/index0/size: no such file or directory > Feb 13 07:16:32 node-2088 kubelet[16776]: I0213 07:16:32.493600 16776 gce.go:44] Error while reading product_name: open /sys/class/dmi/id/product_name: no such file or directory > > Help me in fixing above issue. 
> > > Regards, > Veera. From licanwei_cn at 163.com Thu Feb 13 09:01:52 2020 From: licanwei_cn at 163.com (licanwei) Date: Thu, 13 Feb 2020 17:01:52 +0800 (GMT+08:00) Subject: [Watcher]IRC meeting at 8:00 UTC today In-Reply-To: <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> References: <6840d2b1.a616.170373d1b23.Coremail.licanwei_cn@163.com> <07ec26af-3230-2960-eaa2-9f5e169547e2@dantalion.nl> Message-ID: <7190d46f.2b59.1703dc876c2.Coremail.licanwei_cn@163.com> Hi, next time i will send the mail one day in advance. hope to meet you next time! Thanks licanwei | | licanwei_cn | | 邮箱:licanwei_cn at 163.com | 签名由 网易邮箱大师 定制 On 02/12/2020 17:58, info at dantalion.nl wrote: Hello Lican, Sorry but I am unable to attend at such short notice as in my timezone I won't be awake unless it is for the Watcher meeting. To be able to adjust my transit schedule I will have to know a day in advance. Hope to be there next time. Kind regards, Corne Lukken On 2/12/20 3:31 AM, licanwei wrote: > > > > > licanwei_cn > 邮箱:licanwei_cn at 163.com > 签名由 网易邮箱大师 定制 > > | | > licanwei_cn > | > | > 邮箱:licanwei_cn at 163.com > | > > 签名由 网易邮箱大师 定制 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Thu Feb 13 09:07:06 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Thu, 13 Feb 2020 10:07:06 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1317590151.1993706.1581576773856@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> Message-ID: <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure if downgrading could help. This is a Neutron issue and I don't have much experience on such a low level. You can try asking on IRC, e.g. on #openstack-neutron. On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > Thanks mdulko, > Issue is in openvswitch, iam getting following error in switchd logs > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > Do we need to patch openvswitch to support above flow? > > My ovs version > > [root at node-2088 ~]# ovs-vsctl --version > ovs-vsctl (Open vSwitch) 2.11.0 > DB Schema 7.16.1 > > > > Regards, > Veera. > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > The controller logs are from an hour later than the CNI one. The issues > seems not to be present. > > Isn't your controller still restarting? If so try to use -p option on > `kubectl logs` to get logs from previous run. > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > Hi Mdulko, > > Below are log files: > > > > Controller Log:http://paste.openstack.org/show/789457/ > > cni : http://paste.openstack.org/show/789456/ > > kubelet : http://paste.openstack.org/show/789453/ > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > Please let me know the issue > > > > > > > > Regards, > > Veera. 
> > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > Hi, > > > > So from this run you need the kuryr-controller logs. Apparently the pod > > never got annotated with an information about the VIF. > > > > Thanks, > > Michał > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Thanks for your support. > > > > > > As you mention i removed readinessProbe and > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > on /healthz endpoint. Are those full logs, including the moment you > > > tried spawning a container there? It seems like you only pasted the > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > of kube-apiserver. That is another problem you should investigate - > > > that causes Kuryr pods to restart. > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > logs. > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Please find kuryr-cni logs > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > Hi, > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > Thanks, > > > > Michał > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > Hi, > > > > > I am trying to run kubelet in arm64 platform > > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > 2. Generated kuryr-cni-arm64 container. > > > > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > > > > > > From merlin.blom at bertelsmann.de Thu Feb 13 09:49:17 2020 From: merlin.blom at bertelsmann.de (merlin.blom at bertelsmann.de) Date: Thu, 13 Feb 2020 09:49:17 +0000 Subject: [neutron][metering] Dublicated Neutron Meter Rules in different projects kills metering Message-ID: I want to use Neutron Meter with gnocchi to report the egress bandwidth used for public traffic. 
So I created neutron meter labels and neutron meter rules to include all ipv4 traffic: +-------------------+------------------------------------------------------- ---------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------- ---------------------------------------------+ | direction | egress | | id | f2c9b9a8-0af3-40a5-a718-6e841bad111d | | is_excluded | False | | location | cloud='', project.domain_id='default', project.domain_name=, | | | project.id='80120067cd7949908e44dce45aeb7712', project.name='billing', region_name='xxx', | | | zone= | | metering_label_id | d0068fc8-4a3e-4108-aa11-e3c171d4d1e1 | | name | None | | project_id | None | | remote_ip_prefix | 0.0.0.0/0 | +-------------------+------------------------------------------------------- ---------------------------------------------+ And excluded all private nets: +-------------------+------------------------------------------------------- ---------------------------------------------+ | Field | Value | +-------------------+------------------------------------------------------- ---------------------------------------------+ | direction | egress | | id | 838c9631-665b-42b6-b1e9-539983a38573 | | is_excluded | True | | location | cloud='', project.domain_id='default', project.domain_name=, | | | project.id='80120067cd7949908e44dce45aeb7712', project.name='billing', region_name='xxx', | | | zone= | | metering_label_id | 435652e6-e985-4351-a31a-954bace9eea0 | | name | None | | project_id | None | | remote_ip_prefix | 10.0.0.0/8 | +-------------------+------------------------------------------------------- ---------------------------------------------+ It works fine for just one project but if I apply it to all projects it fails and no measures are recorded in gnocchi. The neutron-metering-agent.log shows the following warning: Feb 13 09:14:18 xxx_host neutron-metering-agent: 2020-02-13 09:14:09.648 4732 WARNING neutron.agent.linux.iptables_manager [req-4c38f1f5-2db4-4d4a-9c1f-9585b1b50427 65c6d4bdcbc7469a910f6361b7f70f27 80120067cd7949908e44dce45aeb7712 - - -] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A neutron-meter-r-28155d45-d16 -s 10.0.0.0/8 -o qg-c61bafef-ea -j RETURN I would expect that it is possible to have similar rules for different projects. What do you think? Is it part of the rule creation code? In the iptables_manager code the function is criticized: https://github.com/openstack/neutron/blob/86e4f141159072421a19080455caba1b0e fef776/neutron/agent/linux/iptables_manager.py # TODO(kevinbenton): remove this function and the next one. They are # just oversized brooms to sweep bugs under the rug!!! We generate the # rules and we shouldn't be generating duplicates. def _weed_out_duplicates(line): if line in seen_lines: thing = 'chain' if line.startswith(':') else 'rule' LOG.warning("Duplicate iptables %(thing)s detected. This " "may indicate a bug in the iptables " "%(thing)s generation code. Line: %(line)s", {'thing': thing, 'line': line}) return False seen_lines.add(line) # Leave it alone return True https://bugs.launchpad.net/neutron/+bug/1863068 Merlin Blom -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
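One way to see what actually landed in the data plane is to dump the metering chains inside the affected router namespace on the network node. This is only an illustrative sketch, not taken from the setup above: <router-id> must be replaced with the real router UUID, and the chain names simply follow the neutron-meter-l-/neutron-meter-r-<label-id> pattern visible in the warning line quoted earlier.

  # list router namespaces on the network node
  sudo ip netns list | grep qrouter

  # dump every neutron-meter-* chain and rule for one router;
  # a missing "-s <cidr>" line here means that label's rule never
  # made it into iptables (e.g. dropped by the dedup path above)
  sudo ip netns exec qrouter-<router-id> iptables -S | grep neutron-meter

  # per-chain packet/byte counters, useful to compare against what
  # the metering agent reports to gnocchi
  sudo ip netns exec qrouter-<router-id> iptables -L -n -v -x | grep -A3 'neutron-meter-l-'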
Name: smime.p7s Type: application/pkcs7-signature Size: 5195 bytes Desc: not available URL: From lyarwood at redhat.com Thu Feb 13 09:51:02 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 13 Feb 2020 09:51:02 +0000 Subject: [nova][cinder] What should the behaviour of extend_volume be with attached encrypted volumes? Message-ID: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> Hello all, The following bug was raised recently regarding a failure to extend attached encrypted volumes: Failing to extend an attached encrypted volume https://bugs.launchpad.net/nova/+bug/1861071 I've worked up a series below that resolves this for LUKSv1 volumes by taking the LUKSv1 header into account before calling Libvirt to resize the block device within the instance: https://review.opendev.org/#/q/topic:bug/1861071 This results in the instance visable block device being resized to a size just smaller than that requested through Cinder's API. My question to the list is if that behaviour is acceptable given the same call to extend an attached unencrypted volume *will* grow the instance visable block device to the requested size? Many thanks in advance, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 From ignaziocassano at gmail.com Thu Feb 13 10:19:06 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 11:19:06 +0100 Subject: [queens]neutron][metadata] configuration Message-ID: Hello everyone, in my installation of Queees I am using many provider networks. I don't use openstack router but only dhcp. I would like my instances to reach the metadata agent without the 169.154.169.254 route, so I would like the provider networks to directly reach the metadata agent on the internal api vip. How can I get this configuration? Thank you Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Thu Feb 13 10:38:57 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 13 Feb 2020 11:38:57 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: Hi, In case of such isolated networks, You can configure neutron to serve metadata in dhcp namespace and that it will set route to 169.254.169.254 via dhcp port’s IP address. Please check config options: [1] and [2] [1] https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.enable_isolated_metadata [2] https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.force_metadata > On 13 Feb 2020, at 11:19, Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many provider networks. I don't use openstack router but only dhcp. > I would like my instances to reach the metadata agent without the 169.154.169.254 route, so I would like the provider networks to directly reach the metadata agent on the internal api vip. > How can I get this configuration? > Thank you > Ignazio — Slawek Kaplonski Senior software engineer Red Hat From ignaziocassano at gmail.com Thu Feb 13 11:15:34 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 12:15:34 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: Hello Slawek, I do not want to use metadata in dhcp namespace. It forces option 121 and I receive all subnet routes on my instances. 
If I use more than 10 subnets on same vlan , it does not work because I do not receive the 169.254.169.254 routing tables due to the following error: dnsmasq-dhcp[52165]: cannot send DHCP/BOOTP option 121: no space left in packet Ignazio Il giorno gio 13 feb 2020 alle ore 11:39 Slawek Kaplonski < skaplons at redhat.com> ha scritto: > Hi, > > In case of such isolated networks, You can configure neutron to serve > metadata in dhcp namespace and that it will set route to 169.254.169.254 > via dhcp port’s IP address. > Please check config options: [1] and [2] > > [1] > https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.enable_isolated_metadata > [2] > https://docs.openstack.org/neutron/latest/configuration/dhcp-agent.html#DEFAULT.force_metadata > > > On 13 Feb 2020, at 11:19, Ignazio Cassano > wrote: > > > > Hello everyone, in my installation of Queees I am using many provider > networks. I don't use openstack router but only dhcp. > > I would like my instances to reach the metadata agent without the > 169.154.169.254 route, so I would like the provider networks to directly > reach the metadata agent on the internal api vip. > > How can I get this configuration? > > Thank you > > Ignazio > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Thu Feb 13 12:40:51 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Thu, 13 Feb 2020 13:40:51 +0100 Subject: [CINDER] Distributed storage alternatives Message-ID: Hi all. we 'd like to explore storage back end alternatives to CEPH for Openstack I am aware of GlusterFS but what would you recommend for distributed storage like Ceph and specifically for block device provisioning? Of course must be: 1. *Reliable* 2. *Fast* 3. *Capable of good performance over WAN given a good network back end* Both open source and commercial technologies and ideas are welcome. Cheers -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Feb 13 13:09:37 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 13 Feb 2020 13:09:37 +0000 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: Message-ID: <20200213130936.3ainb4cift5euslw@yuggoth.org> On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > Hello everyone, in my installation of Queees I am using many > provider networks. I don't use openstack router but only dhcp. I > would like my instances to reach the metadata agent without the > 169.154.169.254 route, so I would like the provider networks to > directly reach the metadata agent on the internal api vip. How can > I get this configuration? Have you tried using configdrive instead of the metadata service? It's generally more reliable. The main downside is that it doesn't change while the instance is running, so if you're wanting to use this to update routes for active instances between reboots then I suppose it wouldn't solve your problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Thu Feb 13 13:29:56 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 14:29:56 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: We are going to try it. Ignazio Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.154.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jean-philippe at evrard.me Thu Feb 13 13:52:53 2020 From: jean-philippe at evrard.me (Jean-Philippe Evrard) Date: Thu, 13 Feb 2020 14:52:53 +0100 Subject: [TC] W release naming In-Reply-To: References: <12b9b497-1249-a220-a70f-937a12cce1a0@redhat.com> Message-ID: <4c614eaf7a21acc6d3453835d4e87518463d36be.camel@evrard.me> On Wed, 2020-02-12 at 17:52 +0000, Albert Braden wrote: > A few years ago I would have thought that this was nonsense. I still > do, but nonsense is the order of the day. We have to be careful to > not give the professionally offended an opportunity. Agreed. > > -----Original Message----- > From: Zane Bitter > Also another fun fact: if you're explaining, you're losing ;) Correct. I love the recursivity of this conversation. > TIL, thanks! Another reason to love the movie, while also not > choosing > this as the release name. Agreed. From ignaziocassano at gmail.com Thu Feb 13 15:31:40 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 16:31:40 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hello, config drive is the best solution for our situation. Thanks Ignazio Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.154.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. 
> -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From james.page at canonical.com Thu Feb 13 17:02:28 2020 From: james.page at canonical.com (James Page) Date: Thu, 13 Feb 2020 17:02:28 +0000 Subject: [charms] Peter Matulis -> charms-core Message-ID: Hi Team I'd like to proposed Peter Matulis for membership of the charms-core team. Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. I think he would make a valuable addition to the charms-core review team! Cheers James -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.ames at canonical.com Thu Feb 13 17:10:45 2020 From: david.ames at canonical.com (David Ames) Date: Thu, 13 Feb 2020 09:10:45 -0800 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: +1 Peter has made a significant impact on the quality of our documentation. Having Peter in the charms-core team will allow him to be more efficient in doing his work and continuing the positive impact on our documentation. -- David Ames On Thu, Feb 13, 2020 at 9:08 AM James Page wrote: > > Hi Team > > I'd like to proposed Peter Matulis for membership of the charms-core team. > > Although he's not be focussed on developing the codebase since he started contributing to the OpenStack Charms he's made a number of significant contributions to our documentation as well as regularly providing feedback on updates to README's and the charm deployment guide. > > I think he would make a valuable addition to the charms-core review team! > > Cheers > > James > From ignaziocassano at gmail.com Thu Feb 13 17:20:29 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Thu, 13 Feb 2020 18:20:29 +0100 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Hello Alfredo, I think best opensource solution is ceph. As far as commercial solutions are concerned we are working with network appliance (netapp) and emc unity. Regards Ignazio Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha scritto: > Hi all. > we 'd like to explore storage back end alternatives to CEPH for Openstack > > I am aware of GlusterFS but what would you recommend for distributed > storage like Ceph and specifically for block device provisioning? > Of course must be: > > 1. *Reliable* > 2. *Fast* > 3. *Capable of good performance over WAN given a good network back end* > > Both open source and commercial technologies and ideas are welcome. > > Cheers > > -- > *Alfredo* > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Feb 13 17:38:30 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 13 Feb 2020 11:38:30 -0600 Subject: [nova] API updates week 20-7 Message-ID: <1703fa17316.cd28ef0113738.2723432885367961729@ghanshyammann.com> Hello Everyone, Please find the Nova API updates of this week. Please add if I missed any BPs/API related work. API Related BP : ============ COMPLETED: 1. Add image-precache-support spec: - https://blueprints.launchpad.net/nova/+spec/image-precache-support Code Ready for Review: ------------------------------ 1. 
Nova API policy improvement - Topic: https://review.opendev.org/#/q/topic:bp/policy-defaults-refresh+(status:open+OR+status:merged) - Weekly Progress: API till D is covered and ready for review. Fixed 5 bugs in policy while working on new defaults. - Review guide over ML - http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008504.html 2. Non-Admin user can filter their instance by availability zone: - Topic: https://review.opendev.org/#/q/topic:bp/non-admin-filter-instance-by-az+(status:open+OR+status:merged) - Weekly Progress: Code under review. efried needs to remove his -2 now as spec is merge 3. Boot from volume instance rescue - Topic: https://review.opendev.org/#/q/topic:bp/virt-bfv-instance-rescue+(status:open+OR+status:merged) - Weekly Progress: Code is in progress. Lee Yarwood has removed the WIP from patches so ready for review. 4. Add action event fault details -Topic: https://review.opendev.org/#/q/topic:bp/action-event-fault-details+(status:open+OR+status:merged) - Weekly Progress: Spec is merged and code is ready for review. Specs are merged and code in-progress: ------------------------------ ------------------ - None Spec Ready for Review or Action from Author: --------------------------------------------------------- 1. Support specifying vnic_type to boot a server -Spec: https://review.opendev.org/#/c/672400/ - Weekly Progress: Stephen summarized the comment about not in scope of nova. I will remove it from this list from next report. 2. Allow specify user to reset password -Spec: https://review.opendev.org/#/c/682302/5 - Weekly Progress: One +2 on this but other are disagree on this idea. More discussion on review. 3. Support re-configure deleted_on_termination in server -Spec: https://review.openstack.org/#/c/580336/ - Weekly Progress: This is old spec and there is no consensus on this till now. Others: 1. None Bugs: ==== I started fixing policy bugs while working on policy-defaults-refresh BP. 5 bugs have been identified till now and fix up for review. - https://bugs.launchpad.net/nova/+bugs?field.tag=policy-defaults-refresh NOTE- There might be some bug which is not tagged as 'api' or 'api-ref', those are not in the above list. Tag such bugs so that we can keep our eyes. -gmann From sean.mcginnis at gmx.com Thu Feb 13 19:22:55 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Thu, 13 Feb 2020 13:22:55 -0600 Subject: [release] Release countdown for week R-13, February 10-14 In-Reply-To: <20200207135017.GA1488525@sm-workstation> References: <20200207135017.GA1488525@sm-workstation> Message-ID: <11e76513-ff3a-ab3a-4ca3-808550a17d6a@gmx.com> > General Information > ------------------- > Libraries need to be released at least once per milestone period. Next week, > the release team will propose releases for any library that has not been > otherwise released since milestone 1. PTL's and release liaisons, please watch > for these and give a +1 to acknowledge them. If there is some reason to hold > off on a release, let us know that as well. A +1 would be appreciated, but if > we do not hear anything at all by the end of the week, we will assume things > are OK to proceed. Thank you to all who have given the explicit go ahead for these releases so far. We have been able to process the majority of the milestone-2 release requests. There are still quite a few out there that we will approve tomorrow if there is no response. Please do let us know if your team is ready for these to go ahead. 
If you need a little more time (as in, by very early next week), please comment on the proposed patch to let us know to hold off on processing until we've gotten an update from you. Thanks! Sean From rosmaita.fossdev at gmail.com Thu Feb 13 20:07:52 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Thu, 13 Feb 2020 15:07:52 -0500 Subject: [cinder][nova] volume-local-cache meeting results Message-ID: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> Thanks to everyone who attended this morning, we had a productive meeting. If you missed it and want to know what happened: etherpad: https://etherpad.openstack.org/p/volume-local-cache recording: https://youtu.be/P9bouCCoqVo Liang Fang will be updating the specs to reflect what was discussed: cinder spec: https://review.opendev.org/#/c/684556/ nova spec: https://review.opendev.org/#/c/689070/ cheers, brian From whayutin at redhat.com Thu Feb 13 21:07:36 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Thu, 13 Feb 2020 14:07:36 -0700 Subject: [tripleo] CI is RED py27 related In-Reply-To: References: <1703546defa.11115f230526753.2664650546742483915@ghanshyammann.com> Message-ID: On Wed, Feb 12, 2020 at 8:51 PM Wesley Hayutin wrote: > > > On Wed, Feb 12, 2020 at 12:39 PM Wesley Hayutin > wrote: > >> >> >> On Tue, Feb 11, 2020 at 10:23 AM Ghanshyam Mann >> wrote: >> >>> >>> ---- On Tue, 11 Feb 2020 11:07:14 -0600 Wesley Hayutin < >>> whayutin at redhat.com> wrote ---- >>> > >>> > >>> > On Mon, Feb 10, 2020 at 10:24 PM Wesley Hayutin >>> wrote: >>> > >>> > Patches are up..https://review.opendev.org/707204 >>> > https://review.opendev.org/707054 >>> > https://review.opendev.org/#/c/707062/ >>> >>> Do we still have py27 jobs on tripleo CI ? If so I think it is time to >>> drop it >>> completely as the deadline is 13th Feb. Because there might be more >>> incompatible >>> dependencies as py2 drop from them is speedup. >>> >>> NOTE: stable branch testing till rocky use the stable u-c now to avoid >>> these issues. >>> >>> -gmann >>> >>> >>> > >>> > Greetings, >>> > >>> > Most of the jobs went RED a few minutes ago.Again it's related to >>> python27. Nothing is going to pass CI until this is fixed. >>> > See: https://etherpad.openstack.org/p/ruckroversprint21 >>> > We'll update the list when we have the required patches in.Thanks >>> >>> >> OK.. folks, >> Thanks for your patience! >> The final patch to restore sanity [1] is in the gate. Most everything >> was working since yesterday unless you were patching tripleo-common. This >> is the final item we've identified. >> >> Be safe out there! >> CI == Green >> >> > Well.. a new upstream centos-7 image broke the last patch before it merged. > So.. now there are two patches that need to merge. > > https://review.opendev.org/#/c/707525 > https://review.opendev.org/#/c/707330/ > > Thank you lord for bounty of work you have provided. We are blessed. > Thanks for your patience.. All the patches that needed to merge have merged. Careful w/ rechecks at peak times in the next few days as there have been NO promotions and it will take longer and longer for each node and each container to yum update. Take care > > >> >> [1] https://review.opendev.org/#/c/707330/ >> >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Burak.Hoban at iag.com.au Thu Feb 13 21:19:16 2020 From: Burak.Hoban at iag.com.au (Burak Hoban) Date: Thu, 13 Feb 2020 21:19:16 +0000 Subject: [CINDER] Distributed storage alternatives Message-ID: Hi guys, We use Dell EMC VxFlex OS, which in its current version allows for free use and commercial (in version 3.5 a licence is needed, but its perpetual). It's similar to Ceph but more geared towards scale and performance etc (it use to be called ScaleIO). Other than that, I know of a couple sites using SAN storage, but a lot of people just seem to use Ceph. Cheers, Burak ------------------------------ Message: 2 Date: Thu, 13 Feb 2020 18:20:29 +0100 From: Ignazio Cassano To: Alfredo De Luca Cc: openstack-discuss Subject: Re: [CINDER] Distributed storage alternatives Message-ID: Content-Type: text/plain; charset="utf-8" Hello Alfredo, I think best opensource solution is ceph. As far as commercial solutions are concerned we are working with network appliance (netapp) and emc unity. Regards Ignazio Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha scritto: > Hi all. > we 'd like to explore storage back end alternatives to CEPH for > Openstack > > I am aware of GlusterFS but what would you recommend for distributed > storage like Ceph and specifically for block device provisioning? > Of course must be: > > 1. *Reliable* > 2. *Fast* > 3. *Capable of good performance over WAN given a good network back > end* > > Both open source and commercial technologies and ideas are welcome. > > Cheers > > -- > *Alfredo* > > _____________________________________________________________________ The information transmitted in this message and its attachments (if any) is intended only for the person or entity to which it is addressed. The message may contain confidential and/or privileged material. Any review, retransmission, dissemination or other use of, or taking of any action in reliance upon this information, by persons or entities other than the intended recipient is prohibited. If you have received this in error, please contact the sender and delete this e-mail and associated material from any computer. The intended recipient of this e-mail may only use, reproduce, disclose or distribute the information contained in this e-mail and any attached files, with the permission of the sender. This message has been scanned for viruses. _____________________________________________________________________ From doka.ua at gmx.com Thu Feb 13 23:02:51 2020 From: doka.ua at gmx.com (Volodymyr Litovka) Date: Fri, 14 Feb 2020 01:02:51 +0200 Subject: RHOSP-like installation Message-ID: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Dear colleagues, while having a good experience with Openstack on Ubuntu, we're facing a plenty of questions re RHOSP installation. The primary requirement for our team re RHOSP is to get a knowledge on RHOSP - how to install it and maintain. As far as I understand, RDO is the closest way to reach this target but which kind of installation it's better to use? - * plain RDO installation as described in generic Openstack guide at https://docs.openstack.org/install-guide/index.html (specifics in RHEL/CentOS sections) * or TripleO installation as described in http://tripleo.org/install/ * or, may be, it is possible to use RHOSP in kind of trial mode to get enough knowledge on this platform? 
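For the TripleO option, my rough understanding of the minimal bootstrap on the dedicated node is only the following sketch (untested on our side, and the sample config path can differ between releases); everything else would then be driven from that undercloud:

  # on the undercloud node, as a non-root "stack" user,
  # after enabling the matching RDO/TripleO release repos
  sudo yum install -y python-tripleoclient

  # start from the sample shipped with the client (path varies by release)
  cp /usr/share/python-tripleoclient/undercloud.conf.sample ~/undercloud.conf
  vi ~/undercloud.conf    # set local_interface, local_ip, dhcp range, etc.

  openstack undercloud install

Does that match what people actually do, or is tripleo-quickstart the more practical route for a small 4-node lab?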
Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're going to use in "ultraconverged" mode - as both controller and agent (compute/network/storage) nodes (controllers, though, can be in virsh-controlled VMs). In case of TripleO scenario, 4th server can be used for undercloud role. This installation is intended not for production use, but rather for learning purposes, so no special requirements for productivity. The only special requirement - to be functionally as much as close to canonical RHOSP platform. I will highly appreciate your suggestions on this issue. Thank you. -- Volodymyr Litovka "Vision without Execution is Hallucination." -- Thomas Edison From amy at demarco.com Thu Feb 13 23:17:17 2020 From: amy at demarco.com (Amy Marrich) Date: Thu, 13 Feb 2020 17:17:17 -0600 Subject: RHOSP-like installation In-Reply-To: References: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Message-ID: Adding the discuss list back in On Thu, Feb 13, 2020 at 5:16 PM Amy Marrich wrote: > Volodymyr, > > I'm sure someone from the TripleO team will pipe in, but TripleO is closer > to RHOSP then RDO. When I was playing with it in the past I found Keith > Tenzer's blogs helpful. There might be more recent ones then this but > here's a link to one: > > > https://keithtenzer.com/2015/10/14/howto-openstack-deployment-using-tripleo-and-the-red-hat-openstack-director/ > > Thanks, > > Amy Marrich (spotz) > > On Thu, Feb 13, 2020 at 5:05 PM Volodymyr Litovka wrote: > >> Dear colleagues, >> >> while having a good experience with Openstack on Ubuntu, we're facing a >> plenty of questions re RHOSP installation. >> >> The primary requirement for our team re RHOSP is to get a knowledge on >> RHOSP - how to install it and maintain. As far as I understand, RDO is >> the closest way to reach this target but which kind of installation it's >> better to use? - >> * plain RDO installation as described in generic Openstack guide at >> https://docs.openstack.org/install-guide/index.html (specifics in >> RHEL/CentOS sections) >> * or TripleO installation as described in http://tripleo.org/install/ >> * or, may be, it is possible to use RHOSP in kind of trial mode to get >> enough knowledge on this platform? >> >> Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're >> going to use in "ultraconverged" mode - as both controller and agent >> (compute/network/storage) nodes (controllers, though, can be in >> virsh-controlled VMs). In case of TripleO scenario, 4th server can be >> used for undercloud role. This installation is intended not for >> production use, but rather for learning purposes, so no special >> requirements for productivity. The only special requirement - to be >> functionally as much as close to canonical RHOSP platform. >> >> I will highly appreciate your suggestions on this issue. >> >> Thank you. >> >> -- >> Volodymyr Litovka >> "Vision without Execution is Hallucination." -- Thomas Edison >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Thu Feb 13 23:31:39 2020 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 13 Feb 2020 15:31:39 -0800 Subject: [all][PTL][tc] U Community Goal: Project PTL & Contrib Docs Update #2 Message-ID: Hello! First off I want to say thank you to the 4 projects that have already gotten started/completed the goal! 
Good work Glance, Neutron, Searchlight, and Watcher :) For the rest of the projects, thankfully we have more time to work on this goal than for the dropping py2 goal given that its documentation we aren't restricted to it being completed by feature freeze. That said, the sooner the better :) One thing I did want to draw attention to was the discussion around adding an include for CONTRIBUTING.rst to doc/source/contributor/contributing.rst[4] and the patch I have to add it to the template[5]. I also pushed a patch to clarify doc/source/contributor/contributing.rst vs CONTRIBUTING.rst -Kendall (diablo_rojo) [1] Goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html [2] Docs Template: https://opendev.org/openstack/cookiecutter/src/branch/master/%7b%7bcookiecutter.repo_name%7d%7d/doc/source/contributor/contributing.rst [3] Tracking: https://storyboard.openstack.org/#!/story/2007236 [4] TC Discussion: http://eavesdrop.openstack.org/irclogs/%23openstack-tc/%23openstack-tc.2020-02-06.log.html#t2020-02-06T16:37:23 [5] Patch to add include: https://review.opendev.org/#/c/707735/ [6] contributing.rst vs CONTRIBUTING.rst clarification patch: https://review.opendev.org/#/c/707736/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Thu Feb 13 23:54:43 2020 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Thu, 13 Feb 2020 18:54:43 -0500 Subject: RHOSP-like installation In-Reply-To: References: <7a2f6705-b935-fc97-1f5b-1ae6a85b85a1@gmx.com> Message-ID: We use RHOSP at $day_job and my TripleO lab is extremely close to our Production environment. It features the Undercloud and Overcloud topology along the same-ish templates for the deployment. You can use Ironic for metal management or config download. On Thu, Feb 13, 2020, 6:24 PM Amy Marrich wrote: > Adding the discuss list back in > > On Thu, Feb 13, 2020 at 5:16 PM Amy Marrich wrote: > >> Volodymyr, >> >> I'm sure someone from the TripleO team will pipe in, but TripleO is >> closer to RHOSP then RDO. When I was playing with it in the past I found >> Keith Tenzer's blogs helpful. There might be more recent ones then this but >> here's a link to one: >> >> >> https://keithtenzer.com/2015/10/14/howto-openstack-deployment-using-tripleo-and-the-red-hat-openstack-director/ >> >> Thanks, >> >> Amy Marrich (spotz) >> >> On Thu, Feb 13, 2020 at 5:05 PM Volodymyr Litovka >> wrote: >> >>> Dear colleagues, >>> >>> while having a good experience with Openstack on Ubuntu, we're facing a >>> plenty of questions re RHOSP installation. >>> >>> The primary requirement for our team re RHOSP is to get a knowledge on >>> RHOSP - how to install it and maintain. As far as I understand, RDO is >>> the closest way to reach this target but which kind of installation it's >>> better to use? - >>> * plain RDO installation as described in generic Openstack guide at >>> https://docs.openstack.org/install-guide/index.html (specifics in >>> RHEL/CentOS sections) >>> * or TripleO installation as described in http://tripleo.org/install/ >>> * or, may be, it is possible to use RHOSP in kind of trial mode to get >>> enough knowledge on this platform? >>> >>> Our lab consists of four servers (64G RAM, 16 cores at 2GHz) which we're >>> going to use in "ultraconverged" mode - as both controller and agent >>> (compute/network/storage) nodes (controllers, though, can be in >>> virsh-controlled VMs). In case of TripleO scenario, 4th server can be >>> used for undercloud role. 
This installation is intended not for >>> production use, but rather for learning purposes, so no special >>> requirements for productivity. The only special requirement - to be >>> functionally as much as close to canonical RHOSP platform. >>> >>> I will highly appreciate your suggestions on this issue. >>> >>> Thank you. >>> >>> -- >>> Volodymyr Litovka >>> "Vision without Execution is Hallucination." -- Thomas Edison >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From agarwalvishakha18 at gmail.com Fri Feb 14 06:57:45 2020 From: agarwalvishakha18 at gmail.com (Vishakha Agarwal) Date: Fri, 14 Feb 2020 12:27:45 +0530 Subject: [keystone] Keystone Team Update - Week of 10 February 2020 Message-ID: # Keystone Team Update - Week of 10 February 2020 ## News ### YVR PTG Keystone People can write down the topics in etherpad [1] to have a brief discussion with the team. [1] https://etherpad.openstack.org/p/yvr-ptg-keystone ### User Support and Bug Duty Every week the duty is being rotated between the members. The person-in-charge for bug duty for current and upcoming week can be seen on the etherpad [2] [2] https://etherpad.openstack.org/p/keystone-l1-duty ## Open Specs Ussuri specs: https://bit.ly/2XDdpkU Ongoing specs: https://bit.ly/2OyDLTh ## Recently Merged Changes Search query: https://bit.ly/2pquOwT We merged 10 changes this week. ## Changes that need Attention Search query: https://bit.ly/2tymTje There are 21 changes that are passing CI, not in merge conflict, have no negative reviews and aren't proposed by bots. ### Priority Reviews * Ussuri Roadmap Stories - Groups in keystone SAML assertion https://tree.taiga.io/project/keystone-ussuri-roadmap/us/33 https://review.opendev.org/#/c/588211/ Add openstack_groups to assertion - Add support for modifying resource options to CLI tool https://tree.taiga.io/project/keystone-ussuri-roadmap/us/53 https://review.opendev.org/#/c/697444/ Adding options to user cli ## Bugs This week we opened 3 new bugs and closed 2. Bugs opened (3) Bug #1862802 (keystone:Wishlist): Avoid the default domain usage when the Domain is not specified in the project creation - Opened by Raildo Mascena de Sousa Filho https://bugs.launchpad.net/keystone/+bug/1862802 Bug #1862606 (keystone:Undecided): LDAP support broken if UTF8 characters in DN (python2) - Opened by Rafal Ramocki https://bugs.launchpad.net/keystone/+bug/1862606 Bug #1863098 (keystone:Undecided): Install and configure in keystone - Opened by Assassins! https://bugs.launchpad.net/keystone/+bug/1863098 Bugs closed (2) Bug #1808305 (python-keystoneclient:Won't Fix) https://bugs.launchpad.net/python-keystoneclient/+bug/1808305 Bug #1823258 (keystone:Undecided): RFE: Immutable Resources - Fixed by Colleen Murphy https://bugs.launchpad.net/keystone/+bug/1823258 ## Milestone Outlook https://releases.openstack.org/ussuri/schedule.html This week marks the milestone 2 spec freeze. Feature proposal freeze is in week of March 9. This means code implementing approved specs needs to be in a reviewable, non-WIP state. 
## Help with this newsletter Help contribute to this newsletter by editing the etherpad: https://etherpad.openstack.org/p/keystone-team-newsletter ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From eblock at nde.ag Fri Feb 14 07:37:52 2020 From: eblock at nde.ag (Eugen Block) Date: Fri, 14 Feb 2020 07:37:52 +0000 Subject: VPNaaS with multiple endpoint groups Message-ID: <20200214073752.Horde.fEamBT4aCkEnszSfxNwdzso@webmail.nde.ag> Hi all, is anyone here able to help with a vpn issue? It's not really my strong suit but I'll try to explain. In a Rocky environment (using openvswitch) a customer has setup a VPN service successfully, but that only seems to work if there's only one local and one peer endpoint group. According to the docs it should work with multiple endpoint groups, as far as I could tell the setup looks fine and matches the docs (don't create the subnet when creating the vpn service but use said endpoint groups). What we're seeing is that as soon as the vpn site connection is created with multiple endpoints only one of the destination IPs is reachable. And it seems as if it's always the first in the list of EPs (see below). This seems to be reflected in the iptables where we also only see one of the required IP ranges. Also neutron reports duplicate rules if we try to use both EPs: 2020-02-12 14:14:27.638 16275 WARNING neutron.agent.linux.iptables_manager [req-92ff6f06-3a92-4daa-aeea-9c02dc9a31c3 ba9bf239530d461baea2f6f60bd301e6 850dad648ce94dbaa5c0ea2fb450bbda - - -] Duplicate iptables rule detected. This may indicate a bug in the iptables rule generation code. Line: -A neutron-l3-agent-POSTROUTING -s X.X.252.0/24 -d Y.Y.0.0/16 -m policy --dir out --pol ipsec -j ACCEPT These are the configured endpoints: root at control:~ # openstack vpn endpoint group list +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ | ID | Name | Type | Endpoints | +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ | 0f853567-e4bf-4019-9290-4cd9f94a9793 | peer-ep-group-1 | cidr | [u'X.X.253.0/24'] | | 152a0f9e-ce49-4769-94f1-bc0bebedd3ec | peer-ep-group-2 | cidr | [u'X.X.253.0/24', u'Y.Y.0.0/16'] | | 791ab8ef-e150-4ba0-ac2c-c044659f509e | local-ep-group1 | subnet | [u'38efad5e-0f1e-4e36-8995-74a611bfef41'] | | 810b0bf2-d258-459b-9b57-ae5b491ea612 | local-ep-group2 | subnet | [u'38efad5e-0f1e-4e36-8995-74a611bfef41', u'9e35d80f-029e-4cc1-a30b-1753f7683e16'] | | b5c79e08-41e4-441c-9ed3-9b02c2654173 | peer-ep-group-3 | cidr | [u'Y.Y.0.0/16'] | +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ Has anyone experience with this and could help me out? Another follow-up question: how can we gather some information regarding the ipsec status? Althoug there is a active tunnel we don't see anything with 'ipsec statusall', I've checked all namespaces on the control node. Any help is highly appreciated! 
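For completeness, the lowest-level check I can think of is to look at the kernel IPsec state directly inside the router namespace on the network node, which should apply regardless of whether the strongSwan or libreswan driver is in use (a sketch; <router-id> is the UUID of the router the VPN service is attached to, and with DVR the namespace is snat-<router-id> instead of qrouter-<router-id>):

  sudo ip netns list | grep <router-id>

  # policies: one entry per local/peer CIDR pair that got programmed
  sudo ip netns exec qrouter-<router-id> ip xfrm policy

  # security associations actually negotiated with the peer
  sudo ip netns exec qrouter-<router-id> ip xfrm state

If only the first CIDR pair shows up in 'ip xfrm policy', that would at least confirm the missing traffic selectors are never programmed on the local side rather than being rejected by the peer.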
Best regards, Eugen From samueldmq at gmail.com Fri Feb 14 09:43:55 2020 From: samueldmq at gmail.com (Samuel de Medeiros Queiroz) Date: Fri, 14 Feb 2020 06:43:55 -0300 Subject: [Outreachy] Call for mentors, projects by Feb 25 Message-ID: Hi all, TL;DR OpenStack is participating of Outreachy! Mentors, please submit project proposals by Feb 25. More info: https://www.outreachy.org/communities/cfp/openstack/ Outreachy's goal is to support people from groups underrepresented in the technology industry. The upcoming round runs from May to August 2020. Now that we are confirmed as a participating community, we welcome experienced community contributors to help out as mentors, who need to submit project proposals. Therefore, if you have an interesting project that meets the criteria, please submit it as soon as possible. You can find more information about the program, project criteria, mentor requirements, and submit your project at the Call for mentors page [1]. I have participated twice of Outreachy as a mentor and I can tell you it was a great opportunity to get someone onboard the community and, of course, to learn from them. Besides getting work done (which is great), we happen to make our community more diverse, while helping with its continuity. Please let me know if you have any questions. Thanks, Samuel [1] https://www.outreachy.org/communities/cfp/openstack/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From rfolco at redhat.com Fri Feb 14 13:38:35 2020 From: rfolco at redhat.com (Rafael Folco) Date: Fri, 14 Feb 2020 11:38:35 -0200 Subject: [tripleo] TripleO CI Summary: Sprint 42 Message-ID: Greetings, The TripleO CI team has just completed Sprint 42 / Unified Sprint 21 (Jan 23 thru Feb 12). The following is a summary of completed work during this sprint cycle: - Started refactoring Promoter code into a modular implementation w/ testing oriented design and accommodating the changes for the new promotion pipeline. - Completed building CentOS8 containers with required repositories. This is still an unofficial build as some of the repositories (like Ceph) are not from RDO/TripleO. - Refined the component pipeline design [3] w/ the new aggregated hash containing all promoted components. Continued to implement the downstream version of the component pipeline. - Translated get-hash into a separated role in ci-config repo, de-attaching from promote-hash role in config. Added support for the new component and integration jobs. - Made improvements to the collect-logs plugin as part of the shared goals for the combined CI team. - Built CI workflow to follow successful manual standalone deployment using an IPA server. The TLS CI job is not running yet and still needs to be activated. Ruck/Rover Notes: - There were at least four upstream gate outages during this sprint. All have been resolved at this time. Notes are here [4]. The planned work for the next sprint [1] extends the work started in the previous sprint and focuses on the following: - Build the CentOS8 pipeline starting with the base jobs to build containers and promote hashes. - Replicate the component jobs to all available components and build the new promotion pipeline in CentOS8. - Continue the promoter code refactoring and converting the legacy code in a modular implementation that can be tested more efficiently. - Continue the collect-logs effort to make it a shared tool used by multiple teams. - Collaborate with the RDO team in migrating 3rd party CI to vexxhost. 
The Ruck and Rover for this sprint are Wesley Hayutin (weshay) and Marios Andreou (marios). Please direct questions or queries to them regarding CI status or issues in #tripleo, ideally to whomever has the ‘|ruck’ suffix on their nick. Ruck/rover notes to be tracked in etherpad [2]. Thanks, rfolco [1] https://tree.taiga.io/project/tripleo-ci-board/taskboard/unified-sprint-22 [2] https://etherpad.openstack.org/p/ruckroversprint22 [3] https://hackmd.io/5uYRmLaOTI2raTbHWsaiSQ [4] https://etherpad.openstack.org/p/ruckroversprint21 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Fri Feb 14 14:41:04 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Fri, 14 Feb 2020 08:41:04 -0600 Subject: [release] Release countdown for week R-12, February 17-21 Message-ID: Development Focus ----------------- We are now past the Ussuri-2 milestone, and entering the last development phase of the cycle. Teams should be focused on implementing planned work for the cycle. Now is a good time to review those plans and reprioritize anything if needed based on the what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to the end of the release cycle, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (April 2). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (April 9) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, April 9. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (April 23) * Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (May 7) Upcoming Deadlines & Dates -------------------------- Non-client library freeze: April 2 (R-6 week) Client library freeze: April 9 (R-5 week) Ussuri-3 milestone: April 9 (R-5 week) OpenDev+PTG Vancouver: June 8-11 -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Fri Feb 14 15:13:42 2020 From: gagehugo at gmail.com (Gage Hugo) Date: Fri, 14 Feb 2020 09:13:42 -0600 Subject: [security] Vancouver PTG Planning - Security SIG Message-ID: Hello, It's about that time again, to start planning for the next PTG. To help gauge interest and track any topics that anyone is interested in, I've created an etherpad: https://etherpad.openstack.org/p/yvr-ptg-security-sig Please add your name if you plan to attend and any topic ideas that you are interested in. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From dpawlik at redhat.com Fri Feb 14 15:47:12 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Fri, 14 Feb 2020 16:47:12 +0100 Subject: [tripleo] RDO image server migration Message-ID: Hello, We are going to migrate images.rdoproject.org to our new cloud provider, starting on February 17th at 10 AM UTC. Migration should be transparent to the end user (scripts and job definition are using new image server). 
However, please keep in mind that unforeseen events may occur. Write access to the old server will be disabled, and until DNS propagation completes you may have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From alex.kavanagh at canonical.com Fri Feb 14 16:00:23 2020 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Fri, 14 Feb 2020 16:00:23 +0000 Subject: [charms] Peter Matulis -> charms-core In-Reply-To: References: Message-ID: I'm +1 too. Having Peter in the charms-core team will allow him to merge documentation for the team. On Thu, Feb 13, 2020 at 5:11 PM James Page wrote: > Hi Team > > I'd like to propose Peter Matulis for membership of the charms-core team. > > Although he has not been focused on developing the codebase since he started > contributing to the OpenStack Charms, he's made a number of significant > contributions to our documentation as well as regularly providing feedback > on updates to READMEs and the charm deployment guide. > > I think he would make a valuable addition to the charms-core review team! > > Cheers > > James > > -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Fri Feb 14 16:51:01 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 17:51:01 +0100 Subject: [queens][config_drive] not working for ubuntu bionic Message-ID: Hello, I configured config drive and CentOS instances work fine with it. Ubuntu Bionic instances try to get metadata from the network, and cloud-init does not set the hostname and does not insert ssh keys. Any help, please? Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From whayutin at redhat.com Fri Feb 14 17:36:35 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Fri, 14 Feb 2020 10:36:35 -0700 Subject: [tripleo] RDO image server migration In-Reply-To: References: Message-ID: FYI... Possible interruption on 3rd party ovb jobs. ---------- Forwarded message --------- From: Daniel Pawlik Date: Fri, Feb 14, 2020 at 8:47 AM Subject: [rdo-dev] [rdo-users] [infra] RDO image server migration To: Hello, We are going to migrate images.rdoproject.org to our new cloud provider, starting on February 17th at 10 AM UTC. Migration should be transparent to the end user (scripts and job definition are using new image server). However, please keep in mind that unforeseen events may occur. Write access to the old server will be disabled, and until DNS propagation completes you may have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel _______________________________________________ dev mailing list dev at lists.rdoproject.org http://lists.rdoproject.org/mailman/listinfo/dev To unsubscribe: dev-unsubscribe at lists.rdoproject.org -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ignaziocassano at gmail.com Fri Feb 14 18:08:03 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 19:08:03 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: <20200213130936.3ainb4cift5euslw@yuggoth.org> References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hello Jeremy, I disabled isolate metadata in dhcp agent an configura config drive. It works fine with centos 7 but it does not with ubuntu 18. On ubuntu 18 cloud init tries ti contact metadata on 169.254.169.254 and does not object ssh keys :-( Ignazio Il Gio 13 Feb 2020, 14:16 Jeremy Stanley ha scritto: > On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: > > Hello everyone, in my installation of Queees I am using many > > provider networks. I don't use openstack router but only dhcp. I > > would like my instances to reach the metadata agent without the > > 169.154.169.254 route, so I would like the provider networks to > > directly reach the metadata agent on the internal api vip. How can > > I get this configuration? > > Have you tried using configdrive instead of the metadata service? > It's generally more reliable. The main downside is that it doesn't > change while the instance is running, so if you're wanting to use > this to update routes for active instances between reboots then I > suppose it wouldn't solve your problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Feb 14 18:10:32 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 14 Feb 2020 18:10:32 +0000 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: Message-ID: <20200214181032.p462ick2esyex3tv@yuggoth.org> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > Hello, I configured config drive and centos instances works fine > with it. Ubuntu bionic tries to get metadata from network and > cloud-init does not set hostname and does not insert keys for ssh. The cloud-init changelog indicates that initial support for configdrive first appeared in 0.6.3, so the versions in Ubuntu Bionic (and even Xenial) should be new enough to make use of it. In fact, the official Ubuntu bionic-updates package suite includes cloud-init 19.4 (the most recent release), so missing features/support seem unlikely. Detailed log entries from cloud-init running at boot might help in diagnosing the problem. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Feb 14 19:14:55 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 20:14:55 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: <20200214181032.p462ick2esyex3tv@yuggoth.org> References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. Note I can mount /dev/cdrom and see metadata: mount -o ro /dev/cdrom /mnt ls -laR /mnt total 10 dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack /mnt/ec2: total 8 dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest /mnt/ec2/2009-04-04: total 5 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json /mnt/ec2/latest: total 5 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json /mnt/openstack: total 22 dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest /mnt/openstack/2012-08-10: total 6 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json /mnt/openstack/2013-04-04: total 6 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json /mnt/openstack/2013-10-17: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2015-10-15: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2016-06-30: total 7 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json /mnt/openstack/2016-10-06: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/2017-02-22: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/2018-08-27: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json /mnt/openstack/latest: total 8 dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
-r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json Thanks Ignazio Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: > On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > > Hello, I configured config drive and centos instances works fine > > with it. Ubuntu bionic tries to get metadata from network and > > cloud-init does not set hostname and does not insert keys for ssh. > > The cloud-init changelog indicates that initial support for > configdrive first appeared in 0.6.3, so the versions in Ubuntu > Bionic (and even Xenial) should be new enough to make use of it. In > fact, the official Ubuntu bionic-updates package suite includes > cloud-init 19.4 (the most recent release), so missing > features/support seem unlikely. Detailed log entries from cloud-init > running at boot might help in diagnosing the problem. > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cloud-init.log Type: text/x-log Size: 106195 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: cloud-init-output.log Type: text/x-log Size: 6403 bytes Desc: not available URL: From ignaziocassano at gmail.com Fri Feb 14 19:26:27 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Fri, 14 Feb 2020 20:26:27 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hello, at the following link you can find the cloud init logs file: https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva PS I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. Ignazio Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < ignaziocassano at gmail.com> ha scritto: > Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. > Note I can mount /dev/cdrom and see metadata: > mount -o ro /dev/cdrom /mnt > ls -laR /mnt > total 10 > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > > /mnt/ec2: > total 8 > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > /mnt/ec2/2009-04-04: > total 5 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > /mnt/ec2/latest: > total 5 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > /mnt/openstack: > total 22 > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > /mnt/openstack/2012-08-10: > total 6 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > > /mnt/openstack/2013-04-04: > total 6 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > /mnt/openstack/2013-10-17: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2015-10-15: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2016-06-30: > total 7 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > /mnt/openstack/2016-10-06: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/2017-02-22: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/2018-08-27: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > /mnt/openstack/latest: > total 8 > dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > Thanks > Ignazio > > Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley > ha scritto: > >> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: >> > Hello, I configured config drive and centos instances works fine >> > with it. 
Ubuntu bionic tries to get metadata from network and >> > cloud-init does not set hostname and does not insert keys for ssh. >> >> The cloud-init changelog indicates that initial support for >> configdrive first appeared in 0.6.3, so the versions in Ubuntu >> Bionic (and even Xenial) should be new enough to make use of it. In >> fact, the official Ubuntu bionic-updates package suite includes >> cloud-init 19.4 (the most recent release), so missing >> features/support seem unlikely. Detailed log entries from cloud-init >> running at boot might help in diagnosing the problem. >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Feb 14 21:56:49 2020 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 14 Feb 2020 13:56:49 -0800 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: I haven't had time to look at your logs, but I can tell you that configdrive has worked in the Ubuntu images since at least Xenial (I am pretty sure trusty, but I can't remember for sure). The Octavia project uses it exclusively for the amphora instances. I'm not sure where you got your image, but there is a setting for cloud-init that defines which data sources it will use. For Octavia we explicitly set this in our images to only poll configdrive to speed the boot process. We build our images using diskimage-builder, and this element (script) is the component we use to set the cloud-init datasource: https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources Maybe check the settings for cloud-init inside your image by grep for "datasource_list" in /etc/cloud/cloud.cfg.d/* ? Michael On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano wrote: > > Hello, at the following link you can find the cloud init logs file: > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > PS > I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. > Ignazio > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano ha scritto: >> >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. >> Note I can mount /dev/cdrom and see metadata: >> mount -o ro /dev/cdrom /mnt >> ls -laR /mnt >> total 10 >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack >> >> /mnt/ec2: >> total 8 >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest >> >> /mnt/ec2/2009-04-04: >> total 5 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json >> >> /mnt/ec2/latest: >> total 5 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json >> >> /mnt/openstack: >> total 22 >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
>> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest >> >> /mnt/openstack/2012-08-10: >> total 6 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json >> >> /mnt/openstack/2013-04-04: >> total 6 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json >> >> /mnt/openstack/2013-10-17: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2015-10-15: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2016-06-30: >> total 7 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> >> /mnt/openstack/2016-10-06: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/2017-02-22: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/2018-08-27: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> /mnt/openstack/latest: >> total 8 >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json >> >> Thanks >> Ignazio >> >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: >>> >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: >>> > Hello, I configured config drive and centos instances works fine >>> > with it. 
Ubuntu bionic tries to get metadata from network and >>> > cloud-init does not set hostname and does not insert keys for ssh. >>> >>> The cloud-init changelog indicates that initial support for >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu >>> Bionic (and even Xenial) should be new enough to make use of it. In >>> fact, the official Ubuntu bionic-updates package suite includes >>> cloud-init 19.4 (the most recent release), so missing >>> features/support seem unlikely. Detailed log entries from cloud-init >>> running at boot might help in diagnosing the problem. >>> -- >>> Jeremy Stanley From donny at fortnebula.com Fri Feb 14 22:07:57 2020 From: donny at fortnebula.com (Donny Davis) Date: Fri, 14 Feb 2020 17:07:57 -0500 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: On Fri, Feb 14, 2020 at 5:02 PM Michael Johnson wrote: > > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 instance. > >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
> >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. > >>> -- > >>> Jeremy Stanley > Lately I have been using glean for my own images and I do believe the openstack CI uses it as well. Works great for me. https://docs.openstack.org/infra/glean/ -- ~/DonnyD C: 805 814 6800 "No mission too difficult. No sacrifice too great. Duty First" From amy at demarco.com Fri Feb 14 23:58:54 2020 From: amy at demarco.com (Amy Marrich) Date: Fri, 14 Feb 2020 17:58:54 -0600 Subject: UC Nominations now open In-Reply-To: References: Message-ID: Just a reminder that UC nominations end aat February 16, 23:59 UTC Thanks, Amy (spotz) On Mon, Feb 3, 2020 at 9:50 AM Amy Marrich wrote: > The nomination period for the February User Committee elections is now > open. > > Any individual member of the Foundation who is an Active User Contributor > (AUC) can propose their candidacy (except the two sitting UC members > elected in the previous election). > > Self-nomination is common; no third party nomination is required. > Nominations can be made by sending an email to the > user-committee at lists.openstack.org mailing-list[0], with the subject: “UC > candidacy” by February 16, 23:59 UTC aa voting will begin on February 17. > The email can include a description of the candidate platform. The > candidacy is then confirmed by one of the election officials, after > verification of the electorate status of the candidate. > > Criteria for AUC status can be found at > https://superuser.openstack.org/articles/auc-community/. If you are > still not sure of your status and would like to verify in advance > please email myself(amy at demarco.com) and Rain Leander(rleander at redhat.com) > as we are serving as the Election Officials. > > Thanks, > > Amy Marrich (spotz) > > 0 - Please make sure you are subscribed to this list before sending in > your nomination. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 08:08:25 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 09:08:25 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Thanks Michael. I have just tried to use dpkg to reconfigure cloud init source forcing ConfigDrive it did not solve. I am going to force it with with diskimage builder. I hope this will work. 
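For reference, a minimal sketch of what forcing the datasource at image build time with diskimage-builder might look like, assuming the cloud-init-datasources element linked by Michael above (the image name and element list here are illustrative, not a tested recipe):

  # build an Ubuntu image whose cloud-init only polls the config drive
  export DIB_CLOUD_INIT_DATASOURCES="ConfigDrive"
  disk-image-create ubuntu vm cloud-init-datasources -o ubuntu-bionic-configdrive

  # inside the resulting image, cloud-init should then carry a datasource_list
  # entry under /etc/cloud/cloud.cfg.d/ (the exact file name may vary), e.g.
  #   datasource_list: [ ConfigDrive ]
  # which can be verified with:
  grep -r datasource_list /etc/cloud/cloud.cfg.d/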
Ignazio Il Ven 14 Feb 2020, 22:57 Michael Johnson ha scritto: > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. > >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. 
> >>> -- > >>> Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 08:10:21 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 09:10:21 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Donny, thank you.I will try it. Ignazio Il Ven 14 Feb 2020, 23:08 Donny Davis ha scritto: > On Fri, Feb 14, 2020 at 5:02 PM Michael Johnson > wrote: > > > > I haven't had time to look at your logs, but I can tell you that > > configdrive has worked in the Ubuntu images since at least Xenial (I > > am pretty sure trusty, but I can't remember for sure). > > The Octavia project uses it exclusively for the amphora instances. > > > > I'm not sure where you got your image, but there is a setting for > > cloud-init that defines which data sources it will use. For Octavia we > > explicitly set this in our images to only poll configdrive to speed > > the boot process. > > > > We build our images using diskimage-builder, and this element (script) > > is the component we use to set the cloud-init datasource: > > > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > > > Maybe check the settings for cloud-init inside your image by grep for > > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > > > Michael > > > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > > wrote: > > > > > > Hello, at the following link you can find the cloud init logs file: > > > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > > > PS > > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > > Ignazio > > > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > > >> > > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. > > >> Note I can mount /dev/cdrom and see metadata: > > >> mount -o ro /dev/cdrom /mnt > > >> ls -laR /mnt > > >> total 10 > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > > >> > > >> /mnt/ec2: > > >> total 8 > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > >> > > >> /mnt/ec2/2009-04-04: > > >> total 5 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > >> > > >> /mnt/ec2/latest: > > >> total 5 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > > >> > > >> /mnt/openstack: > > >> total 22 > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. 
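For anyone trying Donny's suggestion, a rough sketch of building an image that uses glean instead of cloud-init, assuming diskimage-builder's simple-init element (which installs glean); the image name is illustrative only:

  # simple-init installs glean, which reads the network configuration and host
  # metadata from the config drive at boot, with no metadata service needed
  disk-image-create ubuntu vm simple-init -o ubuntu-bionic-glean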
> > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > > >> > > >> /mnt/openstack/2012-08-10: > > >> total 6 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > > >> > > >> /mnt/openstack/2013-04-04: > > >> total 6 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > >> > > >> /mnt/openstack/2013-10-17: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2015-10-15: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2016-06-30: > > >> total 7 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> > > >> /mnt/openstack/2016-10-06: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/2017-02-22: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/2018-08-27: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> /mnt/openstack/latest: > > >> total 8 > > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > > >> > > >> Thanks > > >> Ignazio > > >> > > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > > >>> > > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > > >>> > Hello, I configured config drive and centos instances works fine > > >>> > with it. Ubuntu bionic tries to get metadata from network and > > >>> > cloud-init does not set hostname and does not insert keys for ssh. > > >>> > > >>> The cloud-init changelog indicates that initial support for > > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > > >>> Bionic (and even Xenial) should be new enough to make use of it. In > > >>> fact, the official Ubuntu bionic-updates package suite includes > > >>> cloud-init 19.4 (the most recent release), so missing > > >>> features/support seem unlikely. Detailed log entries from cloud-init > > >>> running at boot might help in diagnosing the problem. > > >>> -- > > >>> Jeremy Stanley > > > > Lately I have been using glean for my own images and I do believe the > openstack CI uses it as well. Works great for me. > > https://docs.openstack.org/infra/glean/ > > -- > ~/DonnyD > C: 805 814 6800 > "No mission too difficult. No sacrifice too great. Duty First" > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 12:10:49 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 13:10:49 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: Hi Michael, solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder Regards Ignazio Il Ven 14 Feb 2020, 22:57 Michael Johnson ha scritto: > I haven't had time to look at your logs, but I can tell you that > configdrive has worked in the Ubuntu images since at least Xenial (I > am pretty sure trusty, but I can't remember for sure). > The Octavia project uses it exclusively for the amphora instances. > > I'm not sure where you got your image, but there is a setting for > cloud-init that defines which data sources it will use. For Octavia we > explicitly set this in our images to only poll configdrive to speed > the boot process. > > We build our images using diskimage-builder, and this element (script) > is the component we use to set the cloud-init datasource: > > > https://github.com/openstack/diskimage-builder/blob/master/diskimage_builder/elements/cloud-init-datasources/install.d/05-set-cloud-init-sources > > Maybe check the settings for cloud-init inside your image by grep for > "datasource_list" in /etc/cloud/cloud.cfg.d/* ? > > Michael > > On Fri, Feb 14, 2020 at 11:29 AM Ignazio Cassano > wrote: > > > > Hello, at the following link you can find the cloud init logs file: > > > > https://drive.google.com/open?id=1IXp85kfLAC4H3Jp2pHrkwij61XWiFNva > > > > PS > > I can mount and read metadata mount manually the cdrom and I do not > understand why cloud-init cannot. > > Ignazio > > > > Il giorno ven 14 feb 2020 alle ore 20:14 Ignazio Cassano < > ignaziocassano at gmail.com> ha scritto: > >> > >> Hello, attaced here thera are the cloud-init logs form ubuntu 18 > instance. 
> >> Note I can mount /dev/cdrom and see metadata: > >> mount -o ro /dev/cdrom /mnt > >> ls -laR /mnt > >> total 10 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> drwxr-xr-x 24 root root 4096 Feb 14 17:08 .. > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 ec2 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 openstack > >> > >> /mnt/ec2: > >> total 8 > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2009-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/ec2/2009-04-04: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/ec2/latest: > >> total 5 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 963 Feb 14 17:05 meta-data.json > >> > >> /mnt/openstack: > >> total 22 > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 4 root root 2048 Feb 14 17:05 .. > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2012-08-10 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-04-04 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2013-10-17 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2015-10-15 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-06-30 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2016-10-06 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2017-02-22 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 2018-08-27 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 latest > >> > >> /mnt/openstack/2012-08-10: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1056 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-04-04: > >> total 6 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> > >> /mnt/openstack/2013-10-17: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1759 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2015-10-15: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1809 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-06-30: > >> total 7 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> > >> /mnt/openstack/2016-10-06: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2017-02-22: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. 
> >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/2018-08-27: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> /mnt/openstack/latest: > >> total 8 > >> dr-xr-xr-x 2 root root 2048 Feb 14 17:05 . > >> dr-xr-xr-x 11 root root 2048 Feb 14 17:05 .. > >> -r--r--r-- 1 root root 1824 Feb 14 17:05 meta_data.json > >> -r--r--r-- 1 root root 397 Feb 14 17:05 network_data.json > >> -r--r--r-- 1 root root 2 Feb 14 17:05 vendor_data.json > >> -r--r--r-- 1 root root 14 Feb 14 17:05 vendor_data2.json > >> > >> Thanks > >> Ignazio > >> > >> Il giorno ven 14 feb 2020 alle ore 19:16 Jeremy Stanley < > fungi at yuggoth.org> ha scritto: > >>> > >>> On 2020-02-14 17:51:01 +0100 (+0100), Ignazio Cassano wrote: > >>> > Hello, I configured config drive and centos instances works fine > >>> > with it. Ubuntu bionic tries to get metadata from network and > >>> > cloud-init does not set hostname and does not insert keys for ssh. > >>> > >>> The cloud-init changelog indicates that initial support for > >>> configdrive first appeared in 0.6.3, so the versions in Ubuntu > >>> Bionic (and even Xenial) should be new enough to make use of it. In > >>> fact, the official Ubuntu bionic-updates package suite includes > >>> cloud-init 19.4 (the most recent release), so missing > >>> features/support seem unlikely. Detailed log entries from cloud-init > >>> running at boot might help in diagnosing the problem. > >>> -- > >>> Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Feb 15 12:14:43 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 13:14:43 +0100 Subject: [queens]neutron][metadata] configuration In-Reply-To: References: <20200213130936.3ainb4cift5euslw@yuggoth.org> Message-ID: Hi Jeremy, on ubuntu works if in disk rimane builder we use the variabile DIB_CLOUD_INIT_DATASOURCES with value ConfigDrive Ignazio Il Gio 13 Feb 2020, 16:31 Ignazio Cassano ha scritto: > Hello, config drive is the best solution for our situation. > Thanks > Ignazio > > Il giorno gio 13 feb 2020 alle ore 14:16 Jeremy Stanley > ha scritto: > >> On 2020-02-13 11:19:06 +0100 (+0100), Ignazio Cassano wrote: >> > Hello everyone, in my installation of Queees I am using many >> > provider networks. I don't use openstack router but only dhcp. I >> > would like my instances to reach the metadata agent without the >> > 169.154.169.254 route, so I would like the provider networks to >> > directly reach the metadata agent on the internal api vip. How can >> > I get this configuration? >> >> Have you tried using configdrive instead of the metadata service? >> It's generally more reliable. The main downside is that it doesn't >> change while the instance is running, so if you're wanting to use >> this to update routes for active instances between reboots then I >> suppose it wouldn't solve your problem. >> -- >> Jeremy Stanley >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Sat Feb 15 12:51:03 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 15 Feb 2020 12:51:03 +0000 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: References: <20200214181032.p462ick2esyex3tv@yuggoth.org> Message-ID: <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> On 2020-02-15 13:10:49 +0100 (+0100), Ignazio Cassano wrote: > solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder [...] I see, I didn't realize from your earlier posts that you're building your own images, but it certainly makes sense that you'd need to configure cloud-init appropriately when doing that. Thanks for following up! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From ignaziocassano at gmail.com Sat Feb 15 13:47:58 2020 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 15 Feb 2020 14:47:58 +0100 Subject: [queens][config_drive] not working for ubuntu bionic In-Reply-To: <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> References: <20200214181032.p462ick2esyex3tv@yuggoth.org> <20200215125103.fj3ygmjj7p4yes5k@yuggoth.org> Message-ID: I build my own images in order to insert the heat tools inside them. Some heat features are not included in the standard images. Ignazio Il Sab 15 Feb 2020, 13:57 Jeremy Stanley ha scritto: > On 2020-02-15 13:10:49 +0100 (+0100), Ignazio Cassano wrote: > > solve using DIB_CLOUD_INIT_DATASOURCES in diskimage builder > [...] > > I see, I didn't realize from your earlier posts that you're building > your own images, but it certainly makes sense that you'd need to > configure cloud-init appropriately when doing that. Thanks for > following up! > -- > Jeremy Stanley > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ziaul.ict2018 at gmail.com Sun Feb 16 06:21:36 2020 From: ziaul.ict2018 at gmail.com (Md. Ziaul Haque) Date: Sun, 16 Feb 2020 12:21:36 +0600 Subject: Automatically remove /var/log directory from the openstack instance Message-ID: Hello, Over the past few days we have seen that logs under the instance /var/log directory are removed automatically after an unexpected reboot. We have faced this issue with centos and ubuntu images. The OpenStack version is rocky and the qemu-kvm version is 2.12.0. If anybody has faced the same issue, kindly help us solve it. Thanks & Regards Ziaul -------------- next part -------------- An HTML attachment was scrubbed... URL: From liang.a.fang at intel.com Sun Feb 16 15:43:46 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Sun, 16 Feb 2020 15:43:46 +0000 Subject: [cinder][nova] volume-local-cache meeting results In-Reply-To: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> References: <39695ce2-fe35-6bfb-17f1-bbfaef4293d1@gmail.com> Message-ID: Thanks to Brian for organizing the meeting, so the Nova and Cinder experts could discuss directly; it was very efficient. I think most of you are now very clear about the spec. I updated the spec this weekend; I hope it meets your expectations now. Time flies; this spec has been worked on and discussed for 4 months since the Shanghai PTG. I continue to put effort into it because I am confident it can greatly improve storage performance. Although it is not perfect, I still hope the first edition can land in the U release, and we can improve it in the V release.
Regards LiangFang -----Original Message----- From: Brian Rosmaita Sent: Friday, February 14, 2020 4:08 AM To: openstack-discuss at lists.openstack.org Subject: [cinder][nova] volume-local-cache meeting results Thanks to everyone who attended this morning, we had a productive meeting. If you missed it and want to know what happened: etherpad: https://etherpad.openstack.org/p/volume-local-cache recording: https://youtu.be/P9bouCCoqVo Liang Fang will be updating the specs to reflect what was discussed: cinder spec: https://review.opendev.org/#/c/684556/ nova spec: https://review.opendev.org/#/c/689070/ cheers, brian From gmann at ghanshyammann.com Mon Feb 17 03:59:49 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 16 Feb 2020 21:59:49 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) Message-ID: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Hello Everyone, Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule Highlights: ======== * Deadline was M-2 (R-13 week). * Things are breaking and being fixed daily. * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working on those next week. * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on stable py3.5 env. We have reverted the dropping py3.5 and working now. ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] *** Fixed and gate is green. * * Tempest tox 'all-plugin' usage issue[3] *** Fixed and gate is green. ** neutron-vpass in-tree plugin issue *** Fixed and gate is green. NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. Project wise status and need reviews: ============================ Phase-1 status: The OpenStack services have not merged the py2 drop patches: NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). * Adjutant ** https://review.opendev.org/#/c/706723/ * Masakari ** https://review.opendev.org/#/c/694551/ * Qinling ** https://review.opendev.org/#/c/694687/ Phase-2 status: By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. But we have few repos to merge the patches on priority. * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open How you can help: ============== - Review the patches. Push the patches if I missed any repo. 
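For anyone pushing a missing patch, the changes are usually mechanical; a rough sketch of what a typical drop-py27 patch touches (illustrative only, based on the common pattern rather than any specific repo; see the goal document for the authoritative steps):

  # setup.cfg: declare python3-only support and drop the py2 classifiers
  [metadata]
  python-requires = >=3.6
  classifier =
      Programming Language :: Python :: 3
      Programming Language :: Python :: 3.6
      Programming Language :: Python :: 3.7

  # tox.ini: drop the py27 environment
  [tox]
  envlist = py37,pep8

plus switching the repo's Zuul project templates to the python3-only Ussuri variants.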
[1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc [3] https://bugs.launchpad.net/tempest/+bug/1862240 [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) -gmann From amotoki at gmail.com Mon Feb 17 08:19:53 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Mon, 17 Feb 2020 17:19:53 +0900 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) In-Reply-To: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> References: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Message-ID: On Mon, Feb 17, 2020 at 1:02 PM Ghanshyam Mann wrote: > > Hello Everyone, > > Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > Highlights: > ======== > * Deadline was M-2 (R-13 week). > > * Things are breaking and being fixed daily. > > * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. > > * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. > > * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed > to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few > master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working > on those next week. > > * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. There is another bug fixed. The release notes job in stable branches was broken as the job is run against the master branch but it was run with python 2.7. The project-template in openstack-zuul-jobs was updated [1] and build-openstack-releasenotes job now runs with python3 in all stable branches. If there are jobs in stable branches which runs using the master branch, the similar change might be needed. [1] https://review.opendev.org/#/c/706825/ > > ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on > stable py3.5 env. We have reverted the dropping py3.5 and working now. > > ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] > *** Fixed and gate is green. > > * * Tempest tox 'all-plugin' usage issue[3] > *** Fixed and gate is green. > > ** neutron-vpass in-tree plugin issue > *** Fixed and gate is green. > > NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. > > > Project wise status and need reviews: > ============================ > Phase-1 status: > The OpenStack services have not merged the py2 drop patches: > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > * Adjutant > ** https://review.opendev.org/#/c/706723/ > * Masakari > ** https://review.opendev.org/#/c/694551/ > * Qinling > ** https://review.opendev.org/#/c/694687/ > > Phase-2 status: > By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. > But we have few repos to merge the patches on priority. 
> > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open > > > How you can help: > ============== > - Review the patches. Push the patches if I missed any repo. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html > [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc > [3] https://bugs.launchpad.net/tempest/+bug/1862240 > [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) > > -gmann > > From isanjayk5 at gmail.com Mon Feb 17 08:51:16 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Mon, 17 Feb 2020 14:21:16 +0530 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster Message-ID: Hello openstack-helm team, I am trying to deploy stein cinder service in my k8s cluster using persistent volume and persistent volume claim for local storage or NFS storage. However after deploying cinder in my cluster, the cinder pods remains in Init state even though the PV and PVC are created. Please look at my below post on openstack forum and guide me how to resolve this issue. https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ thank you for your help and support on this. best regards, Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From tinlam at gmail.com Mon Feb 17 09:04:26 2020 From: tinlam at gmail.com (Tin Lam) Date: Mon, 17 Feb 2020 03:04:26 -0600 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster In-Reply-To: References: Message-ID: Hello, Sanjay - IIRC, cinder service in OSH never supported the NFS provisioner. Can you try using the Ceph storage charts instead? Regards, Tin On Mon, Feb 17, 2020 at 2:54 AM Sanjay K wrote: > Hello openstack-helm team, > > I am trying to deploy stein cinder service in my k8s cluster using > persistent volume and persistent volume claim for local storage or NFS > storage. However after deploying cinder in my cluster, the cinder pods > remains in Init state even though the PV and PVC are created. > > Please look at my below post on openstack forum and guide me how to > resolve this issue. > > > https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ > > thank you for your help and support on this. > > best regards, > Sanjay > -- Regards, Tin Lam -------------- next part -------------- An HTML attachment was scrubbed... URL: From isanjayk5 at gmail.com Mon Feb 17 09:24:06 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Mon, 17 Feb 2020 14:54:06 +0530 Subject: [openstack-helm][stein]Cinder pods stuck in init state after deployed in k8s cluster In-Reply-To: References: Message-ID: Hi Tin, Is there any support for local storage for cinder deployment? If yes, how can I try that? Since Ceph is not part of our production deployment, I can't include Ceph in our deployment. thanks and regards, Sanjay On Mon, Feb 17, 2020 at 2:34 PM Tin Lam wrote: > Hello, Sanjay - > > IIRC, cinder service in OSH never supported the NFS provisioner. Can you > try using the Ceph storage charts instead? > > Regards, > Tin > > On Mon, Feb 17, 2020 at 2:54 AM Sanjay K wrote: > >> Hello openstack-helm team, >> >> I am trying to deploy stein cinder service in my k8s cluster using >> persistent volume and persistent volume claim for local storage or NFS >> storage. 
However after deploying cinder in my cluster, the cinder pods >> remains in Init state even though the PV and PVC are created. >> >> Please look at my below post on openstack forum and guide me how to >> resolve this issue. >> >> >> https://ask.openstack.org/en/question/126191/cinder-pods-in-init-state-when-deployed-with-openstack-helm/ >> >> thank you for your help and support on this. >> >> best regards, >> Sanjay >> > > > -- > Regards, > > Tin Lam > -------------- next part -------------- An HTML attachment was scrubbed... URL: From veeraready at yahoo.co.in Mon Feb 17 11:19:39 2020 From: veeraready at yahoo.co.in (VeeraReddy) Date: Mon, 17 Feb 2020 11:19:39 +0000 (UTC) Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> Message-ID: <1415279454.3463025.1581938379342@mail.yahoo.com> Hi mdulko,Thanks you very much,I am able to launch Container and VM side by side in same network on arm64 platform. Issue was openvswitch kernel  Module(Enabled "geneve module" in openvswitch). Now i am able to ping from container to VM but unable to ping Vice Versa (VM to container). And also, I am able ping VM to VM (Both spawned from OpenStack dashboard). Is there any configuration to enable traffic from VM to Container? Regards, Veera. On Thursday, 13 February, 2020, 02:42:21 pm IST, wrote: We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure if downgrading could help. This is a Neutron issue and I don't have much experience on such a low level. You can try asking on IRC, e.g. on #openstack-neutron. On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > Thanks mdulko, > Issue is in openvswitch, iam getting following error in switchd logs > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > Do we need to patch openvswitch to support above flow? > > My ovs version > > [root at node-2088 ~]# ovs-vsctl --version > ovs-vsctl (Open vSwitch) 2.11.0 > DB Schema 7.16.1 > > > > Regards, > Veera. > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > The controller logs are from an hour later than the CNI one. The issues > seems not to be present. > > Isn't your controller still restarting? If so try to use -p option on > `kubectl logs` to get logs from previous run. > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > Hi Mdulko, > > Below are log files: > > > > Controller Log:http://paste.openstack.org/show/789457/ > > cni : http://paste.openstack.org/show/789456/ > > kubelet : http://paste.openstack.org/show/789453/ > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > Please let me know the issue > > > > > > > > Regards, > > Veera. > > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > Hi, > > > > So from this run you need the kuryr-controller logs. 
Apparently the pod > > never got annotated with an information about the VIF. > > > > Thanks, > > Michał > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > Hi mdulko, > > > Thanks for your support. > > > > > > As you mention i removed readinessProbe and > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > on /healthz endpoint. Are those full logs, including the moment you > > > tried spawning a container there? It seems like you only pasted the > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > of kube-apiserver. That is another problem you should investigate - > > > that causes Kuryr pods to restart. > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > logs. > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Please find kuryr-cni logs > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > Hi, > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > Thanks, > > > > Michał > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > Hi, > > > > > I am trying to run kubelet in arm64 platform > > > > > 1.    Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > 2.    Generated kuryr-cni-arm64 container. > > > > > 3.    my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Feb 17 14:06:05 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 17 Feb 2020 08:06:05 -0600 Subject: [goals][Drop Python 2.7 Support] Week R-13 Update (we are at deadline) In-Reply-To: References: <170514d5d69.fd04ed7581004.7286037153527546017@ghanshyammann.com> Message-ID: <1705378699e.f6be3b5d110323.5998502757333700076@ghanshyammann.com> ---- On Mon, 17 Feb 2020 02:19:53 -0600 Akihiro Motoki wrote ---- > On Mon, Feb 17, 2020 at 1:02 PM Ghanshyam Mann wrote: > > > > Hello Everyone, > > > > Below is the progress on "Drop Python 2.7 Support" at end of R-13 week. > > > > Schedule: https://governance.openstack.org/tc/goals/selected/ussuri/drop-py27.html#schedule > > > > > > Highlights: > > ======== > > * Deadline was M-2 (R-13 week). > > > > * Things are breaking and being fixed daily. 
> > > > * Do not worry about the gate failure, it is better to break it now and fix so that we can smoothly release Ussuri. > > > > * Tempest has dropped the py2 and stop running the py2 jobs and so does devstack. > > > > * tempest-full job has been moved to py2 (it started running on py3 when devstack default to py3). This job is not supposed > > to run on master gate (except d-g which should keep running on py2 jobs). tempest-full-py3 is the py3 version of it. But few > > master jobs derived from tempest-full needs to either start using tempest-full-py3 or explicitly enable py3. I will be working > > on those next week. > > > > * Bugs & Status: No open bugs as of now. Below is the status of last week's bugs. > > There is another bug fixed. The release notes job in stable branches > was broken as the job is run against the master branch but it was run > with python 2.7. > The project-template in openstack-zuul-jobs was updated [1] and > build-openstack-releasenotes job now runs with python3 in all stable > branches. > If there are jobs in stable branches which runs using the master > branch, the similar change might be needed. > [1] https://review.opendev.org/#/c/706825/ Thanks amotoki for updates and fix. This is very helpful. Yeah, we need to switch the master related jobs on py3 irrespective where they run. -gmann > > > > > ** QA dropped py3.5 support also from many tooling like stackviz etc which are branchless and should keep working on > > stable py3.5 env. We have reverted the dropping py3.5 and working now. > > > > ** Nova stable branches are still falling for the u-c reason on nova-live-migration. Fixes are up[2] > > *** Fixed and gate is green. > > > > * * Tempest tox 'all-plugin' usage issue[3] > > *** Fixed and gate is green. > > > > ** neutron-vpass in-tree plugin issue > > *** Fixed and gate is green. > > > > NOTE: I know few py2.7 drop patches are failing on few projects, I will continue debugging those. Help from the failing project will be greatly appreciated to meet the deadline. > > > > > > Project wise status and need reviews: > > ============================ > > Phase-1 status: > > The OpenStack services have not merged the py2 drop patches: > > NOTE: This was supposed to be completed by milestone-1 (Dec 13th, 19). > > > > * Adjutant > > ** https://review.opendev.org/#/c/706723/ > > * Masakari > > ** https://review.opendev.org/#/c/694551/ > > * Qinling > > ** https://review.opendev.org/#/c/694687/ > > > > Phase-2 status: > > By today, we should be completing the phase-2 work which is nothing but drop py2 from everything except requirement repo. > > But we have few repos to merge the patches on priority. > > > > * Open review: https://review.opendev.org/#/q/topic:drop-py27-support+status:open > > > > > > How you can help: > > ============== > > - Review the patches. Push the patches if I missed any repo. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012463.html > > [2] https://review.opendev.org/#/q/I8190f93e0a754fa59ed848a3a230d1ef63a06abc > > [3] https://bugs.launchpad.net/tempest/+bug/1862240 > > [4] https://review.opendev.org/#/q/topic:bug/1862240+(status:open+OR+status:merged) > > > > -gmann > > > > > From dpawlik at redhat.com Mon Feb 17 15:39:26 2020 From: dpawlik at redhat.com (Daniel Pawlik) Date: Mon, 17 Feb 2020 16:39:26 +0100 Subject: [tripleo] RDO image server migration Message-ID: Hello, Todays migration of images server to the new cloud provider was not finished. 
We are planning to continue tomorrow (18th Feb), on 10 AM UTC. What was done today: - moved rhel-8 build base image to new image server What will be done tomorrow: - change DNS record - disable upload images to old host - sync old images (if some are available) Migration should be transparent to the end user. However, you have to keep in mind the unforeseen events that may occur. Write access to the old server will be disabled and until DNS propagation is not done, you could have read-only access. If you have any doubts or concerns, please do not hesitate to contact: - Daniel Pawlik - Javier Pena Regards, Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From jasonanderson at uchicago.edu Mon Feb 17 17:19:36 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Mon, 17 Feb 2020 17:19:36 +0000 Subject: [blazar] Spec for flexible reservation policy enforcement Message-ID: Hi all, Last week I introduced a final version of the flexible reservation usage spec[1]. This builds on the original spec proposed by Pierre Riteau some months ago. The purpose of the functionality is to allow operators to define various limits or more sophisticated policies around advanced reservations, in order to prevent e.g., a single user from reserving all resources for an indefinite amount of time. Instead of a quota-based approach, the decisions are delegated to an external service; a future improvement could be providing a default implementation (perhaps using quotas and some default time limits) that can be deployed alongside Blazar. I would appreciate reviews from the core team and feedback from others as to the design. This work is planned for Ussuri pending spec approval. Thanks, /Jason [1]: https://review.opendev.org/#/c/707042/ -- Jason Anderson Chameleon DevOps Lead Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Mon Feb 17 18:45:40 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 17 Feb 2020 12:45:40 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 Message-ID: Hi, We seem to have created a bit of a problem with the latest oslo.limit release. In keeping with our general policy of bumping the major version when we release libraries without py2 support, oslo.limit got bumped to 1.0. Unfortunately, being a pre-1.0 library it should have only had the minor version bumped. This puts us in an awkward situation since the library is still under heavy development and we expect the API will change, possibly multiple times, before we're ready to commit to a stable API. We also need the ability to continue doing releases during development so we can test the library with consuming projects. I can think of a few options, although I won't guarantee all of these are even possible: * Unpublish 1.0.0 and do further pre-1.0 development on a feature branch cut from before we released 1.0.0. Once we're ready for "1.0", we merge the feature branch to master and release it as 2.0.0. * Stick a big disclaimer in the 1.0.0 docs that it is still under development and proceed to treat 1.0 the same as we would have treated a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. * Make our breaking changes as needed and just continue bumping the major version every release. 
This unfortunately makes it hard to communicate via the version when the library is ready for use. :-/ * [some better idea that you suggest :-)] Any input on the best way to handle this is greatly appreciated. Thanks. -Ben From sean.mcginnis at gmx.com Mon Feb 17 19:05:46 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Mon, 17 Feb 2020 13:05:46 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: Message-ID: On 2/17/20 12:45 PM, Ben Nemec wrote: > Hi, > > We seem to have created a bit of a problem with the latest oslo.limit > release. In keeping with our general policy of bumping the major > version when we release libraries without py2 support, oslo.limit got > bumped to 1.0. Unfortunately, being a pre-1.0 library it should have > only had the minor version bumped. > > This puts us in an awkward situation since the library is still under > heavy development and we expect the API will change, possibly multiple > times, before we're ready to commit to a stable API. We also need the > ability to continue doing releases during development so we can test > the library with consuming projects. > > I can think of a few options, although I won't guarantee all of these > are even possible: > > * Unpublish 1.0.0 and do further pre-1.0 development on a feature > branch cut from before we released 1.0.0. Once we're ready for "1.0", > we merge the feature branch to master and release it as 2.0.0. > In general, the idea of unpublishing something is a Very Bad Thing. That said, in this situation I think it's worth considering. Publishing something as 1.0 conveys something that will be used by consumers to make assumptions about the state of the library, which in this case would be very misleading. It's not easy to unpublish (nor should it be) but if we can get some infra help, we should be able to take down that library from PyPi and push up a patch to the openstack/releases repo removing the 1.0.0 release. We can then do another release patch to do a 0.2.0 (or whatever the team thinks is appropriate) to re-release the package under a version number that more accurately conveys the status of the library. > * Stick a big disclaimer in the 1.0.0 docs that it is still under > development and proceed to treat 1.0 the same as we would have treated > a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. > This is certainly an options too. And of course, everyone always reads the docs, so we should be totally safe. ;) > * Make our breaking changes as needed and just continue bumping the > major version every release. This unfortunately makes it hard to > communicate via the version when the library is ready for use. :-/ > Also an option. This does work, and there's no reason we can't have multiple major version bumps over a short period of time. But like you say, communication is an issue here, and with the current release we are communicating something that we probably shouldn't be. > * [some better idea that you suggest :-)] > > Any input on the best way to handle this is greatly appreciated. Thanks. > > -Ben > From peter.matulis at canonical.com Mon Feb 17 19:56:03 2020 From: peter.matulis at canonical.com (Peter Matulis) Date: Mon, 17 Feb 2020 14:56:03 -0500 Subject: [charms] OpenStack Charms 20.02 release is now available Message-ID: The 20.02 release of the OpenStack Charms is now available. 
This release brings several new features to the existing OpenStack Charms deployments for Queens, Rocky, Stein, Train, and many other stable combinations of Ubuntu + OpenStack. Please see the Release Notes for full details: https://docs.openstack.org/charm-guide/latest/2002.html == Highlights == * New charm: manila-ganesha There is a new charm to support Ganesha for use with Manila and CephFS: manila-ganesha. This charm, as well as the requisite manila and ceph-fs charms, have been promoted to supported status. * Swift global cluster With OpenStack Newton or later, support for a global cluster with Swift is available as a tech preview. * OVN With OpenStack Train or later, support for integration with Open Virtual Network (OVN) is available as a tech preview. * MySQL8 With Ubuntu 19.10 or later, support for MySQL 8 is available as a tech preview. * New charms: watcher and watcher-dashboard There are two new charms to support Watcher, the resource optimization service for multi-tenant OpenStack-based clouds: watcher and watcher-dashboard. This is the first release of these charms and they are available as a tech preview. * Policy overrides The policy overrides feature provides operators with a mechanism to override policy defaults on a per-service basis. The last release (19.10) introduced the feature for a number of charms. This release adds support for openstack-dashboard and octavia charms. * Disabling snapshots as a boot source for the OpenStack dashboard Snapshots can be disabled as valid boot sources for launching instances in the dashboard. This is done via the new 'disable-instance-snapshot' configuration option in the openstack-dashboard charm. == OpenStack Charms team == The OpenStack Charms team can be contacted on the #openstack-charms IRC channel on Freenode. == Thank you == Lots of thanks to the below 49 charm contributors who squashed 53 bugs, enabled support for a new release of OpenStack, improved documentation, and added exciting new functionality! Liam Young Corey Bryant Peter Matulis Sahid Orentino Ferdjaoui Frode Nordahl Alex Kavanagh David Ames Chris MacNaughton Stamatis Katsaounis inspurericzhang Tiago Pasqualini Andrew McLeod ShangXiao Ryan Beisner James Page Felipe Reyes kangyufei Edward Hope-Morley Adam Dyess Xuan Yandong Arif Ali Chris Johnston wangfaxin Aurelien Lourot Alexandros Soumplis Jose Guedez Qitao Tytus Kurek Seyeong Kim Dongdong Tao Haw Loeung Jorge Niedbalski Qiu Fossen Yanos Angelopoulos Syed Mohammad Adnan Karim JiaSiRui Xiyue Wang Adam Dyess Jose Delarosa Alexander Balderson Andreas Jaeger Dmitrii Shcherbakov Jacek Nykis Thobias Trevisan Hemanth Nakkina Shuo Liu Drew Freiberger zhangboye Aggelos Kolaitis -- OpenStack Charms Team From doug at doughellmann.com Mon Feb 17 20:02:14 2020 From: doug at doughellmann.com (Doug Hellmann) Date: Mon, 17 Feb 2020 15:02:14 -0500 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: Message-ID: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> > On Feb 17, 2020, at 2:05 PM, Sean McGinnis wrote: > > On 2/17/20 12:45 PM, Ben Nemec wrote: >> Hi, >> >> We seem to have created a bit of a problem with the latest oslo.limit >> release. In keeping with our general policy of bumping the major >> version when we release libraries without py2 support, oslo.limit got >> bumped to 1.0. Unfortunately, being a pre-1.0 library it should have >> only had the minor version bumped. 
>> >> This puts us in an awkward situation since the library is still under >> heavy development and we expect the API will change, possibly multiple >> times, before we're ready to commit to a stable API. We also need the >> ability to continue doing releases during development so we can test >> the library with consuming projects. >> >> I can think of a few options, although I won't guarantee all of these >> are even possible: >> >> * Unpublish 1.0.0 and do further pre-1.0 development on a feature >> branch cut from before we released 1.0.0. Once we're ready for "1.0", >> we merge the feature branch to master and release it as 2.0.0. >> > In general, the idea of unpublishing something is a Very Bad Thing. > > That said, in this situation I think it's worth considering. Publishing > something as 1.0 conveys something that will be used by consumers to > make assumptions about the state of the library, which in this case > would be very misleading. > > It's not easy to unpublish (nor should it be) but if we can get some > infra help, we should be able to take down that library from PyPi and > push up a patch to the openstack/releases repo removing the 1.0.0 > release. We can then do another release patch to do a 0.2.0 (or whatever > the team thinks is appropriate) to re-release the package under a > version number that more accurately conveys the status of the library. I’m not 100% sure, but I think if you remove a release from PyPI you can’t release again using that version number. So a future stable release would have to be 1.1.0, or something like that. > >> * Stick a big disclaimer in the 1.0.0 docs that it is still under >> development and proceed to treat 1.0 the same as we would have treated >> a pre-1.0 library. Again, when ready for "1.0" we tag it 2.0.0. >> > This is certainly an options too. And of course, everyone always reads > the docs, so we should be totally safe. ;) > >> * Make our breaking changes as needed and just continue bumping the >> major version every release. This unfortunately makes it hard to >> communicate via the version when the library is ready for use. :-/ >> > Also an option. This does work, and there's no reason we can't have > multiple major version bumps over a short period of time. But like you > say, communication is an issue here, and with the current release we are > communicating something that we probably shouldn't be. When is the next breaking release likely to be? >> * [some better idea that you suggest :-)] >> >> Any input on the best way to handle this is greatly appreciated. Thanks. >> >> -Ben From amotoki at gmail.com Mon Feb 17 20:08:49 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Tue, 18 Feb 2020 05:08:49 +0900 Subject: [neutron] bug deputy report (Feb 10-17) Message-ID: Hi, Here is a neutron bug deputy report last week. While 15 new bugs were reported, several bugs needs further investigation and are not triaged yet. I will look into them but it would be great if they need attentions. --- Undecided - https://bugs.launchpad.net/neutron/+bug/1862611 Neutron try to register invalid host to nova aggregate for ironic routed network Undecided It was covered by the last week report, but it needs more eyes familiar with routed network. - https://bugs.launchpad.net/neutron/+bug/1862851 update_floatingip_statuses: StaleDataError: UPDATE statement on table 'standardattributes' expected to update 1 row(s); 0 were matched. Undecided Needs to be checked by folks familiar with DVR. It looks like a race condition. 
- https://bugs.launchpad.net/neutron/+bug/1862932 [neutron-bgp-dragent] passive agents send wrong number of routes Undecided Needs attention by folks familiar with dynamic-routing stuff. - https://bugs.launchpad.net/neutron/+bug/1863068 Dublicated Neutron Meter Rules in different projects kills metering Undecided - https://bugs.launchpad.net/neutron/+bug/1863091 IPVS setup fails with openvswitch firewall driver, works with iptables_hybrid Undecided - https://bugs.launchpad.net/neutron/+bug/1863110 2/3 snat namespace transitions to master Undecided - https://bugs.launchpad.net/neutron/+bug/1863201 stein regression listing security group rules Undecided - https://bugs.launchpad.net/neutron/+bug/1863213 Spawning of DHCP processes fail: invalid netcat options Undecided ralonsoh assigned himself. Any update on this? Incomplete - https://bugs.launchpad.net/neutron/+bug/1863206 Port is reported with 'port_security_enabled=True' without port-security extension Incomplete I think the bug author misunderstood the default of port-security extension behavior. Double check would be appreciated. Confirmed - https://bugs.launchpad.net/neutron/+bug/1863577 [ovn] tempest.scenario.test_network_v6.TestGettingAddress tests failing 100% times Confirmed, High In Progress - https://bugs.launchpad.net/neutron/+bug/1862618 [OVN] functional test test_virtual_port_delete_parents is unstable In Progress, Medium - https://bugs.launchpad.net/neutron/+bug/1862703 Neutron remote security group does not work In Progress, High - https://bugs.launchpad.net/neutron/+bug/1862648 [OVN] Reduce the number of tables watched by MetadataProxyHandler High, Fix Released - https://bugs.launchpad.net/neutron/+bug/1862893 [OVN]Updating a QoS policy for a port will cause a KeyEerror In Progress, Low Fix Released - https://bugs.launchpad.net/neutron/+bug/1862927 "ncat" rootwrap filter is missing Fix Released The root cause turned out that we need to specify rootwrap filter config by abspath. From fungi at yuggoth.org Mon Feb 17 20:42:34 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 17 Feb 2020 20:42:34 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> Message-ID: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: [...] > I’m not 100% sure, but I think if you remove a release from PyPI > you can’t release again using that version number. So a future > stable release would have to be 1.1.0, or something like that. [...] More accurately, you can't republish the same filename to PyPI even if it's been previously deleted. You could however publish a oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz though that seems a bit of a messy workaround. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at fried.cc Mon Feb 17 22:15:27 2020 From: openstack at fried.cc (Eric Fried) Date: Mon, 17 Feb 2020 16:15:27 -0600 Subject: [nova] Ussuri feature scrub Message-ID: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Nova maintainers and contributors- { Please refer to this ML thread [1][2] for background. } Now that spec freeze has passed, I would like to assess the Design:Approved blueprints and understand how many we could reasonably expect to land in Ussuri. 
We completed 25 blueprints in Train. However, mriedem is gone, and it is likely that I will drop off the radar after Q1. Obviously all blueprints/releases/reviews/etc. are not created equal, but using stackalytics review numbers as a rough heuristic, I expect about 20 blueprints to get completed in Ussuri. If we figure that 5-ish of the incompletes will be due to factors beyond our control, that would mean we should Direction:Approve about 25. As of this writing: - 30 blueprints are targeted for ussuri [3]. Of these, - 7 are already implemented. Of the remaining 23, - 2 are not yet Design:Approved. These will need an exception if they are to proceed. And - 19 (including the unapproved ones) have code in various stages. I would like to see us cut 5-ish of the 30. I have made an etherpad [4] with the unimplemented blueprints listed with owners and code links. I made notes on some of the ones I would like to see prioritized, and a couple on which I'm more meh. If you have a stake in Nova/Ussuri, I encourage you to weigh in. How will we ultimately decide? Will we actually cut anything? I don't have the answers yet. Let's go through this exercise and see if anything obvious falls out, and then we can figure out the next steps. Thanks, efried [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009832.html [2] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/thread.html#9835 [3] https://blueprints.launchpad.net/nova/ussuri [4] https://etherpad.openstack.org/p/nova-ussuri-planning From er.gauravgoyal at gmail.com Mon Feb 17 21:57:51 2020 From: er.gauravgoyal at gmail.com (Gaurav Goyal) Date: Mon, 17 Feb 2020 16:57:51 -0500 Subject: [Ask OpenStack] 7 updates about "galera", "rabbiitmq", "auth_token", "keysone", "swift-proxy" and more In-Reply-To: <20200213032627.13937.95474@ask01.openstack.org> References: <20200213032627.13937.95474@ask01.openstack.org> Message-ID: Dear Openstack Experts, We are using a big openstack setup in our organization and different operation teams adds new computes nodes to it. We as system admin of this Openstack environment wants to audit the configuration parameters on all the nodes. Can you please help to suggest a better way (Tools) to audit this environment? Awaiting your kind reply. Regards Gaurav Goyal On Wed, Feb 12, 2020 at 10:26 PM wrote: > Dear Gaurav Goyal, > > Ask OpenStack has these updates, please have a look: > > - Kubernetes cluster created with Magnum doesn't work > (new > question) > - how do i list the top objects consuming more space in the swift > container > (new > question) > - action requests is empty > (new > question) > - How to Migrate OpenStack Instance from one Tenant to another > (new > question) > - Architecture of keystone, swift and memcached together > (3 > rev) > - why is octavia not using keystone public endpoint to validate tokens? > (new > question) > - Error while launching instance on openstack > (2 > rev, 1 ans, 3 ans rev) > > To change frequency and content of these alerts, please visit your user > profile > . > To unsubscribe, visit this page > > If you believe that this message was sent in an error, please email about > it the forum administrator at communitymngr at openstack.org. > ------------------------------ > > Sincerely, > Ask OpenStack Administrator > > To unsubscribe: there's a big "Stop Email" button in the link to your user > profile above. > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From openstack at nemebean.com Mon Feb 17 23:18:42 2020 From: openstack at nemebean.com (Ben Nemec) Date: Mon, 17 Feb 2020 17:18:42 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: On 2/17/20 2:42 PM, Jeremy Stanley wrote: > On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > [...] >> I’m not 100% sure, but I think if you remove a release from PyPI >> you can’t release again using that version number. So a future >> stable release would have to be 1.1.0, or something like that. > [...] > > More accurately, you can't republish the same filename to PyPI even > if it's been previously deleted. You could however publish a > oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > though that seems a bit of a messy workaround. > This seems sensible - it would be kind of like rewriting history in a git repo to re-release 1.0 with different content. I'm also completely fine with having to use a different release number for our eventual 1.0 release. It may make our release version checks unhappy, but since this is (hopefully) not a thing we'll be doing regularly I imagine we can find a way around that. If we can pull the 1.0.0 release that would be ideal since as Sean mentioned people aren't good about reading docs and a 1.0 implies some things that aren't true here. From liang.a.fang at intel.com Tue Feb 18 01:18:32 2020 From: liang.a.fang at intel.com (Fang, Liang A) Date: Tue, 18 Feb 2020 01:18:32 +0000 Subject: [nova][sfe] Support volume local cache Message-ID: Hi We would like to have a spec freeze exception for the spec: Support volume local cache [1]. This is part of cross project contribution, with another spec in cinder [2]. I will attend the Nova meeting on February 20 2020 1400 UTC. [1] https://review.opendev.org/#/c/689070/ [2] https://review.opendev.org/#/c/684556/ Regards Liang -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhangbailin at inspur.com Tue Feb 18 02:57:28 2020 From: zhangbailin at inspur.com (=?gb2312?B?QnJpbiBaaGFuZyjVxbDZwdYp?=) Date: Tue, 18 Feb 2020 02:57:28 +0000 Subject: [nova][sfe] Support re-configure deleted_on_termination in server Message-ID: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> Hi, nova: We would like to have a spec freeze exception for the spec: Support re-configure deleted_on_termination in server [1], and it’s PoC code in [2] I will attend the nova meeting on February 20 2020 1400 UTC as much as possible. [1] https://review.opendev.org/#/c/580336/ [2] https://review.opendev.org/#/c/693828/ brinzhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Feb 18 04:30:23 2020 From: hongbin034 at gmail.com (Hongbin Lu) Date: Mon, 17 Feb 2020 23:30:23 -0500 Subject: [kolla][zun] Zun image source Message-ID: Hi Kolla team, I was looking into the CentOS Zun image downloaded from DockerHub. It looks the source code is from stable/train branch: $ docker run kolla/centos-source-zun-compute:master cat zun/zun.egg-info/pbr.json {"git_version": "25e56636", "is_release": false} The git_version points to the stable/train branch, but I think it should point to the master branch. 
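One way to double-check which branch a reported git_version belongs to is to look the commit up in a local checkout of zun; a rough sketch (illustrative, using the SHA reported above):

  git clone https://opendev.org/openstack/zun
  cd zun
  git branch -r --contains 25e56636

If only origin/stable/train is listed, the image contents really do come from the Train branch rather than master.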
FWIW, I also checked the Ubuntu image and the git version in there looks correct: $ docker run kolla/ubuntu-source-zun-compute:master cat zun/zun.egg-info/pbr.json {"git_version": "6fbf52ae", "is_release": false} Any suggestion? Best regards, Hongbin -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Tue Feb 18 06:40:37 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 Feb 2020 07:40:37 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: At first glance the documentation don't seem to quote the version, can you confirm that point. If we decide to drop the pypi version we also need to be sure to keep the documentation version aligned with the latest version available of the doc. If the doc version is only represented by "latest" I don't think that's an issue here. Always in a situation where we decide to drop the pypi version, what's about this doc? (It will become false) => https://releases.openstack.org/ussuri/index.html#ussuri-oslo-limit Could be worth to initiate a doc to track things that should be updated too to don't missing something. Le mar. 18 févr. 2020 à 00:22, Ben Nemec a écrit : > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > > On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > > [...] > >> I’m not 100% sure, but I think if you remove a release from PyPI > >> you can’t release again using that version number. So a future > >> stable release would have to be 1.1.0, or something like that. > > [...] > > > > More accurately, you can't republish the same filename to PyPI even > > if it's been previously deleted. You could however publish a > > oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > > though that seems a bit of a messy workaround. > > > > This seems sensible - it would be kind of like rewriting history in a > git repo to re-release 1.0 with different content. I'm also completely > fine with having to use a different release number for our eventual 1.0 > release. It may make our release version checks unhappy, but since this > is (hopefully) not a thing we'll be doing regularly I imagine we can > find a way around that. > > If we can pull the 1.0.0 release that would be ideal since as Sean > mentioned people aren't good about reading docs and a 1.0 implies some > things that aren't true here. > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ccamacho at redhat.com Tue Feb 18 07:01:57 2020 From: ccamacho at redhat.com (Carlos Camacho Gonzalez) Date: Tue, 18 Feb 2020 08:01:57 +0100 Subject: [tripleo] deep-dive containers & tooling In-Reply-To: References: Message-ID: Hi, Here you have the video published in case you want to see it[1], sorry for the delay but I had some issues related to the audio codecs when processing the video. [1]: https://youtu.be/D18RaSBGyQU Cheers, Carlos. On Fri, Feb 7, 2020 at 4:23 PM Carlos Camacho Gonzalez wrote: > Thanks Emilien for the session, > > I wasn't able to be present but I'll proceed to edit/publish it in the > TripleO youtube channel. > > Thanks! > > On Fri, Feb 7, 2020 at 4:21 PM Emilien Macchi wrote: > >> Thanks for joining, and the great questions, I hope you learned >> something, and that we can do it again soon. >> >> Here is the recording: >> https://bluejeans.com/s/vTSAY >> Slides: >> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/edit >> >> >> >> On Wed, Feb 5, 2020 at 7:22 PM Emilien Macchi wrote: >> >>> Of course it'll be recorded and the link will be available for everyone. >>> >>> On Wed., Feb. 5, 2020, 7:14 p.m. Emilien Macchi, >>> wrote: >>> >>>> Hi folks, >>>> >>>> On Friday I'll do a deep-dive on where we are with container tools. >>>> It's basically an update on the removal of Paunch, what will change etc. >>>> >>>> I'll be on Bluejeans at 2pm UTC, anyone is welcome to join and ask >>>> questions or give feedback. >>>> >>>> https://bluejeans.com/6007759543 >>>> Link of the slides: >>>> https://docs.google.com/presentation/d/111sEwyIKxx2NCqTIQizdPNVaRbv0HAT5YeMGx1YIGqA/ >>>> -- >>>> Emilien Macchi >>>> >>> >> >> -- >> Emilien Macchi >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Tue Feb 18 08:56:55 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Tue, 18 Feb 2020 09:56:55 +0100 Subject: [kolla][zun] Zun image source In-Reply-To: References: Message-ID: Hi Hongbin, it's confusing but Ussuri branches of many projects stopped working on CentOS 7, hence we pinned it to Train for the time being. The CentOS 8 images properly use the Ussuri (master) branches, as do other distros. CentOS 8 images will soon replace CentOS 7 ones on master and the confusion will finally be gone. -yoctozepto wt., 18 lut 2020 o 05:39 Hongbin Lu napisał(a): > > Hi Kolla team, > > I was looking into the CentOS Zun image downloaded from DockerHub. It looks the source code is from stable/train branch: > > $ docker run kolla/centos-source-zun-compute:master cat zun/zun.egg-info/pbr.json > {"git_version": "25e56636", "is_release": false} > > The git_version points to the stable/train branch, but I think it should point to the master branch. FWIW, I also checked the Ubuntu image and the git version in there looks correct: > > $ docker run kolla/ubuntu-source-zun-compute:master cat zun/zun.egg-info/pbr.json > {"git_version": "6fbf52ae", "is_release": false} > > Any suggestion? 
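As an aside, the per-image source can in principle be overridden locally at build time for anyone who needs master sources on CentOS 7 in the meantime; a rough, untested sketch of a kolla-build.conf override, where the section name and keys are assumptions based on the usual source-image layout:

  [zun-base]
  type = git
  location = https://opendev.org/openstack/zun
  reference = master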
> > Best regards, > Hongbin From thierry at openstack.org Tue Feb 18 10:23:18 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 11:23:18 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> Message-ID: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Ben Nemec wrote: > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: >> [...] >>> I’m not 100% sure, but I think if you remove a release from PyPI >>> you can’t release again using that version number. So a future >>> stable release would have to be 1.1.0, or something like that. >> [...] >> >> More accurately, you can't republish the same filename to PyPI even >> if it's been previously deleted. You could however publish a >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz >> though that seems a bit of a messy workaround. >> > > This seems sensible - it would be kind of like rewriting history in a > git repo to re-release 1.0 with different content. I'm also completely > fine with having to use a different release number for our eventual 1.0 > release. It may make our release version checks unhappy, but since this > is (hopefully) not a thing we'll be doing regularly I imagine we can > find a way around that. > > If we can pull the 1.0.0 release that would be ideal since as Sean > mentioned people aren't good about reading docs and a 1.0 implies some > things that aren't true here. As others suggested, the simplest is probably to remove 1.0.0 from PyPI and releases.o.o, and then wait until the API is stable to push a 2.0.0 tag. That way we don't break anything (the tag stays, we still increment releases, we do not rewrite history, we do not use weird post1 bits) but just limit the diffusion of the confusing 1.0.0 artifact. I'm not sure a feature branch is really needed ? -- Thierry Carrez (ttx) From moguimar at redhat.com Tue Feb 18 10:34:37 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Tue, 18 Feb 2020 11:34:37 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: If removing 1.0.0 is the way we choose to go, people who already have 1.0.0 won't be able to get "newer" 0.x.y versions. We will need an announcement to blacklist 1.0.0. Then, when the time comes to finally make it stable, we can choose to either go 2.0.0 or 1.0.1. We should specifically put in the installation page instructions to blacklist 1.0.0 in requirements files. On Tue, Feb 18, 2020 at 11:24 AM Thierry Carrez wrote: > Ben Nemec wrote: > > > > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > >> [...] > >>> I’m not 100% sure, but I think if you remove a release from PyPI > >>> you can’t release again using that version number. So a future > >>> stable release would have to be 1.1.0, or something like that. > >> [...] > >> > >> More accurately, you can't republish the same filename to PyPI even > >> if it's been previously deleted. You could however publish a > >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > >> though that seems a bit of a messy workaround. 
> >> > > > > This seems sensible - it would be kind of like rewriting history in a > > git repo to re-release 1.0 with different content. I'm also completely > > fine with having to use a different release number for our eventual 1.0 > > release. It may make our release version checks unhappy, but since this > > is (hopefully) not a thing we'll be doing regularly I imagine we can > > find a way around that. > > > > If we can pull the 1.0.0 release that would be ideal since as Sean > > mentioned people aren't good about reading docs and a 1.0 implies some > > things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a 2.0.0 > tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > > -- > Thierry Carrez (ttx) > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 18 10:40:28 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 11:40:28 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Moises Guimaraes de Medeiros wrote: > If removing 1.0.0 is the way we choose to go, people who already have > 1.0.0 won't be able to get "newer" 0.x.y versions. Indeed. Do we need to release 0.x.y versions before the API stabilizes, though ? Do we expect anything to use it ? If we really do, then the least confusing might be to keep 1.0.0 and bump major every time you release a breaking change. -- Thierry Carrez (ttx) From lyarwood at redhat.com Tue Feb 18 11:06:58 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Tue, 18 Feb 2020 11:06:58 +0000 Subject: [nova][cinder] What should the behaviour of extend_volume be with attached encrypted volumes? In-Reply-To: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> References: <20200213095102.qrvvdqbxa22jceo7@lyarwood.usersys.redhat.com> Message-ID: <20200218110643.pyxnrb67pbpcqajn@lyarwood.usersys.redhat.com> On 13-02-20 09:51:02, Lee Yarwood wrote: > Hello all, > > The following bug was raised recently regarding a failure to extend > attached encrypted volumes: > > Failing to extend an attached encrypted volume > https://bugs.launchpad.net/nova/+bug/1861071 > > I've worked up a series below that resolves this for LUKSv1 volumes by > taking the LUKSv1 header into account before calling Libvirt to resize > the block device within the instance: > > https://review.opendev.org/#/q/topic:bug/1861071 > > This results in the instance visable block device being resized to a > size just smaller than that requested through Cinder's API. > > My question to the list is if that behaviour is acceptable given the > same call to extend an attached unencrypted volume *will* grow the > instance visable block device to the requested size? Bumping the thread as I'm still looking for input. 
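To put rough numbers on it (illustrative only, assuming a typical LUKSv1 header of around 2MiB): a request to extend an attached encrypted volume to 10GiB would leave the guest seeing a block device of roughly 10GiB minus that ~2MiB header, whereas the same request against an unencrypted volume grows the guest visible device to the full 10GiB.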
The above topic is ready for review now so if I don't hear any objections I'll move forward with the current approach of making the user visible block device smaller within the instance. Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From hberaud at redhat.com Tue Feb 18 11:09:35 2020 From: hberaud at redhat.com (Herve Beraud) Date: Tue, 18 Feb 2020 12:09:35 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: Le mar. 18 févr. 2020 à 11:42, Thierry Carrez a écrit : > Moises Guimaraes de Medeiros wrote: > > If removing 1.0.0 is the way we choose to go, people who already have > > 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. > Could be a more proper solution. > -- > Thierry Carrez (ttx) > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From sfinucan at redhat.com Tue Feb 18 11:09:52 2020 From: sfinucan at redhat.com (Stephen Finucane) Date: Tue, 18 Feb 2020 11:09:52 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On Tue, 2020-02-18 at 11:40 +0100, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: > > If removing 1.0.0 is the way we choose to go, people who already have > > 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. Agreed. 
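To illustrate what that means for consumers (example specifiers only): anyone worried about breakage can bound the major in their requirements, e.g. oslo.limit>=1.0.0,<2.0.0, or exclude a known bad tag outright with oslo.limit!=1.0.0, so nothing needs to be rewritten on our side.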
I don't imageine anyone is using this yet, and so long as we use a future major version to indicate breaking changes, what would it matter even if they were? I think we should just keep 1.0.0 and remember to release 2.0.0 when it's done, personally. Certainly a lot less work. Stephen From zhengyupann at 163.com Tue Feb 18 11:12:39 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Tue, 18 Feb 2020 19:12:39 +0800 (CST) Subject: [neutron] Can br-ex and br-tun use the same interface? Message-ID: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> hi, I have only two physical interfaces. In my deploying, network node and compute node are the same. Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the other interface? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alfredo.deluca at gmail.com Tue Feb 18 11:36:50 2020 From: alfredo.deluca at gmail.com (Alfredo De Luca) Date: Tue, 18 Feb 2020 12:36:50 +0100 Subject: [CINDER] Distributed storage alternatives In-Reply-To: References: Message-ID: Thanks Burak and Ignazio. Appreciate it On Thu, Feb 13, 2020 at 10:19 PM Burak Hoban wrote: > Hi guys, > > We use Dell EMC VxFlex OS, which in its current version allows for free > use and commercial (in version 3.5 a licence is needed, but its perpetual). > It's similar to Ceph but more geared towards scale and performance etc (it > use to be called ScaleIO). > > Other than that, I know of a couple sites using SAN storage, but a lot of > people just seem to use Ceph. > > Cheers, > > Burak > > ------------------------------ > > Message: 2 > Date: Thu, 13 Feb 2020 18:20:29 +0100 > From: Ignazio Cassano > To: Alfredo De Luca > Cc: openstack-discuss > Subject: Re: [CINDER] Distributed storage alternatives > Message-ID: > < > CAB7j8cXLQWh5fx-E9AveUEa6OncDwCL6BOGc-Pm2TX4FKwnUKg at mail.gmail.com> > Content-Type: text/plain; charset="utf-8" > > Hello Alfredo, I think best opensource solution is ceph. > As far as commercial solutions are concerned we are working with network > appliance (netapp) and emc unity. > Regards > Ignazio > > Il Gio 13 Feb 2020, 13:48 Alfredo De Luca ha > scritto: > > > Hi all. > > we 'd like to explore storage back end alternatives to CEPH for > > Openstack > > > > I am aware of GlusterFS but what would you recommend for distributed > > storage like Ceph and specifically for block device provisioning? > > Of course must be: > > > > 1. *Reliable* > > 2. *Fast* > > 3. *Capable of good performance over WAN given a good network back > > end* > > > > Both open source and commercial technologies and ideas are welcome. > > > > Cheers > > > > -- > > *Alfredo* > > > > > > _____________________________________________________________________ > > The information transmitted in this message and its attachments (if any) > is intended > only for the person or entity to which it is addressed. > The message may contain confidential and/or privileged material. Any > review, > retransmission, dissemination or other use of, or taking of any action in > reliance > upon this information, by persons or entities other than the intended > recipient is > prohibited. > > If you have received this in error, please contact the sender and delete > this e-mail > and associated material from any computer. > > The intended recipient of this e-mail may only use, reproduce, disclose or > distribute > the information contained in this e-mail and any attached files, with the > permission > of the sender. 
> > This message has been scanned for viruses. > _____________________________________________________________________ > -- *Alfredo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Feb 18 11:39:25 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 11:39:25 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> Message-ID: <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > hi, > I have only two physical interfaces. In my deploying, network node and compute node are the same. > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the other > interface? yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup what interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port which is different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via patch ports i it will use the out_port action to skip sending the packet via the kernel networking stack. so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont know if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all packets that use vxlan will be sent via the kernel which will significantly reduce performance. im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. From zhengyupann at 163.com Tue Feb 18 12:03:52 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Tue, 18 Feb 2020 20:03:52 +0800 (CST) Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> Message-ID: <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> Hi, Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds tunnel ip in br-ex? -- Thanks. Zhengyu At 2020-02-18 18:39:25, "Sean Mooney" wrote: >On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: >> hi, >> I have only two physical interfaces. In my deploying, network node and compute node are the same. >> Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the other >> interface? >yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup what >interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need >to assign the tunnel endpoint ip to br-ex. 
ovs has a special operation at the dataplane level call out_port which is >different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, >in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via patch >ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > >so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. >that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont know >if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > >not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all packets >that use vxlan will be sent via the kernel which will significantly reduce performance. > >im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Tue Feb 18 12:29:35 2020 From: thierry at openstack.org (Thierry Carrez) Date: Tue, 18 Feb 2020 13:29:35 +0100 Subject: [Release-job-failures] Release of openstack/puppet-keystone for ref refs/tags/16.1.0 failed In-Reply-To: References: Message-ID: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> We had a series of failures uploading Puppet artifacts to the forge: Forge API failed to upload tarball with code: 500 errors: An error was encountered while processing your request, please try again later. This affected: openstack/puppet-ceilometer (16.1.0) https://zuul.opendev.org/t/openstack/build/5b0b890d777545c098ea88b70eb89b52 openstack/puppet-panko (16.1.0) https://zuul.opendev.org/t/openstack/build/073cbb3b72ac4241aa1802ccbaf4ada5 openstack/puppet-murano (16.1.0) https://zuul.opendev.org/t/openstack/build/b07a78bd59c84176bb7d9d2f1e0f635c openstack/puppet-keystone (16.1.0) https://zuul.opendev.org/t/openstack/build/be7510a717d3413ab008ab3d34db3672 As a result the tarballs were not uploaded to tarballs.openstack.org either. I think we should reenqueue the tag reference to retrigger the release-openstack-puppet job for those ? -- Thierry Carrez (ttx) From smooney at redhat.com Tue Feb 18 12:30:17 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 12:30:17 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> Message-ID: On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: > Hi, > Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds tunnel > ip in br-ex? no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. if you do not have a patch port between br-ex and br-int then yes you shoudl create one. you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. they should all connect to br-int but not to each other. 
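for completeness, a minimal sketch of that setup (the bridge, nic and physnet names and the ip below are only examples, adjust them to your deployment):

  # put the shared physical nic on the external bridge
  ovs-vsctl --may-exist add-br br-ex
  ovs-vsctl --may-exist add-port br-ex eth1
  # give the bridge the tunnel endpoint ip so vxlan traffic is routed out of it
  ip addr add 192.168.100.42/24 dev br-ex
  ip link set br-ex up
  # in openvswitch_agent.ini let the agent wire up the br-int<->br-ex patch
  # ports via the physnet mapping, and point local_ip at that same address
  #   [ovs]
  #   bridge_mappings = physnet1:br-ex
  #   local_ip = 192.168.100.42

with that in place the routing table should pick br-ex for the tunnel endpoint and the optimisation mentioned earlier should kick in.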
regarding the ip i alwasy just configruied it on the br-ex local bridge port so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. you can obviously do that with network manager or systemd network script too. just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and your tunnel traffic will use that interface as long as the routing table identifs it as the correct interface. if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead of other options. normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have other interfaces in the same range i just mention that above incase you have a non standard deployment. > > > -- > > Thanks. > Zhengyu > > > > At 2020-02-18 18:39:25, "Sean Mooney" wrote: > > On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > > > hi, > > > I have only two physical interfaces. In my deploying, network node and compute node are the same. > > > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the > > > other > > > interface? > > > > yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup > > what > > interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need > > to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port which is > > different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, > > in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via > > patch > > ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > > > > so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. > > that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont > > know > > if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > > > > not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all > > packets > > that use vxlan will be sent via the kernel which will significantly reduce performance. > > > > im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. From skaplons at redhat.com Tue Feb 18 12:55:29 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 18 Feb 2020 13:55:29 +0100 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> Message-ID: <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> Hi, > On 18 Feb 2020, at 13:30, Sean Mooney wrote: > > On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: >> Hi, >> Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds tunnel >> ip in br-ex? 
> no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int > via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. > if you do not have a patch port between br-ex and br-int then yes you shoudl create one. Patch ports between br-int and all external bridges defined in bridge_mappings are created automatically by neutron-ovs-agent: https://github.com/openstack/neutron/blob/8ba44d672059e2dbea6a0516e5832cec40800a77/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1420 > > you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. > they should all connect to br-int but not to each other. > > regarding the ip i alwasy just configruied it on the br-ex local bridge port > so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. > you can obviously do that with network manager or systemd network script too. > > just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and > your tunnel traffic will use that interface as long as the routing table identifs it as the correct > interface. > > if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed > you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead of > other options. > > normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have other > interfaces in the same range i just mention that above incase you have a non standard deployment. >> >> >> -- >> >> Thanks. >> Zhengyu >> >> >> >> At 2020-02-18 18:39:25, "Sean Mooney" wrote: >>> On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: >>>> hi, >>>> I have only two physical interfaces. In my deploying, network node and compute node are the same. >>>> Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the >>>> other >>>> interface? >>> >>> yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup >>> what >>> interface to transmit the packet on. so to use the same interface for both tunnels and provider networks you need >>> to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port which is >>> different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a bridge, >>> in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via >>> patch >>> ports i it will use the out_port action to skip sending the packet via the kernel networking stack. >>> >>> so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. >>> that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i dont >>> know >>> if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. >>> >>> not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all >>> packets >>> that use vxlan will be sent via the kernel which will significantly reduce performance. >>> >>> im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. 
> > — Slawek Kaplonski Senior software engineer Red Hat From smooney at redhat.com Tue Feb 18 13:01:51 2020 From: smooney at redhat.com (Sean Mooney) Date: Tue, 18 Feb 2020 13:01:51 +0000 Subject: [neutron] Can br-ex and br-tun use the same interface? In-Reply-To: <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> References: <735280f9.7832.17057fffe1e.Coremail.zhengyupann@163.com> <44c47fc4cdbff2a7b85477ab522bf3fe51a7befa.camel@redhat.com> <41904b41.7d79.170582ee341.Coremail.zhengyupann@163.com> <322C66B3-F3A1-491A-9A76-1649DE996C18@redhat.com> Message-ID: <81d065c5b3cc83b4a6b12da720ce5424e329dd34.camel@redhat.com> On Tue, 2020-02-18 at 13:55 +0100, Slawek Kaplonski wrote: > Hi, > > > On 18 Feb 2020, at 13:30, Sean Mooney wrote: > > > > On Tue, 2020-02-18 at 20:03 +0800, Zhengyu Pan wrote: > > > Hi, > > > Thank you. Do i only need to add a patch port that connects br-ex with br-tun? And create a port that binds > > > tunnel > > > ip in br-ex? > > > > no the br-ex should be connect to the br-int by a patch port already and the br-tun will be connected to the br-int > > via a patch port already so br-tun and br-ex are connected indirectly so the optimisation will work. > > if you do not have a patch port between br-ex and br-int then yes you shoudl create one. > > Patch ports between br-int and all external bridges defined in bridge_mappings are created automatically by neutron- > ovs-agent: > https://github.com/openstack/neutron/blob/8ba44d672059e2dbea6a0516e5832cec40800a77/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L1420 yep they shoudl be. however if you have not configured the bridge mapping becasue you are not usign provider networks you might not have one. that said i have always configure them via the bridge mappings. > > > > > you want to avoid a loop between the bridge so you dont want all bridge to be connected directly. > > they should all connect to br-int but not to each other. > > > > regarding the ip i alwasy just configruied it on the br-ex local bridge port > > so "ifconfig br-ex 192.168.100.42/24 up" or whatever you ip is. > > you can obviously do that with network manager or systemd network script too. > > > > just ensure whatever ip is set as the neutron local tunnel ip is assigned to the br-ex and > > your tunnel traffic will use that interface as long as the routing table identifs it as the correct > > interface. > > > > if you have two interface in the same subnet or your vxlan tunnel ips are on multiple subnets and are routed > > you need to make sure the metric/route pirortiy for the br-ex will be set correctly so that it is selected instead > > of > > other options. > > > > normally you wont have to do anything as your tunnel endpoint ips will come form a singel subnet and you wont have > > other > > interfaces in the same range i just mention that above incase you have a non standard deployment. > > > > > > > > > -- > > > > > > Thanks. > > > Zhengyu > > > > > > > > > > > > At 2020-02-18 18:39:25, "Sean Mooney" wrote: > > > > On Tue, 2020-02-18 at 19:12 +0800, Zhengyu Pan wrote: > > > > > hi, > > > > > I have only two physical interfaces. In my deploying, network node and compute node are the same. > > > > > Can Bridge br-tun and br-ex use the same interface when using vxlan network type ? management network use the > > > > > other > > > > > interface? > > > > > > > > yes they can. the way this works wehn ovs encapsulates teh packet the vxlan tunnel endpoint ip is used to lookup > > > > what > > > > interface to transmit the packet on. 
so to use the same interface for both tunnels and provider networks you > > > > need > > > > to assign the tunnel endpoint ip to br-ex. ovs has a special operation at the dataplane level call out_port > > > > which is > > > > different form output. if ovs detects that the the source ip adress of the vxlan tunnel is assocaited with a > > > > bridge, > > > > in this case br-ex and if that bridge is connect to the bridge with the tunnel port directly or indirectly via > > > > patch > > > > ports i it will use the out_port action to skip sending the packet via the kernel networking stack. > > > > > > > > so if you use use an interface that is attached to an ovs bridge it will actully imporve performance in general. > > > > that said adding the tunnel endpoint ip to the br-tun and adding an interface to br-tun used to crash ovs. i > > > > dont > > > > know > > > > if that was ever fixed but i would recommend not trying and just adding the tunnel enpoint ip to br-ex. > > > > > > > > not that this is the recommended way to deploy ovs-dpdk as if you dont add the tunnel endpoint ip to br-ex all > > > > packets > > > > that use vxlan will be sent via the kernel which will significantly reduce performance. > > > > > > > > im not sure if this works with hardwar offloaded ovs but i would consider it a bug if it did not. > > > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > From fungi at yuggoth.org Tue Feb 18 15:16:26 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 18 Feb 2020 15:16:26 +0000 Subject: [Release-job-failures] Release of openstack/puppet-keystone for ref refs/tags/16.1.0 failed In-Reply-To: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> References: <11e5710b-bb25-21f6-aaf9-3c458641b899@openstack.org> Message-ID: <20200218151626.wmjnmtujzr46tril@yuggoth.org> On 2020-02-18 13:29:35 +0100 (+0100), Thierry Carrez wrote: > We had a series of failures uploading Puppet artifacts to the > forge: [...] > I think we should reenqueue the tag reference to retrigger the > release-openstack-puppet job for those ? This sounds reasonable to me. Unless there are objections to this plan, I'll try to get to them later today between meetings. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Tue Feb 18 15:46:16 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:46:16 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: On 2/18/20 4:34 AM, Moises Guimaraes de Medeiros wrote: > If removing 1.0.0 is the way we choose to go, people who already have > 1.0.0 won't be able to get "newer" 0.x.y versions. > > We will need an announcement to blacklist 1.0.0. Then, when the time > comes to finally make it stable, we can choose to either go 2.0.0 or 1.0.1. > > We should specifically put in the installation page instructions to > blacklist 1.0.0 in requirements files. If we pull it from pypi, do we really need to blacklist it? A regular pip install would only find the 0.x versions after that, right? In general, I'm not that concerned about someone having already installed it at this point. 
It was just released and the only people who are likely aware of the library are the ones working on it. My main concern is that we've released the library with a version number that implies a certain level of completeness that doesn't actually exist yet. Given the length of time it has taken to get it to this point, the possibility exists that this bad state could persist for six months or more. I'd prefer to nip it in the bud now rather than have somebody find it down the road and waste a bunch of time trying to make an incomplete thing work. > > On Tue, Feb 18, 2020 at 11:24 AM Thierry Carrez > wrote: > > Ben Nemec wrote: > > > > > > On 2/17/20 2:42 PM, Jeremy Stanley wrote: > >> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: > >> [...] > >>> I’m not 100% sure, but I think if you remove a release from PyPI > >>> you can’t release again using that version number. So a future > >>> stable release would have to be 1.1.0, or something like that. > >> [...] > >> > >> More accurately, you can't republish the same filename to PyPI even > >> if it's been previously deleted. You could however publish a > >> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz > >> though that seems a bit of a messy workaround. > >> > > > > This seems sensible - it would be kind of like rewriting history > in a > > git repo to re-release 1.0 with different content. I'm also > completely > > fine with having to use a different release number for our > eventual 1.0 > > release. It may make our release version checks unhappy, but > since this > > is (hopefully) not a thing we'll be doing regularly I imagine we can > > find a way around that. > > > > If we can pull the 1.0.0 release that would be ideal since as Sean > > mentioned people aren't good about reading docs and a 1.0 implies > some > > things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a > 2.0.0 tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) > but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > > -- > Thierry Carrez (ttx) > > > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > From mihalis68 at gmail.com Tue Feb 18 15:47:05 2020 From: mihalis68 at gmail.com (Chris Morgan) Date: Tue, 18 Feb 2020 10:47:05 -0500 Subject: ops meetups team meeting 2020-2-18 Message-ID: The OpenStack Ops Meetups team meeting was held today on IRC, minutes linked below. Key links vancouver opendev+ptg : https://www.openstack.org/events/opendev-ptg-2020/ South Korea Ops Meetup proposal : https://etherpad.openstack.org/p/ops-meetup-2nd-korea-2020 OpenInfra summit 2020 announced for Berlin, Oct 19-23 : https://superuser.openstack.org/articles/inside-open-infrastructure-the-latest-from-the-openstack-foundation-4/ The meetups team will be proposing content for Vancouver and will shortly solicit feedback on preferred dates for the Ops Meetups even in South Korea. Please follow https://twitter.com/osopsmeetup for more announcements. Minutes: <•openstack> Meeting ended Tue Feb 18 15:37:42 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . 
(v 0.1.4) 10:37 AM Minutes: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.html 10:37 AM Minutes (text): http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.txt 10:37 AM Log: http://eavesdrop.openstack.org/meetings/ops_meetup_team/2020/ops_meetup_team.2020-02-18-15.01.log.html Chris Morgan - on behalf of the OpenStack Ops Meetups team ( https://wiki.openstack.org/wiki/Ops_Meetups_Team) -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at nemebean.com Tue Feb 18 15:47:36 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:47:36 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> Message-ID: On 2/18/20 4:23 AM, Thierry Carrez wrote: > Ben Nemec wrote: >> >> >> On 2/17/20 2:42 PM, Jeremy Stanley wrote: >>> On 2020-02-17 15:02:14 -0500 (-0500), Doug Hellmann wrote: >>> [...] >>>> I’m not 100% sure, but I think if you remove a release from PyPI >>>> you can’t release again using that version number. So a future >>>> stable release would have to be 1.1.0, or something like that. >>> [...] >>> >>> More accurately, you can't republish the same filename to PyPI even >>> if it's been previously deleted. You could however publish a >>> oslo.limit-1.0.0.post1.tar.gz after deleting oslo.limit-1.0.0.tar.gz >>> though that seems a bit of a messy workaround. >>> >> >> This seems sensible - it would be kind of like rewriting history in a >> git repo to re-release 1.0 with different content. I'm also completely >> fine with having to use a different release number for our eventual >> 1.0 release. It may make our release version checks unhappy, but since >> this is (hopefully) not a thing we'll be doing regularly I imagine we >> can find a way around that. >> >> If we can pull the 1.0.0 release that would be ideal since as Sean >> mentioned people aren't good about reading docs and a 1.0 implies some >> things that aren't true here. > > As others suggested, the simplest is probably to remove 1.0.0 from PyPI > and releases.o.o, and then wait until the API is stable to push a 2.0.0 > tag. > > That way we don't break anything (the tag stays, we still increment > releases, we do not rewrite history, we do not use weird post1 bits) but > just limit the diffusion of the confusing 1.0.0 artifact. > > I'm not sure a feature branch is really needed ? > If we could continue to tag master with 0.x releases then no. I think my feature branch option was in case we couldn't have a 0.1.0 tag that was later than 1.0.0 on the same branch. 
From openstack at nemebean.com Tue Feb 18 15:50:52 2020 From: openstack at nemebean.com (Ben Nemec) Date: Tue, 18 Feb 2020 09:50:52 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On 2/18/20 4:40 AM, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: >> If removing 1.0.0 is the way we choose to go, people who already have >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > though ? Do we expect anything to use it ? Yes, the Nova team has a PoC change up that uses these early releases. That's how we're iterating on the API. I can't guarantee that we'll need more releases, but I'm also not aware of anyone having said, "yep, it's ready" so I do expect more releases. > > If we really do, then the least confusing might be to keep 1.0.0 and > bump major every time you release a breaking change. > From Albert.Braden at synopsys.com Tue Feb 18 16:46:38 2020 From: Albert.Braden at synopsys.com (Albert Braden) Date: Tue, 18 Feb 2020 16:46:38 +0000 Subject: Virtio memory balloon driver In-Reply-To: References: <20200204120059.w6efstb7zl6nq3sm@yuggoth.org> <45c88b3d88408c4acce83fd59b28e74f7a58dfa8.camel@redhat.com> Message-ID: Hi Sean, Do you have time to look at the mem_stats_period_seconds / virtio memory balloon issue this week? -----Original Message----- From: Albert Braden Sent: Friday, February 7, 2020 2:26 PM To: Sean Mooney ; openstack-discuss at lists.openstack.org Subject: RE: Virtio memory balloon driver I opened a bug: https://bugs.launchpad.net/nova/+bug/1862425 -----Original Message----- From: Sean Mooney Sent: Wednesday, February 5, 2020 10:25 AM To: Albert Braden ; openstack-discuss at lists.openstack.org Subject: Re: Virtio memory balloon driver On Wed, 2020-02-05 at 17:33 +0000, Albert Braden wrote: > When I start and stop the giant VM I don't see any evidence of OOM errors. I suspect that the #centos guys may be > correct when they say that the "Virtio memory balloon" device is not capable of addressing that much memory, and that > I must disable it if I want to create VMs with 1.4T RAM. Setting "mem_stats_period_seconds = 0" doesn't seem to > disable it. > > How are others working around this? Is anyone else creating Centos 6 VMs with 1.4T or more RAM? i suspect not. spawning 1 giant vm that uses all the resouse on the host is not a typical usecse. in general people move to ironic when the need a vm that large. i unfortunetly dont have time to look into this right now but we can likely add a way to disabel the ballon device and if you remind me in a day or two i can try and see why mem_stats_period_seconds = 0 is not working for you. looking at https://urldefense.proofpoint.com/v2/url?u=https-3A__opendev.org_openstack_nova_src_branch_master_nova_virt_libvirt_driver.py-23L5842-2DL5852&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=_kEGfZqTkPscjy0GJB2N_WBXRJPEt2400ADV12hhxR8&e= it should work but libvirt addes extra element to the xml after we generate it and fills in some fields. its possibel that libvirt is adding it and when we dont want the device we need to explcitly disable it in some way. 
if that is the case we could track this as a bug and potentially backport it. > > Console log: https://urldefense.proofpoint.com/v2/url?u=https-3A__f.perl.bot_p_njvgbm&d=DwICaQ&c=DPL6_X_6JkXFx7AXWqB0tg&r=XrJBXYlVPpvOXkMqGPz6KucRW_ils95ZMrEmlTflPm8&m=WF6NUF1-K7cJv2js_9SXU42-chTUhO8odllpI7Mk26s&s=5J3hH_mxdtOyNFqbW6j9yGiSyMXmhy3bXrmXRHkJ9I0&e= > > The error is at line 404: [ 18.736435] BUG: unable to handle kernel paging request at ffff9ca8d9980000 > > Dmesg: > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:42 2020] device tap039191ba-25 left promiscuous mode > [Tue Feb 4 17:50:42 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled state > [Tue Feb 4 17:50:47 2020] device tap039191ba-25 entered promiscuous mode > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking state > [Tue Feb 4 17:50:47 2020] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding state > > Syslog: > > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751339] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751342] brq49cbe55d-51: port 1(tap039191ba-25) entered disabled > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751450] device tap039191ba-25 entered promiscuous mode > Feb 4 17:50:51 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained carrier > Feb 4 17:50:51 us01odc-p01-hv214 libvirtd[37317]: 2020-02-05 01:50:51.386+0000: 37321: warning : > qemuDomainObjTaint:5602 : Domain id=15 name='instance-00002164' uuid=33611060-887a-44c1-a3b8-1c36cb8f9984 is tainted: > host-cpu > Feb 4 17:50:51 us01odc-p01-hv214 systemd-udevd[238052]: link_config: autonegotiation is unset or enabled, the speed > and duplex are not writable. > Feb 4 17:50:51 us01odc-p01-hv214 networkd-dispatcher[1214]: WARNING:Unknown index 32 seen, reloading interface list > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751683] brq49cbe55d-51: port 1(tap039191ba-25) entered blocking > state > Feb 4 17:50:51 us01odc-p01-hv214 kernel: [2859840.751685] brq49cbe55d-51: port 1(tap039191ba-25) entered forwarding > state > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:51 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > Feb 4 17:50:52 us01odc-p01-hv214 systemd-networkd[781]: tap039191ba-25: Gained IPv6LL > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: reading /etc/resolv.conf > Feb 4 17:50:52 us01odc-p01-hv214 dnsmasq[28739]: using nameserver 127.0.0.53#53 > > > -----Original Message----- > From: Jeremy Stanley > Sent: Tuesday, February 4, 2020 4:01 AM > To: openstack-discuss at lists.openstack.org > Subject: Re: Virtio memory balloon driver > > On 2020-02-03 23:57:28 +0000 (+0000), Albert Braden wrote: > > We are reserving 2 CPU and 16G RAM for the hypervisor. I haven't > > seen any OOM errors. Where should I look for those? > > [...] > > The `dmesg` utility on the hypervisor host should show you the > kernel's log ring buffer contents (the -T flag is useful to > translate its timestamps into something more readable than seconds > since boot too). 
If the ring buffer has overwritten the relevant > timeframe then look for signs of kernel OOM killer invocation in > your syslog or persistent journald storage. From rosmaita.fossdev at gmail.com Tue Feb 18 22:32:38 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Tue, 18 Feb 2020 17:32:38 -0500 Subject: [operators][cinder][nova][glance] possible data loss situation (bug 1852106) Message-ID: tl;dr: If you are running the OpenStack Train release, add the following to the [DEFAULT]non_inheritable_image_properties configuration option [0] in your nova.conf: cinder_encryption_key_id,cinder_encryption_key_deletion_policy About Your nova.conf ==================== - if you already have a value set for non_inheritable_image_properties, add the above to what you currently have - if non_inheritable_image_properties is *not* set in your current nova.conf, you must set its value to include BOTH the above properties AND the default values (which you can see when you generate a sample nova configuration file [1]) NOTE: At a minimum, the non_inheritable_image_properties list should contain: - the properties used for image-signature-validation (these are not transferrable from image to image): * img_signature_hash_method * img_signature * img_signature_key_type * img_signature_certificate_uuid - the properties used to manage the keys for images of cinder encrypted volumes: * cinder_encryption_key_id * cinder_encryption_key_deletion_policy - review the documentation to determine whether you want to include the other default properties: * cache_in_nova * bittorrent Details ======= This issue is being tracked as Launchpad bug 1852106 [2]. This is probably a low-occurrence situation, because in order for the issue to occur, all of the following must happen: (0) using the OpenStack Train release (or code from master (Ussuri development)) (1) cinder_encryption_key_id and cinder_encryption_key_deletion_policy are NOT included in the non_inheritable_image_properties setting in nova.conf (which they aren't, by default) (2) a user has created a volume of an encrypted volume-type in the Block Storage service (cinder). Call this Volume-1 (3) using the Block Storage service, the user has uploaded the encrypted volume as an image to the Image service (glance). Call this Image-1 (4) using the Compute service (nova), the user has attempted to directly boot a server from the image. (Note: this is an unsupported action, the supported workflow is to use the image to boot-from-volume.) (5) although an unsupported action, if a user does (4), it currently results in a server in status ACTIVE but which is unusable because the operating system can't be found (6) using the Compute service, the user requests the createImage action on the unusable (yet ACTIVE) server, resulting in Image-2 (7) using the Image service, the user deletes Image-2 (which has inherited the cinder_encryption_key_* properties from Image-1) upon which the encryption key is deleted, thereby rendering Image-1 non-decryptable so that it can no longer be used in the normal boot-from-volume workflow NOTE 1: the cinder_encryption_key_deletion_policy image property was introduced in Train. In pre-Train releases, deleting the useless Image-2 in step (7) does NOT result in encryption key deletion. NOTE 2: Volume-1 created in step (2) has a *different* encryption key ID than the one associated with Image-1. Thus, even in the scenario where Image-1 becomes non-decryptable, Volume-1 is not affected. 
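For reference, the tl;dr above expressed as a concrete nova.conf stanza (an
example only, assuming you are starting from the option's default value;
merge it with whatever list you already carry):

[DEFAULT]
non_inheritable_image_properties = cache_in_nova,bittorrent,img_signature_hash_method,img_signature,img_signature_key_type,img_signature_certificate_uuid,cinder_encryption_key_id,cinder_encryption_key_deletion_policy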
Workaround ========== When cinder_encryption_key_id,cinder_encryption_key_deletion_policy are added to the non_inheritable_image_properties setting in nova.conf, the useless Image-2 created in step (6) above will not have the image properties on it that enable Glance to delete the encryption key still in use by Image-1. This does not, however, protect images for which steps (4)-(6) have been performed before the deployment of this workaround. The safest way to deal with images created before the workaround is deployed is to remove the cinder_encryption_key_deletion_policy image property from any image that has it (or to change its value to 'do_not_delete'). While it is possible to use other image properties to identify images created by Nova as opposed to images created by Cinder, this is not guaranteed to be reliable because image properties may have been modified or removed by the image owner. Proposed Longer-term Fixes ========================== - In the Ussuri release, the unsupported action in step (4) above will result in a 400 rather than an active yet unusable server. Hence it will no longer be possible to create the image of the unusable server that causes the issue. [3] - Additionally, given that the image properties associated with cinder encrypted volumes and image signature validation are specific to a single image and should not be inherited by server snapshots under any circumstances, in Ussuri these "absolutely non-inheritable image properties" will no longer be required to appear in the non_inheritable_image_properties configuration setting in order to prevent them from being propagated to server snapshots. [4] References ========== [0] https://docs.openstack.org/nova/train/configuration/config.html#DEFAULT.non_inheritable_image_properties [1] https://docs.openstack.org/nova/train/configuration/sample-config.html [2] https://bugs.launchpad.net/nova/+bug/1852106 [3] https://review.opendev.org/#/c/707738/ [4] https://review.opendev.org/#/c/708126/ From john at johngarbutt.com Tue Feb 18 22:47:53 2020 From: john at johngarbutt.com (John Garbutt) Date: Tue, 18 Feb 2020 22:47:53 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: On Tue, 18 Feb 2020 at 15:55, Ben Nemec wrote: > On 2/18/20 4:40 AM, Thierry Carrez wrote: > > Moises Guimaraes de Medeiros wrote: > >> If removing 1.0.0 is the way we choose to go, people who already have > >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > > though ? Do we expect anything to use it ? > > Yes, the Nova team has a PoC change up that uses these early releases. > That's how we're iterating on the API. I can't guarantee that we'll need > more releases, but I'm also not aware of anyone having said, "yep, it's > ready" so I do expect more releases. I certainly don't feel like its ready yet. I am pushing on Nova support for unified limits here (but its very WIP right now): https://review.opendev.org/#/c/615180 I was hoping we would better understand two level limits before cutting v1.0.0: https://review.opendev.org/#/c/695527 > > If we really do, then the least confusing might be to keep 1.0.0 and > > bump major every time you release a breaking change. I am +1 this approach. 
What we have might work well enough. Thanks, johnthetubaguy From kpuusild at gmail.com Wed Feb 19 06:29:43 2020 From: kpuusild at gmail.com (Kevin Puusild) Date: Wed, 19 Feb 2020 08:29:43 +0200 Subject: OpenStack with vCenter Message-ID: Hello I'm currently trying to setup DevStack with vCenter (Not ESXi), i found a perfect documentation for this task: https://wiki.openstack.org/wiki/NovaVMware/DeveloperGuide But the problem here is that when i start *stack.sh *with localrc file shown in documentation the installing process fails. Is this documentation out-dated? Is there some up to date documentation? -- Best Regards. Kevin Puusild -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdulko at redhat.com Wed Feb 19 08:09:17 2020 From: mdulko at redhat.com (mdulko at redhat.com) Date: Wed, 19 Feb 2020 09:09:17 +0100 Subject: [openstack-dev][kuryr] Not working in ARM64 (as Node) In-Reply-To: <1415279454.3463025.1581938379342@mail.yahoo.com> References: <1416517604.208171.1580972979203.ref@mail.yahoo.com> <1416517604.208171.1580972979203@mail.yahoo.com> <6a14ec2f02d5aef265e5e651eae67adc20b01167.camel@redhat.com> <541855276.335476.1580985962108@mail.yahoo.com> <97420918.593071.1581052477180@mail.yahoo.com> <2acc6c78c58d92cce50ace5f8b3a8f3fd77d48c3.camel@redhat.com> <2135735478.1625599.1581500298473@mail.yahoo.com> <1317590151.1993706.1581576773856@mail.yahoo.com> <57f5629cbc06889ebe89ca7297cfe74cdc8d271c.camel@redhat.com> <1415279454.3463025.1581938379342@mail.yahoo.com> Message-ID: <16d8d2567f79374114e69a78d563f11addccc567.camel@redhat.com> On Mon, 2020-02-17 at 11:19 +0000, VeeraReddy wrote: > Hi mdulko, > Thanks you very much, > I am able to launch Container and VM side by side in same network on arm64 platform. Issue was openvswitch kernel Module(Enabled "geneve module" in openvswitch). > > > Now i am able to ping from container to VM but unable to ping Vice Versa (VM to container). > And also, I am able ping VM to VM (Both spawned from OpenStack dashboard). This should depend entirely on your network topology. If Pod->VM traffic works, then subnets seems to be routed correctly, so I'd expect it's the security groups that are blocking traffic. Please check that. Please note that we test VM->Pod and Pod->VM on the gate [1]. [1] https://github.com/openstack/kuryr-tempest-plugin/blob/master/kuryr_tempest_plugin/tests/scenario/test_cross_ping.py#L34-L73 > Is there any configuration to enable traffic from VM to Container? > > Regards, > Veera. > > > On Thursday, 13 February, 2020, 02:42:21 pm IST, wrote: > > > We're using 2.9.5 on the x86_64 gate and that works fine. I'm not sure > if downgrading could help. This is a Neutron issue and I don't have > much experience on such a low level. You can try asking on IRC, e.g. on > #openstack-neutron. > > On Thu, 2020-02-13 at 06:52 +0000, VeeraReddy wrote: > > Thanks mdulko, > > Issue is in openvswitch, iam getting following error in switchd logs > > ./ovs-vswitchd.log:2020-02-12T12:21:18.177Z|00493|connmgr|INFO|br-int<->unix#50: sending NXBAC_CT_DATAPATH_SUPPORT error reply to OFPT_BUNDLE_ADD_MESSAGE message > > > > Do we need to patch openvswitch to support above flow? > > > > My ovs version > > > > [root at node-2088 ~]# ovs-vsctl --version > > ovs-vsctl (Open vSwitch) 2.11.0 > > DB Schema 7.16.1 > > > > > > > > Regards, > > Veera. > > > > > > On Wednesday, 12 February, 2020, 05:37:31 pm IST, wrote: > > > > > > The controller logs are from an hour later than the CNI one. The issues > > seems not to be present. 
> > > > Isn't your controller still restarting? If so try to use -p option on > > `kubectl logs` to get logs from previous run. > > > > On Wed, 2020-02-12 at 09:38 +0000, VeeraReddy wrote: > > > Hi Mdulko, > > > Below are log files: > > > > > > Controller Log:http://paste.openstack.org/show/789457/ > > > cni : http://paste.openstack.org/show/789456/ > > > kubelet : http://paste.openstack.org/show/789453/ > > > > > > Unable to create kubelet interface in node, so not able to reach cluster (i.e 10.0.0.129) > > > > > > Please let me know the issue > > > > > > > > > > > > Regards, > > > Veera. > > > > > > > > > On Tuesday, 11 February, 2020, 03:29:10 pm IST, wrote: > > > > > > > > > Hi, > > > > > > So from this run you need the kuryr-controller logs. Apparently the pod > > > never got annotated with an information about the VIF. > > > > > > Thanks, > > > Michał > > > > > > On Fri, 2020-02-07 at 05:14 +0000, VeeraReddy wrote: > > > > Hi mdulko, > > > > Thanks for your support. > > > > > > > > As you mention i removed readinessProbe and > > > > livenessProbe from Kuryr pod definitions. Still i am facing issue , unable to create pod. > > > > > > > > > > > > > > > > > > > > Attached kubelet and kuryr-cni logs. > > > > > > > > > > > > > > > > Regards, > > > > Veera. > > > > > > > > > > > > On Thursday, 6 February, 2020, 05:19:12 pm IST, wrote: > > > > > > > > > > > > Hm, nothing too troubling there too, besides Kubernetes not answering > > > > on /healthz endpoint. Are those full logs, including the moment you > > > > tried spawning a container there? It seems like you only pasted the > > > > fragments with tracebacks regarding failures to read /healthz endpoint > > > > of kube-apiserver. That is another problem you should investigate - > > > > that causes Kuryr pods to restart. > > > > > > > > At first I'd disable the healthchecks (remove readinessProbe and > > > > livenessProbe from Kuryr pod definitions) and try to get fresh set of > > > > logs. > > > > > > > > On Thu, 2020-02-06 at 10:46 +0000, VeeraReddy wrote: > > > > > Hi mdulko, > > > > > Please find kuryr-cni logs > > > > > http://paste.openstack.org/show/789209/ > > > > > > > > > > > > > > > Regards, > > > > > Veera. > > > > > > > > > > > > > > > On Thursday, 6 February, 2020, 04:08:35 pm IST, wrote: > > > > > > > > > > > > > > > Hi, > > > > > > > > > > The logs you provided doesn't seem to indicate any issues. Please > > > > > provide logs of kuryr-daemon (kuryr-cni pod). > > > > > > > > > > Thanks, > > > > > Michał > > > > > > > > > > On Thu, 2020-02-06 at 07:09 +0000, VeeraReddy wrote: > > > > > > Hi, > > > > > > I am trying to run kubelet in arm64 platform > > > > > > 1. Generated kuryr-cni successfullly. using kur-cni Dockerfile > > > > > > 2. Generated kuryr-cni-arm64 container. > > > > > > 3. my kube-kuryr-arm64.yml (http://paste.openstack.org/show/789208/) > > > > > > > > > > > > My master node in x86 installed successfully using devstack > > > > > > > > > > > > While running kubelet in arm platform , not able to create kubelet interface (kubelet logs: http://paste.openstack.org/show/789206/) > > > > > > > > > > > > COntroller logs: http://paste.openstack.org/show/789209/ > > > > > > > > > > > > Please help me to fix the issue > > > > > > > > > > > > Veera. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > Regards, > > > > > > Veera. 
> > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From moguimar at redhat.com Wed Feb 19 08:41:03 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 09:41:03 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: I vote for removing 1.0.0 first (ASAP) and then deciding which will be the next version. The longer the time 1.0.0 is available, the harder it will be to push for a 0.x solution. On Tue, Feb 18, 2020 at 11:48 PM John Garbutt wrote: > On Tue, 18 Feb 2020 at 15:55, Ben Nemec wrote: > > On 2/18/20 4:40 AM, Thierry Carrez wrote: > > > Moises Guimaraes de Medeiros wrote: > > >> If removing 1.0.0 is the way we choose to go, people who already have > > >> 1.0.0 won't be able to get "newer" 0.x.y versions. > > > > > > Indeed. Do we need to release 0.x.y versions before the API stabilizes, > > > though ? Do we expect anything to use it ? > > > > Yes, the Nova team has a PoC change up that uses these early releases. > > That's how we're iterating on the API. I can't guarantee that we'll need > > more releases, but I'm also not aware of anyone having said, "yep, it's > > ready" so I do expect more releases. > > I certainly don't feel like its ready yet. > > I am pushing on Nova support for unified limits here (but its very WIP > right now): > https://review.opendev.org/#/c/615180 > > I was hoping we would better understand two level limits before cutting > v1.0.0: > https://review.opendev.org/#/c/695527 > > > > If we really do, then the least confusing might be to keep 1.0.0 and > > > bump major every time you release a breaking change. > > I am +1 this approach. > What we have might work well enough. > > Thanks, > johnthetubaguy > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Feb 19 10:45:19 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 11:45:19 +0100 Subject: [vmware][hyperv][kolla] Is there any interest? (or should we deprecate and remove?) In-Reply-To: References: Message-ID: We received no reply to this so I am proposing relevant deprecations. -yoctozepto wt., 21 sty 2020 o 19:42 Radosław Piliszek napisał(a): > > Hello Fellow Stackers, > > In Kolla and Kolla-Ansible we have some support for VMware (both > hypervisor and networking controller stuff) and Hyper-V. > The issue is the relevant code is in pretty bad shape and we had no > recent reports about these being used nor working at all for that > matter and we are looking into dropping support for these. > Please respond if you are interested in these. > Long term we would require access to some CI running these to really > keep things in shape. 
> > -yoctozepto From thierry at openstack.org Wed Feb 19 11:20:43 2020 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 19 Feb 2020 12:20:43 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: Moises Guimaraes de Medeiros wrote: > I vote for removing 1.0.0 first (ASAP) and then deciding which will be > the next version. > The longer the time 1.0.0 is available, the harder it will be to push > for a 0.x solution. The long-standing position of the release team[1] is that you can't "remove" a release. It's out there. We can hide it so that it's harder to accidentally consume it, but we should otherwise assume that some people got it. So I'm not a big fan of the plan to release 0.x versions and pretending 1.0.0 never happened, potentially breaking upgrades. From a user perspective I see it as equally disruptive to cranking out major releases at each future API break. Rather than rewrite history for an equally-suboptimal result, personally I would just own our mistake and accept that oslo.limit version numbers convey a level of readiness that might not be already there. That seems easier to communicate out than explaining that the 1.0.0 that you may have picked up at one point in the git repo, the tarballs site, Pypi (or any distro that accidentally picked it up since) is not really a thing and you need to take manual cleanup steps to restore local sanity. [1] heck, we even did a presentation about that rule at EuroPython -- Thierry Carrez (ttx) From sfinucan at redhat.com Wed Feb 19 13:51:47 2020 From: sfinucan at redhat.com (Stephen Finucane) Date: Wed, 19 Feb 2020 13:51:47 +0000 Subject: [oslo] Core reviewer changes Message-ID: Hi all, Just an update on some recent changes that have been made to oslo-core after discussion among current, active cores. Added: scmcginnis Removed: ChangBo Guo(gcb) Davanum Srinivas (dims) Flavio Percoco Joshua Harlow Mehdi Abaakouk (sileht) Michael Still Victor Stinner lifeless Please let me know if anyone has any concerns about these changes. Cheers, Stephen From moguimar at redhat.com Wed Feb 19 14:33:06 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 15:33:06 +0100 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: +1 here is the link to one of the EuroPython talks: https://youtu.be/5MaDhl01fpc?t=1470 On Wed, Feb 19, 2020 at 12:21 PM Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: > > I vote for removing 1.0.0 first (ASAP) and then deciding which will be > > the next version. > > The longer the time 1.0.0 is available, the harder it will be to push > > for a 0.x solution. > > The long-standing position of the release team[1] is that you can't > "remove" a release. It's out there. We can hide it so that it's harder > to accidentally consume it, but we should otherwise assume that some > people got it. > > So I'm not a big fan of the plan to release 0.x versions and pretending > 1.0.0 never happened, potentially breaking upgrades. 
From a user > perspective I see it as equally disruptive to cranking out major > releases at each future API break. > > Rather than rewrite history for an equally-suboptimal result, personally > I would just own our mistake and accept that oslo.limit version numbers > convey a level of readiness that might not be already there. > > That seems easier to communicate out than explaining that the 1.0.0 that > you may have picked up at one point in the git repo, the tarballs site, > Pypi (or any distro that accidentally picked it up since) is not really > a thing and you need to take manual cleanup steps to restore local sanity. > > [1] heck, we even did a presentation about that rule at EuroPython > > -- > Thierry Carrez (ttx) > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 19 14:34:58 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 08:34:58 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: References: <421919C3-83C5-4E0E-9132-324D736A5A17@doughellmann.com> <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> Message-ID: <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> On 2/19/20 5:20 AM, Thierry Carrez wrote: > Moises Guimaraes de Medeiros wrote: >> I vote for removing 1.0.0 first (ASAP) and then deciding which will >> be the next version. >> The longer the time 1.0.0 is available, the harder it will be to push >> for a 0.x solution. > > The long-standing position of the release team[1] is that you can't > "remove" a release. It's out there. We can hide it so that it's harder > to accidentally consume it, but we should otherwise assume that some > people got it. > > So I'm not a big fan of the plan to release 0.x versions and > pretending 1.0.0 never happened, potentially breaking upgrades. From a > user perspective I see it as equally disruptive to cranking out major > releases at each future API break. > > Rather than rewrite history for an equally-suboptimal result, > personally I would just own our mistake and accept that oslo.limit > version numbers convey a level of readiness that might not be already > there. > > That seems easier to communicate out than explaining that the 1.0.0 > that you may have picked up at one point in the git repo, the tarballs > site, Pypi (or any distro that accidentally picked it up since) is not > really a thing and you need to take manual cleanup steps to restore > local sanity. > > [1] heck, we even did a presentation about that rule at EuroPython > It seems there's no great answer here. This thread has been great to go over the options though. I think after reading through everything, our best bet is probably to just go with documenting the state of 1.0.0, then plan on bumping the major release version on any breaking changes like normal. We are still conveying something that we don't really want to be by the 1.0 designation, and chances are high that the docs will be missed, but at least we would have somewhere to point to if there are any questions about it. So I guess I'm saying, let's cut our losses, move ahead with this 1.0.0 release, and hopefully the library will get to a more complete state that this is no longer an issue. 
Sean From fungi at yuggoth.org Wed Feb 19 14:52:09 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 14:52:09 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> Message-ID: <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: [...] > We are still conveying something that we don't really want to be by the > 1.0 designation, and chances are high that the docs will be missed, but > at least we would have somewhere to point to if there are any questions > about it. [...] To what extent is it likely to see production use before Nova has ironed out its consumption of the library? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From sean.mcginnis at gmx.com Wed Feb 19 14:54:20 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 08:54:20 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> Message-ID: <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> On 2/19/20 8:52 AM, Jeremy Stanley wrote: > On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: > [...] >> We are still conveying something that we don't really want to be by the >> 1.0 designation, and chances are high that the docs will be missed, but >> at least we would have somewhere to point to if there are any questions >> about it. > [...] > > To what extent is it likely to see production use before Nova has > ironed out its consumption of the library? I would assume the likelihood to be very low. From openstack at nemebean.com Wed Feb 19 15:14:49 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 19 Feb 2020 09:14:49 -0600 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> References: <20200217204233.ldqcyzqwgxyrg3f2@yuggoth.org> <2f0f01cf-26dd-cd22-38d6-154553b7627c@openstack.org> <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> Message-ID: <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> On 2/19/20 8:54 AM, Sean McGinnis wrote: > On 2/19/20 8:52 AM, Jeremy Stanley wrote: >> On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: >> [...] >>> We are still conveying something that we don't really want to be by the >>> 1.0 designation, and chances are high that the docs will be missed, but >>> at least we would have somewhere to point to if there are any questions >>> about it. >> [...] >> >> To what extent is it likely to see production use before Nova has >> ironed out its consumption of the library? > I would assume the likelihood to be very low. 
> The Nova folks are driving this work so at this point I wouldn't declare the oslo.limit API stable without their signoff. From fungi at yuggoth.org Wed Feb 19 15:26:12 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 15:26:12 +0000 Subject: [oslo][release] oslo.limit mistakenly released as 1.0.0 In-Reply-To: <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> References: <6b243f3b-a7f6-44f5-b20c-dd91bed0928f@openstack.org> <6cd40c46-f7f8-3420-3198-cdc1a3ea6349@gmx.com> <20200219145209.6grmfdc5owh4nzbu@yuggoth.org> <1e9bcb5d-963f-3af5-67c9-76e7b3eb7030@gmx.com> <0f9a3d26-fe35-1927-336c-ed221afc7e5a@nemebean.com> Message-ID: <20200219152612.zkkc5lu3mjvqg367@yuggoth.org> On 2020-02-19 09:14:49 -0600 (-0600), Ben Nemec wrote: > On 2/19/20 8:54 AM, Sean McGinnis wrote: > > On 2/19/20 8:52 AM, Jeremy Stanley wrote: > > > On 2020-02-19 08:34:58 -0600 (-0600), Sean McGinnis wrote: > > > [...] > > > > We are still conveying something that we don't really want > > > > to be by the 1.0 designation, and chances are high that the > > > > docs will be missed, but at least we would have somewhere to > > > > point to if there are any questions about it. > > > [...] > > > > > > To what extent is it likely to see production use before Nova > > > has ironed out its consumption of the library? > > > > I would assume the likelihood to be very low. > > The Nova folks are driving this work so at this point I wouldn't > declare the oslo.limit API stable without their signoff. With that, I wouldn't expect any other projects to even try to use it before Nova's officially doing so. It sounds like the projects likely to be confused about the production-ready state of the library could be approximately zero regardless of which solution you choose. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From openstack at nemebean.com Wed Feb 19 15:32:31 2020 From: openstack at nemebean.com (Ben Nemec) Date: Wed, 19 Feb 2020 09:32:31 -0600 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Thanks, and welcome Sean! On 2/19/20 7:51 AM, Stephen Finucane wrote: > Hi all, > > Just an update on some recent changes that have been made to oslo-core > after discussion among current, active cores. > > Added: > > scmcginnis > > Removed: > > ChangBo Guo(gcb) > Davanum Srinivas (dims) > Flavio Percoco > Joshua Harlow > Mehdi Abaakouk (sileht) > Michael Still > Victor Stinner > lifeless > > Please let me know if anyone has any concerns about these changes. > > Cheers, > Stephen > > From moguimar at redhat.com Wed Feb 19 15:51:30 2020 From: moguimar at redhat.com (Moises Guimaraes de Medeiros) Date: Wed, 19 Feb 2020 16:51:30 +0100 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Welcome Sean o/ On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec wrote: > Thanks, and welcome Sean! > > On 2/19/20 7:51 AM, Stephen Finucane wrote: > > Hi all, > > > > Just an update on some recent changes that have been made to oslo-core > > after discussion among current, active cores. > > > > Added: > > > > scmcginnis > > > > Removed: > > > > ChangBo Guo(gcb) > > Davanum Srinivas (dims) > > Flavio Percoco > > Joshua Harlow > > Mehdi Abaakouk (sileht) > > Michael Still > > Victor Stinner > > lifeless > > > > Please let me know if anyone has any concerns about these changes. 
> > > > Cheers, > > Stephen > > > > > > -- Moisés Guimarães Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Wed Feb 19 16:07:12 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 19 Feb 2020 17:07:12 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: References: <20191119102615.oq46xojyhoybulna@skaplons-mac> Message-ID: <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> Hi, Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? Thanks in advance for any help with that. [1] https://review.opendev.org/708675 > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > Hi, > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: >> >> Hi, >> >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. >> So please reply to this email or contact me directly if You are interested in maintaining this project. >> >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: >>> >>> Hi, >>> >>> Over the past couple of cycles we have noticed that new contributions and >>> maintenance efforts for neutron-fwaas project were almost non existent. >>> This impacts patches for bug fixes, new features and reviews. The Neutron >>> core team is trying to at least keep the CI of this project healthy, but we >>> don’t have enough knowledge about the details of the neutron-fwaas >>> code base to review more complex patches. >>> >>> During the PTG in Shanghai we discussed that with operators and TC members >>> during the forum session [1] and later within the Neutron team during the >>> PTG session [2]. >>> >>> During these discussions, with the help of operators and TC members, we reached >>> the conclusion that we need to have someone responsible for maintaining project. >>> This doesn’t mean that the maintainer needs to spend full time working on this >>> project. Rather, we need someone to be the contact person for the project, who >>> takes care of the project’s CI and review patches. Of course that’s only a >>> minimal requirement. If the new maintainer works on new features for the >>> project, it’s even better :) >>> >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas >>> as deprecated and in “V” cycle we will propose to move the project >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the >>> unofficial projects hosted in the “x/“ namespace. >>> >>> So if You are using this project now, or if You have customers who are >>> using it, please consider the possibility of maintaining it. Otherwise, >>> please be aware that it is highly possible that the project will be >>> deprecated and moved out from the official OpenStack projects. 
>>> >>> [1] >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - >>> Lines 379-421 >>> [3] https://releases.openstack.org/ussuri/schedule.html >>> >>> -- >>> Slawek Kaplonski >>> Senior software engineer >>> Red Hat >> >> — >> Slawek Kaplonski >> Senior software engineer >> Red Hat >> > > — > Slawek Kaplonski > Senior software engineer > Red Hat > — Slawek Kaplonski Senior software engineer Red Hat From hberaud at redhat.com Wed Feb 19 16:11:24 2020 From: hberaud at redhat.com (Herve Beraud) Date: Wed, 19 Feb 2020 17:11:24 +0100 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: Welcome on board Sean! Le mer. 19 févr. 2020 à 16:54, Moises Guimaraes de Medeiros < moguimar at redhat.com> a écrit : > Welcome Sean o/ > > On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec wrote: > >> Thanks, and welcome Sean! >> >> On 2/19/20 7:51 AM, Stephen Finucane wrote: >> > Hi all, >> > >> > Just an update on some recent changes that have been made to oslo-core >> > after discussion among current, active cores. >> > >> > Added: >> > >> > scmcginnis >> > >> > Removed: >> > >> > ChangBo Guo(gcb) >> > Davanum Srinivas (dims) >> > Flavio Percoco >> > Joshua Harlow >> > Mehdi Abaakouk (sileht) >> > Michael Still >> > Victor Stinner >> > lifeless >> > >> > Please let me know if anyone has any concerns about these changes. >> > >> > Cheers, >> > Stephen >> > >> > >> >> > > -- > > Moisés Guimarães > > Software Engineer > > Red Hat > > > -- Hervé Beraud Senior Software Engineer Red Hat - Openstack Oslo irc: hberaud -----BEGIN PGP SIGNATURE----- wsFcBAABCAAQBQJb4AwCCRAHwXRBNkGNegAALSkQAHrotwCiL3VMwDR0vcja10Q+ Kf31yCutl5bAlS7tOKpPQ9XN4oC0ZSThyNNFVrg8ail0SczHXsC4rOrsPblgGRN+ RQLoCm2eO1AkB0ubCYLaq0XqSaO+Uk81QxAPkyPCEGT6SRxXr2lhADK0T86kBnMP F8RvGolu3EFjlqCVgeOZaR51PqwUlEhZXZuuNKrWZXg/oRiY4811GmnvzmUhgK5G 5+f8mUg74hfjDbR2VhjTeaLKp0PhskjOIKY3vqHXofLuaqFDD+WrAy/NgDGvN22g glGfj472T3xyHnUzM8ILgAGSghfzZF5Skj2qEeci9cB6K3Hm3osj+PbvfsXE/7Kw m/xtm+FjnaywZEv54uCmVIzQsRIm1qJscu20Qw6Q0UiPpDFqD7O6tWSRKdX11UTZ hwVQTMh9AKQDBEh2W9nnFi9kzSSNu4OQ1dRMcYHWfd9BEkccezxHwUM4Xyov5Fe0 qnbfzTB1tYkjU78loMWFaLa00ftSxP/DtQ//iYVyfVNfcCwfDszXLOqlkvGmY1/Y F1ON0ONekDZkGJsDoS6QdiUSn8RZ2mHArGEWMV00EV5DCIbCXRvywXV43ckx8Z+3 B8qUJhBqJ8RS2F+vTs3DTaXqcktgJ4UkhYC2c1gImcPRyGrK9VY0sCT+1iA+wp/O v6rDpkeNksZ9fFSyoY2o =ECSj -----END PGP SIGNATURE----- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Feb 19 17:09:18 2020 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 19 Feb 2020 17:09:18 +0000 Subject: [OSSA-2020-001] Nova can leak consoleauth token into log files (CVE-2015-9543) Message-ID: <20200219170918.4n33kxopcu7fzw3k@yuggoth.org> ============================================================= OSSA-2020-001: Nova can leak consoleauth token into log files ============================================================= :Date: February 19, 2020 :CVE: CVE-2015-9543 Affects ~~~~~~~ - Nova: <18.2.4,>=19.0.0<19.1.0,>=20.0.0<20.1.0 Description ~~~~~~~~~~~ Paul Carlton from HP reported a vulnerability in Nova. An attacker with read access to the service’s logs may obtain tokens used for console access. All Nova setups using novncproxy are affected. 
Patches ~~~~~~~ - https://review.opendev.org/707845 (Queens) - https://review.opendev.org/704255 (Rocky) - https://review.opendev.org/702181 (Stein) - https://review.opendev.org/696685 (Train) - https://review.opendev.org/220622 (Ussuri) Credits ~~~~~~~ - Paul Carlton from HP (CVE-2015-9543) References ~~~~~~~~~~ - https://launchpad.net/bugs/1492140 - http://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2015-9543 Notes ~~~~~ - The stable/queens branch is under extended maintenance and will receive no new point releases, but a patch for it is provided as a courtesy. -- Jeremy Stanley, on behalf of OpenStack Vulnerability Management -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jasonanderson at uchicago.edu Wed Feb 19 17:21:19 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Wed, 19 Feb 2020 17:21:19 +0000 Subject: [kolla-ansible] External Ceph keyring encryption Message-ID: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Hi all, My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? Thanks, /Jason [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html -- Jason Anderson Chameleon DevOps Lead Consortium for Advanced Science and Engineering, The University of Chicago Mathematics & Computer Science Division, Argonne National Laboratory -------------- next part -------------- An HTML attachment was scrubbed... URL: From mnasiadka at gmail.com Wed Feb 19 17:29:00 2020 From: mnasiadka at gmail.com (=?UTF-8?Q?Micha=C5=82_Nasiadka?=) Date: Wed, 19 Feb 2020 18:29:00 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: Hi Jason, I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. Best regards, Michal On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > Hi all, > > My understanding is that KA has dropped support for provisioning Ceph > directly, and now requires an external Ceph cluster (side note: we should > update the docs[1], which state it is only "sometimes necessary" to use an > external cluster--I will try to submit something today). > > I think this works well, but the handling of keyrings cuts a bit against > the grain of KA. The keyring files must be dropped in to the > node_custom_config directory. This means that operators who prefer to keep > their KA configuration in source control must have some mechanism for > securing that, as it is unencrypted. What does everybody think about > storing Ceph keyring secrets in passwords.yml instead, similar to how SSH > keys are handled? 
> > Thanks, > /Jason > > [1]: > https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > -- > Jason Anderson > > Chameleon DevOps Lead > *Consortium for Advanced Science and Engineering, The University of > Chicago* > *Mathematics & Computer Science Division, Argonne National Laboratory* > -- Michał Nasiadka mnasiadka at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Wed Feb 19 17:40:53 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 18:40:53 +0100 Subject: [kolla] Following Nova with XenAPI deprecation Message-ID: Hello fellow OpenStackers! There exists some logic in Kolla Ansible that handles XenAPI-specific overrides and bootstrapping with os-xenapi. It is one of those murky, untested features that we prefer to remove as soon as possible. :-) Hence we follow Nova in deprecating this functionality for removal. We will do the removal in Victoria. If you are using XenAPI with recent Kolla-Ansible, please let us know. I will be proposing deprec changes later today. As far as other deployment projects are concerned, DevStack, OpenStack-Helm and Puppet-OpenStack seem to still handle XenAPI specifics. -yoctozepto From openstack at fried.cc Wed Feb 19 17:43:57 2020 From: openstack at fried.cc (Eric Fried) Date: Wed, 19 Feb 2020 11:43:57 -0600 Subject: [all][nova][ptl] Another one bites the dust Message-ID: Dear OpenStack- Due to circumstances beyond my control, my job responsibilities will be changing shortly and I will be leaving the community. I have enjoyed my time here immensely; I have never loved a job, my colleagues, or the tools of the trade more than I have here. My last official day will be March 31st (though some portion of my remaining time will be vacation -- TBD). Unfortunately this means I will need to abdicate my position as Nova PTL mid-cycle. As noted in the last meeting [1], I'm calling for a volunteer to take over for the remainder of Ussuri. Feel free to approach me privately if you prefer. Thanks, efried [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 From radoslaw.piliszek at gmail.com Wed Feb 19 17:46:01 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 18:46:01 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: Hi Jason, Ansible autodecrypts files on copy so they can be stored encrypted. It could go in docs. :-) -yoctozepto śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > Hi Jason, > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > Best regards, > Michal > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: >> >> Hi all, >> >> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). >> >> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. 
What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? >> >> Thanks, >> /Jason >> >> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html >> >> >> -- >> Jason Anderson >> >> Chameleon DevOps Lead >> Consortium for Advanced Science and Engineering, The University of Chicago >> Mathematics & Computer Science Division, Argonne National Laboratory > > -- > Michał Nasiadka > mnasiadka at gmail.com From jasonanderson at uchicago.edu Wed Feb 19 17:52:24 2020 From: jasonanderson at uchicago.edu (Jason Anderson) Date: Wed, 19 Feb 2020 17:52:24 +0000 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> Message-ID: <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Oh wow, I did not know it could do this transparently. Thanks, I will have a look at that. I can update the docs to reference this approach as well if it works out. Cheers! /Jason On 2/19/20 11:46 AM, Radosław Piliszek wrote: > Hi Jason, > > Ansible autodecrypts files on copy so they can be stored encrypted. > It could go in docs. :-) > > -yoctozepto > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): >> Hi Jason, >> >> I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. >> >> Best regards, >> Michal >> >> On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: >>> Hi all, >>> >>> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). >>> >>> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? >>> >>> Thanks, >>> /Jason >>> >>> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html >>> >>> >>> -- >>> Jason Anderson >>> >>> Chameleon DevOps Lead >>> Consortium for Advanced Science and Engineering, The University of Chicago >>> Mathematics & Computer Science Division, Argonne National Laboratory >> -- >> Michał Nasiadka >> mnasiadka at gmail.com From nate.johnston at redhat.com Wed Feb 19 18:02:18 2020 From: nate.johnston at redhat.com (Nate Johnston) Date: Wed, 19 Feb 2020 13:02:18 -0500 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> Message-ID: <20200219180218.ahgxnglss3jrvqgp@firewall> On Wed, Feb 19, 2020 at 05:07:12PM +0100, Slawek Kaplonski wrote: > Hi, > > Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. > So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. > > Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? 
Thanks in advance for any help with that. Shall we follow the same process we used for LBaaS in [1] and [2], or does that need to wait? I think there is a good chance we will not see another release of neutron-fwaas code. Thanks Nate [1] https://review.opendev.org/#/c/705780/ [2] https://review.opendev.org/#/c/658493/ > [1] https://review.opendev.org/708675 > > > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > > > Hi, > > > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. > > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > > > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: > >> > >> Hi, > >> > >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. > >> So please reply to this email or contact me directly if You are interested in maintaining this project. > >> > >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: > >>> > >>> Hi, > >>> > >>> Over the past couple of cycles we have noticed that new contributions and > >>> maintenance efforts for neutron-fwaas project were almost non existent. > >>> This impacts patches for bug fixes, new features and reviews. The Neutron > >>> core team is trying to at least keep the CI of this project healthy, but we > >>> don’t have enough knowledge about the details of the neutron-fwaas > >>> code base to review more complex patches. > >>> > >>> During the PTG in Shanghai we discussed that with operators and TC members > >>> during the forum session [1] and later within the Neutron team during the > >>> PTG session [2]. > >>> > >>> During these discussions, with the help of operators and TC members, we reached > >>> the conclusion that we need to have someone responsible for maintaining project. > >>> This doesn’t mean that the maintainer needs to spend full time working on this > >>> project. Rather, we need someone to be the contact person for the project, who > >>> takes care of the project’s CI and review patches. Of course that’s only a > >>> minimal requirement. If the new maintainer works on new features for the > >>> project, it’s even better :) > >>> > >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is > >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas > >>> as deprecated and in “V” cycle we will propose to move the project > >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the > >>> unofficial projects hosted in the “x/“ namespace. > >>> > >>> So if You are using this project now, or if You have customers who are > >>> using it, please consider the possibility of maintaining it. Otherwise, > >>> please be aware that it is highly possible that the project will be > >>> deprecated and moved out from the official OpenStack projects. 
> >>> > >>> [1] > >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward > >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - > >>> Lines 379-421 > >>> [3] https://releases.openstack.org/ussuri/schedule.html > >>> > >>> -- > >>> Slawek Kaplonski > >>> Senior software engineer > >>> Red Hat > >> > >> — > >> Slawek Kaplonski > >> Senior software engineer > >> Red Hat > >> > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > — > Slawek Kaplonski > Senior software engineer > Red Hat > > From radoslaw.piliszek at gmail.com Wed Feb 19 18:02:47 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Wed, 19 Feb 2020 19:02:47 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Message-ID: I just realized we also do a lookup on them and not sure if that works though. -yoctozepto śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > Oh wow, I did not know it could do this transparently. Thanks, I will > have a look at that. I can update the docs to reference this approach as > well if it works out. > > Cheers! > /Jason > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > Hi Jason, > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > It could go in docs. :-) > > > > -yoctozepto > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > >> Hi Jason, > >> > >> I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > >> > >> Best regards, > >> Michal > >> > >> On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > >>> Hi all, > >>> > >>> My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > >>> > >>> I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? > >>> > >>> Thanks, > >>> /Jason > >>> > >>> [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > >>> > >>> > >>> -- > >>> Jason Anderson > >>> > >>> Chameleon DevOps Lead > >>> Consortium for Advanced Science and Engineering, The University of Chicago > >>> Mathematics & Computer Science Division, Argonne National Laboratory > >> -- > >> Michał Nasiadka > >> mnasiadka at gmail.com > From elmiko at redhat.com Wed Feb 19 19:57:36 2020 From: elmiko at redhat.com (Michael McCune) Date: Wed, 19 Feb 2020 14:57:36 -0500 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Eric, we never really got to work on the same project (outside of sig activities), but i just wanted to say thank you! 
it was always a pleasure getting to interact and collaborate with you, good luck on the new adventures =) peace o/ On Wed, Feb 19, 2020 at 12:47 PM Eric Fried wrote: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. > > Thanks, > efried > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rosmaita.fossdev at gmail.com Wed Feb 19 20:28:58 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 19 Feb 2020 15:28:58 -0500 Subject: [cinder] volume-local-cache spec - last call Message-ID: <9dfcde28-f8d8-20a8-2ac6-552b9fdc049c@gmail.com> The volume-local-cache spec [0], which was granted a Cinder spec-freeze exception, currently has two +2s. It's been reviewed by a lot of people, and as far as I can tell, everyone's concerns have been addressed at this point. However, I'd like to give people one last opportunity to verify this for themselves. Unless a substantive objection is raised, I intend to approve the spec around 12:30 UTC tomorrow (Thursday) so that it can be merged before the Nova meeting at 14:00 UTC. cheers, brian [0] https://review.opendev.org/#/c/684556/ From rosmaita.fossdev at gmail.com Wed Feb 19 20:47:48 2020 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 19 Feb 2020 15:47:48 -0500 Subject: [operators][cinder] driver removal policy update Message-ID: <860145f9-d06d-b547-3996-96b2e2fb5e51@gmail.com> The Cinder driver removal policy has been updated [0]. Drivers eligible for removal, at the discretion of the team, may remain in the code repository as long as they continue to pass OpenStack CI testing. When such a driver blocks the CI check or gate, it will be removed immediately. The intent of the policy revision is twofold. First, it gives vendors a longer grace period in which to make the necessary changes to have their drivers reinstated as ‘supported’. Second, keeping these drivers in-tree longer should make life easier for operators who have deployed storage backends with drivers that have been marked as ‘unsupported’. Please see the full statement of the policy [0] for details. [0] https://docs.openstack.org/cinder/latest/drivers-all-about.html#driver-removal From lyarwood at redhat.com Wed Feb 19 20:51:35 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Wed, 19 Feb 2020 20:51:35 +0000 Subject: [operators][cinder][nova][glance] possible data loss situation (bug 1852106) In-Reply-To: References: Message-ID: <20200219205135.3q5sqdg6ndptdbdf@lyarwood.usersys.redhat.com> On 18-02-20 17:32:38, Brian Rosmaita wrote: > Proposed Longer-term Fixes > ========================== > - In the Ussuri release, the unsupported action in step (4) above will > result in a 400 rather than an active yet unusable server. Hence it will no > longer be possible to create the image of the unusable server that causes > the issue. 
[3] Thanks for the detailed write up Brian. I've proposed backports back to stable/queens for this change below: https://review.opendev.org/#/q/Idf84ccff254d26fa13473fe9741ddac21cbcf321 Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From whayutin at redhat.com Wed Feb 19 22:27:40 2020 From: whayutin at redhat.com (Wesley Hayutin) Date: Wed, 19 Feb 2020 15:27:40 -0700 Subject: [tripleo] proposal to make Kevin Carter core Message-ID: Greetings, I'm sure by now you have all seen the contributions from Kevin to the tripleo-ansible project, transformation, mistral to ansible etc. In his short tenure in TripleO Kevin has accomplished a lot and is the number #3 contributor to tripleo for the ussuri cycle! First of all very well done, and thank you for all the contributions! Secondly, I'd like to propose Kevin to core.. Please vote by replying to this email. Thank you +1 -------------- next part -------------- An HTML attachment was scrubbed... URL: From johfulto at redhat.com Wed Feb 19 22:31:41 2020 From: johfulto at redhat.com (John Fulton) Date: Wed, 19 Feb 2020 17:31:41 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020, 5:30 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From abishop at redhat.com Wed Feb 19 22:51:53 2020 From: abishop at redhat.com (Alan Bishop) Date: Wed, 19 Feb 2020 14:51:53 -0800 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020 at 2:31 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sean.mcginnis at gmx.com Wed Feb 19 22:55:01 2020 From: sean.mcginnis at gmx.com (Sean McGinnis) Date: Wed, 19 Feb 2020 16:55:01 -0600 Subject: [oslo] Core reviewer changes In-Reply-To: References: Message-ID: <08d694df-a27c-be45-48bd-2cbc392dda37@gmx.com> Thanks all, glad to be able to help! On 2/19/20 10:11 AM, Herve Beraud wrote: > Welcome on board Sean! > > Le mer. 19 févr. 2020 à 16:54, Moises Guimaraes de Medeiros > > a écrit : > > Welcome Sean o/ > > On Wed, Feb 19, 2020 at 4:32 PM Ben Nemec > wrote: > > Thanks, and welcome Sean! > > On 2/19/20 7:51 AM, Stephen Finucane wrote: > > Hi all, > > > > Just an update on some recent changes that have been made to > oslo-core > > after discussion among current, active cores. 
> > > > Added: > > > > scmcginnis > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aschultz at redhat.com Wed Feb 19 23:04:00 2020 From: aschultz at redhat.com (Alex Schultz) Date: Wed, 19 Feb 2020 16:04:00 -0700 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, Feb 19, 2020, 3:32 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gouthampravi at gmail.com Wed Feb 19 23:44:04 2020 From: gouthampravi at gmail.com (Goutham Pacha Ravi) Date: Wed, 19 Feb 2020 15:44:04 -0800 Subject: [manila] Core Team additions Message-ID: Hello Zorillas/Stackers, I'd like to make propose some core team additions. Earlier in the cycle [1], I sought contributors who are interested in helping us maintain and grow manila. I'm happy to report that we had an fairly enthusiastic response. I'd like to propose two individuals who have stepped up to join the core maintainers team. Bear with me as I seek to support my proposals with my personal notes of endorsement: Victoria Martinez de la Cruz - Victoria has been contributing to Manila since it's inception. She has played various roles during this time and has contributed in significant ways to build this community. She's been the go-to person to seek reviews and collaborate on for CephFS integration, python-manilaclient, manila-ui maintenance and support for the OpenStack client. She has also brought onboard and mentored multiple interns on the team (Fun fact: She was recognized as Mentor of Mentors [2] by this community). It gives me great joy that she agreed to help maintain the project as a core maintainer. Carlos Eduardo - Carlos has made significant contributions to Manila for the past two releases. He worked on several feature gaps with the DHSS=True driver mode, and is now working on graduating experimental features that the project has been building since the Newton release. He performs meaningful reviews that drive good design discussions. I am happy to note that he needed little mentoring to start reviewing the OpenStack Way [3] - this is a dead give away to me to spot a dedicated maintainer who cares about growing the community, along with the project. Please give me your +/- 1s for this proposal. Thank you, Goutham Pacha Ravi (gouthamr) [1] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html [2] https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ [3] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html From emilien at redhat.com Wed Feb 19 23:49:13 2020 From: emilien at redhat.com (Emilien Macchi) Date: Wed, 19 Feb 2020 18:49:13 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: On Wed, Feb 19, 2020 at 5:34 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. 
In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Strong +1 : - his leadership in Transformation work (ie tripleo-ansible design, maintenance and review velocity) - his strong involvement in Mistral removal and understand on the underlying of TripleO. - his consistent and high rate of reviews across all TripleO projects. - his upstream contribution and participation in general, which is appreciated by everyone. Thanks Kevin for your hard work! -- Emilien Macchi -------------- next part -------------- An HTML attachment was scrubbed... URL: From viroel at gmail.com Thu Feb 20 00:15:50 2020 From: viroel at gmail.com (Douglas) Date: Wed, 19 Feb 2020 21:15:50 -0300 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: vkmc +1 carloss +1 Em qua, 19 de fev de 2020 20:45, Goutham Pacha Ravi escreveu: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From xingyang105 at gmail.com Thu Feb 20 01:58:06 2020 From: xingyang105 at gmail.com (Xing Yang) Date: Wed, 19 Feb 2020 20:58:06 -0500 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: Big +1 for Victoria! 
+1 Carlos Thanks, Xing On Wed, Feb 19, 2020 at 6:46 PM Goutham Pacha Ravi wrote: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From aoren at infinidat.com Thu Feb 20 04:38:27 2020 From: aoren at infinidat.com (Amit Oren) Date: Thu, 20 Feb 2020 06:38:27 +0200 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: Big +1 from me to both Victoria and Carlos! Amit On Thu, Feb 20, 2020 at 1:47 AM Goutham Pacha Ravi wrote: > Hello Zorillas/Stackers, > > I'd like to make propose some core team additions. Earlier in the > cycle [1], I sought contributors who are interested in helping us > maintain and grow manila. I'm happy to report that we had an fairly > enthusiastic response. I'd like to propose two individuals who have > stepped up to join the core maintainers team. Bear with me as I seek > to support my proposals with my personal notes of endorsement: > > Victoria Martinez de la Cruz - Victoria has been contributing to > Manila since it's inception. She has played various roles during this > time and has contributed in significant ways to build this community. > She's been the go-to person to seek reviews and collaborate on for > CephFS integration, python-manilaclient, manila-ui maintenance and > support for the OpenStack client. She has also brought onboard and > mentored multiple interns on the team (Fun fact: She was recognized as > Mentor of Mentors [2] by this community). 
It gives me great joy that > she agreed to help maintain the project as a core maintainer. > > Carlos Eduardo - Carlos has made significant contributions to Manila > for the past two releases. He worked on several feature gaps with the > DHSS=True driver mode, and is now working on graduating experimental > features that the project has been building since the Newton release. > He performs meaningful reviews that drive good design discussions. I > am happy to note that he needed little mentoring to start reviewing > the OpenStack Way [3] - this is a dead give away to me to spot a > dedicated maintainer who cares about growing the community, along with > the project. > > Please give me your +/- 1s for this proposal. > > Thank you, > Goutham Pacha Ravi (gouthamr) > > [1] > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html > [2] > https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ > [3] > https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sshnaidm at redhat.com Thu Feb 20 05:31:25 2020 From: sshnaidm at redhat.com (Sagi Shnaidman) Date: Thu, 20 Feb 2020 07:31:25 +0200 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Thu, Feb 20, 2020 at 12:29 AM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -- Best regards Sagi Shnaidman -------------- next part -------------- An HTML attachment was scrubbed... URL: From soulxu at gmail.com Thu Feb 20 06:12:09 2020 From: soulxu at gmail.com (Alex Xu) Date: Thu, 20 Feb 2020 14:12:09 +0800 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Hey, Eric, glad to work with you, although it isn't very long and end by mysterious decision we still don't know...but I enjoy the teamwork we have. Good luck man. Thanks Alex Eric Fried 于2020年2月20日周四 上午1:50写道: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. > > Thanks, > efried > > [1] > > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marios at redhat.com Thu Feb 20 06:34:47 2020 From: marios at redhat.com (Marios Andreou) Date: Thu, 20 Feb 2020 08:34:47 +0200 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Thu, Feb 20, 2020 at 7:33 AM Sagi Shnaidman wrote: > +1 > > On Thu, Feb 20, 2020 at 12:29 AM Wesley Hayutin > wrote: > >> Greetings, >> >> I'm sure by now you have all seen the contributions from Kevin to the >> tripleo-ansible project, transformation, mistral to ansible etc. In his >> short tenure in TripleO Kevin has accomplished a lot and is the number #3 >> contributor to tripleo for the ussuri cycle! >> >> First of all very well done, and thank you for all the contributions! >> Secondly, I'd like to propose Kevin to core.. >> >> Please vote by replying to this email. >> Thank you >> >> +1 >> > > > -- > Best regards > Sagi Shnaidman > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hjensas at redhat.com Thu Feb 20 06:54:31 2020 From: hjensas at redhat.com (Harald =?ISO-8859-1?Q?Jens=E5s?=) Date: Thu, 20 Feb 2020 07:54:31 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, 2020-02-19 at 15:27 -0700, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In > his short tenure in TripleO Kevin has accomplished a lot and is the > number #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 From fsbiz at yahoo.com Wed Feb 19 23:16:48 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Wed, 19 Feb 2020 23:16:48 +0000 (UTC) Subject: [ironic]: Failed to create config drive on disk References: <779062398.5405876.1582154208873.ref@mail.yahoo.com> Message-ID: <779062398.5405876.1582154208873@mail.yahoo.com> Hi, I'd very much appreciate some help on this. We have a medium-large ironic installation where the baremetal nodes are constantly doing provisionings and deprovisionings.We test a variety of images (Ubuntu, RedHat, Windows, etc.) in both BIOS and UEFI modes. Our approach so far is to configure all the baremetal nodes in CSM UEFI mode so that both BIOS and UEFI images can be run. And things have worked fairly well with this. Lately, I'm having this weird "Failed to create config drive on disk" issue and it is happening only with BIOS images on certain baremetal nodes. Here are the important snippets from the ironic conductor and the IPA that I've managed to narrow down. ================================Ironic conductor (before power-on)2020-02-18 13:06:33.261 DEBUG iDeploy boot mode is uefi for ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 ================================Ironic conductor (power-on)2020-02-18 13:07:18.541 INFO Successfully set node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 power state to power on by rebooting. 020-02-18 13:07:18.560 INFO Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "wait call-back" from state "deploying"; target provision state is "active" ================================== Ironic Python Agent: (I can provide the full log on request. Only relevant logs provided here for the sake of brevity). 
wipefs --force --all Feb 18 21:08:17 ironic-python-agent: DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): wipefs --force --all /dev/sda Feb 18 21:08:17 ironic-python-agent: CMD "wipefs --force --all /dev/sda" returned: 0 in 0.023s Feb 18 21:08:17 ironic-python-agentt: Execution completed, command line is "wipefs --force --all /dev/sda" Feb 18 21:08:17 ironic-python-agent: Command stdout is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:103Feb 18 21:08:17 ironic-python-agent: Command stderr is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:104 sgdisk -ZFeb 18 21:08:17 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:17.304 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): sgdisk -Z /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.321 15063 DEBUG oslo_concurrency.processutils [-] CMD "sgdisk -Z /dev/sda" returned: 0 in 1.017s Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "sgdisk -Z /dev/sda" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Command stdout is: "Creating new GPT entries in memory.Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.326 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 0 in 0.006s Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda"Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stdout is: " 15221" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stderr is: "/dev/sda: fuser /dev/sda Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.324 15063 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): fuser /dev/sda Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 1 in 0.012s Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda" Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 INFO ironic_lib.disk_utils [-] Disk metadata on /dev/sda successfully destroyed for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 start iscsi:Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_python_agent.extensions.iscsi [-] Starting ISCSI target with iqn iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 on device /dev/sda start_iscsi_targetFeb 18 21:08:19 host-10-33-23-71 kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288Feb 18 21:08:19 host-10-33-23-71 kernel: db_root: cannot 
open: /etc/targetFeb 18 21:08:19 host-10-33-23-71 WARNING ironic_python_agent.extensions.iscsi [-] Linux-IO is not available, falling back to TGT. Error: Cannot set dbroot to /etc/target. Please check if this directory exists..: RTSLibError: Cannot set dbroot to /etc/target. Please check if this directory exists.Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtd Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op show" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op show" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""  Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""Feb 18 21:08:19 host-10-33-23-71 INFO root [-] Command iscsi.start_iscsi_target completed: Command name: start_iscsi_target, params: {u'wipe_disk_metadata': True, u'iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97', u'portal_port': 3260}, status: SUCCEEDED, result: {'iscsi_target_iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97'}.Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: ::ffff:10.33.24.87 - - [18/Feb/2020 21:08:19] "POST /v1/commands?wait=true HTTP/1.1" 200 386 ====================================Back to Ironic conductor020-02-18 13:08:25.953 346408 DEBUG RPC heartbeat called for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:25.997 346408 DEBUG Heartbeat from node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:26.033 346408 Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "deploying" from state "wait call-back"; target provision state is "active" Starting iscsi:2020-02-18 13:08:28.385 Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login  2020-02-18 13:08:28.613 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login" returned: 0 in 0.229s 2020-02-18 13:08:28.615 Execution completed, command line is "iscsiadm -m node -p 10.33.23.71:3260 -T iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 --login" 2020-02-18 13:08:28.615 Command stdout is: "Logging in to [iface: default, target: iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] (multiple) Successful login to iSCSI:Login to [iface: default, target: 
iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] successful. qemu-img info2020-02-18 13:08:29.005 346408 DEBUG Running cmd (subprocess): /usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk 2020-02-18 13:08:29.072 346408 DEBUG CMD "/usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" returned: 0 in 0.067s 2020-02-18 13:08:29.073 346408 DEBUG Execution completed, command line is "env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" 2020-02-18 13:08:29.074 346408 DEBUG Command stdout is: "image: /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/diskfile format: rawvirtual size: 9.8G (10485760000 bytes)disk size: 5.4G copying image via dd: 2020-02-18 13:08:29.075 Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct 2020-02-18 13:08:49.020 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" returned: 0 in 19.945s 2020-02-18 13:08:49.021 346408 DEBUG Execution completed, command line is "dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" 2020-02-18 13:08:49.022 346408 DEBUG Command stdout is: "" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:08:49.022 346408 DEBUG Command stderr is: "10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 19.7712 s, 530 MB/s partprobe2020-02-18 13:08:49.023 346408 DEBUGRunning cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.341 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 14.318s 2020-02-18 13:09:03.341 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.342 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:03.342 346408 DEBUG Command stderr is: ""  lsblk:2020-02-18 13:09:03.343 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.519 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.176s 2020-02-18 13:09:03.519 346408 DEBUG Execution completed, command line is "lsblk -Po name,label 
/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.519 346408 DEBUG Command stdout is: "NAME="sdb" LABEL="" NAME="sdb1" LABEL="" " execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:03.520 346408 DEBUG iCommand stderr is: "" Adding configDrive partition:2020-02-18 13:09:03.772 346408 DEBUG Adding config drive partition 64 MiB to device: /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 create_config_drive_partition  blkid2020-02-18 13:09:03.773 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:03.959 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.186s 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos 2020-02-18 13:09:03.960 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:03.961 346408 DEBUG oRunning cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:04.136 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.175s 2020-02-18 13:09:04.137 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:04.137 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot; blkid2020-02-18 13:09:04.138 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:04.321 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.182s 2020-02-18 13:09:04.321 346408 DEBUG iExecution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.321 346408 DEBUG ironic_lib.utils 
[req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos2020-02-18 13:09:04.322 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  partprobe2020-02-18 13:09:04.322 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:04.510 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.188s 2020-02-18 13:09:04.511 346408 DEBUG Execution completed, command line is "partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.512 346408 DEBUG Command stdout is: "/dev/sdb: msdos partitions 1 " 2020-02-18 13:09:04.512 346408 DEBUG Command stderr is: ""  blockdev --getsize642020-02-18 13:09:04.513 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.045 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.532s 2020-02-18 13:09:05.046 346408 DEBUG Execution completed, command line is "blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.046 346408 DEBUG Command stdout is: "1000204886016" 2020-02-18 13:09:05.046 346408 DEBUG Command stderr is: ""  parted/mkpart2020-02-18 13:09:05.047 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0 2020-02-18 13:09:05.241 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" returned: 0 in 0.194s 2020-02-18 13:09:05.241 346408 DEBUG Execution completed, command line is "parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" 2020-02-18 13:09:05.242 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.242 346408 DEBUG Command stderr is: "Warning: The resulting partition is not properly aligned for best performance. 
partprobe2020-02-18 13:09:05.329 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.516 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.187s2020-02-18 13:09:05.517 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.517 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.518 346408 DEBUG Command stderr is: ""  sgdisk -v:   IS THERE ANY ISSUE WITH THE BELOW OUTPUT2020-02-18 13:09:05.518 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 2020-02-18 13:09:05.707 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.189s 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Execution completed, command line is  "sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:101 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "***************************************************************Found invalid GPT and valid MBR; converting MBR to GPT formatin memory.***************************************************************Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Identified 1 problems! 
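For reference: sgdisk is a GPT-oriented tool, so on an msdos-labelled disk like this one it converts the MBR to GPT in memory before verifying, and the "overlaps the last partition by 33 blocks" warning appears to refer to the space a backup GPT table would occupy at the very end of the disk, which the config drive partition created above with "mkpart primary fat32 -64MiB -0" runs into; whether that is connected to the config drive partition device node never showing up is not certain. A rough sketch of re-running the same checks by hand against the attached target, where DEV stands in for the iSCSI by-path device from the logs:

  # DEV is a placeholder - substitute the actual /dev/disk/by-path/ip-...-lun-1 path from the logs
  DEV=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:<node-uuid>-lun-1
  sgdisk -v "$DEV"                     # the same GPT/MBR consistency check the conductor runs
  parted -s -m "$DEV" unit MiB print   # confirm partition 2 (the 64 MiB config drive) is in the table
  partprobe "$DEV"                     # ask the kernel to re-read the partition table
  ls -l "${DEV}-part2"                 # the deploy only continues once this device node shows up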
2020-02-18 13:09:05.709 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:05.709 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print 2020-02-18 13:09:05.899 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.189s 2020-02-18 13:09:05.899 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:05.900 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot;2:953806MiB:953870MiB:64.0MiB:::lba;" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:05.900 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  2020-02-18 13:09:05.927 346408 DEBUG Waiting for the config drive partition /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 on node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 to be ready for writing. create_config_drive_partition /usr/lib/python2.7/site-packages/ironic_lib/disk_utils.py:8692020-02-18 13:09:05.927 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Running cmd (subprocess): test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:05.945 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] u'test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2' failed. Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:461 The code retries several times followed by the config drive failure error.Again, this happens on a few nodes only and happens only when I try to run BIOS based images.  UEFI based images provision just fine. Any help will be appreciated.thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From fsbiz at yahoo.com Wed Feb 19 23:27:31 2020 From: fsbiz at yahoo.com (fsbiz at yahoo.com) Date: Wed, 19 Feb 2020 23:27:31 +0000 (UTC) Subject: Fw: [ironic]: Failed to create config drive on disk In-Reply-To: <779062398.5405876.1582154208873@mail.yahoo.com> References: <779062398.5405876.1582154208873.ref@mail.yahoo.com> <779062398.5405876.1582154208873@mail.yahoo.com> Message-ID: <744752359.5442874.1582154851981@mail.yahoo.com> Hi, I'd very much appreciate some help on this. We have a medium-large ironic installation where the baremetal nodes are constantly doing provisionings and deprovisionings.Our approach so far is to configure all the baremetal nodes in CSM UEFI mode so that both BIOS and UEFI images can be run. And things have worked fairly well with this. Lately, I'm having this weird "Failed to create config drive on disk" issue and it is happening only with BIOS images on certain baremetal nodes. 
Here are the important snippets from the ironic conductor and the IPA that I've managed to narrow down. ================================== Ironic conductor (power-on) 2020-02-18 13:07:18.541 INFO Successfully set node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 power state to power on by rebooting. 020-02-18 13:07:18.560 INFO Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "wait call-back" from state "deploying"; target provision state is "active" ================================== Ironic Python Agent: (I can provide the full log on request. Only relevant logs provided here for the sake of brevity). wipefs --force --all Feb 18 21:08:17 ironic-python-agent: CMD "wipefs --force --all /dev/sda" returned: 0 in 0.023s  Feb 18 21:08:17 ironic-python-agentt: Execution completed, command line is "wipefs --force --all /dev/sda" Feb 18 21:08:17 ironic-python-agent: Command stdout is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:103Feb 18 21:08:17 ironic-python-agent: Command stderr is: "" execute /usr/share/ironic-python-agent/venv/lib/python2.7/site-packages/ironic_lib/utils.py:104 sgdisk -ZFeb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.321 15063 DEBUG oslo_concurrency.processutils [-] CMD "sgdisk -Z /dev/sda" returned: 0 in 1.017s  Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "sgdisk -Z /dev/sda" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.323 15063 DEBUG ironic_lib.utils [-] Command stdout is: "Creating new GPT entries in memory.Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: GPT data structures destroyed! You may now partition the disk using fdisk or other utilities. 
fuser /dev/sda Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 0 in 0.006s  Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda"Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stdout is: " 15221" Feb 18 21:08:18 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:18.332 15063 DEBUG ironic_lib.utils [-] Command stderr is: "/dev/sda: fuser /dev/sdaFeb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG oslo_concurrency.processutils [-] CMD "fuser /dev/sda" returned: 1 in 0.012s  Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_lib.utils [-] Execution completed, command line is "fuser /dev/sda" Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 INFO ironic_lib.disk_utils [-] Disk metadata on /dev/sda successfully destroyed for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 start iscsi:Feb 18 21:08:19 host-10-33-23-71 ironic-python-agent[15063]: 2020-02-18 21:08:19.336 15063 DEBUG ironic_python_agent.extensions.iscsi [-] Starting ISCSI target with iqn iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 on device /dev/sda start_iscsi_targetFeb 18 21:08:19 host-10-33-23-71 kernel: Rounding down aligned max_sectors from 4294967295 to 4294967288Feb 18 21:08:19 host-10-33-23-71 kernel: db_root: cannot open: /etc/targetFeb 18 21:08:19 host-10-33-23-71 WARNING ironic_python_agent.extensions.iscsi [-] Linux-IO is not available, falling back to TGT. Error: Cannot set dbroot to /etc/target. Please check if this directory exists..: RTSLibError: Cannot set dbroot to /etc/target. 
Please check if this directory exists.Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtd Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op show" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op show" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""  Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 Feb 18 21:08:19 host-10-33-23-71 DEBUG oslo_concurrency.processutils [-] CMD "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" returned: 0 in 0.002s Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Execution completed, command line is "tgtadm --lld iscsi --mode target --op new --tid 1 --targetname iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stdout is: "" Feb 18 21:08:19 host-10-33-23-71 DEBUG ironic_lib.utils [-] Command stderr is: ""Feb 18 21:08:19 host-10-33-23-71 INFO root [-] Command iscsi.start_iscsi_target completed: Command name: start_iscsi_target, params: {u'wipe_disk_metadata': True, u'iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97', u'portal_port': 3260}, status: SUCCEEDED, result: {'iscsi_target_iqn': u'iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97'}. ====================================Back to Ironic conductor020-02-18 13:08:25.953 346408 DEBUG RPC heartbeat called for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:25.997 346408 DEBUG Heartbeat from node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 heartbeat 2020-02-18 13:08:26.033 346408 Node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 moved to provision state "deploying" from state "wait call-back"; target provision state is "active" Successful login to iSCSI: Login to [iface: default, target: iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97, portal: 10.33.23.71,3260] successful. 
qemu-img info2020-02-18 13:08:29.072 346408 DEBUG CMD "/usr/bin/python2 -m oslo_concurrency.prlimit --as=1073741824 -- env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" returned: 0 in 0.067s  2020-02-18 13:08:29.073 346408 DEBUG Execution completed, command line is "env LC_ALL=C LANG=C qemu-img info /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk" 2020-02-18 13:08:29.074 346408 DEBUG Command stdout is: "image: /var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/diskfile format: rawvirtual size: 9.8G (10485760000 bytes)disk size: 5.4G copying image via dd: 2020-02-18 13:08:49.020 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" returned: 0 in 19.945s 2020-02-18 13:08:49.021 346408 DEBUG Execution completed, command line is "dd if=/var/lib/ironic/images/ea6f8eda-402c-4fb0-a2c4-61c83bc73f97/disk of=/dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 bs=1M oflag=direct" 2020-02-18 13:08:49.022 346408 DEBUG Command stdout is: "" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:08:49.022 346408 DEBUG Command stderr is: "10000+0 records in 10000+0 records out 10485760000 bytes (10 GB) copied, 19.7712 s, 530 MB/s partprobe2020-02-18 13:09:03.341 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 14.318s 2020-02-18 13:09:03.341 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.342 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:03.342 346408 DEBUG Command stderr is: ""  lsblk:2020-02-18 13:09:03.519 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.176s 2020-02-18 13:09:03.519 346408 DEBUG Execution completed, command line is "lsblk -Po name,label /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.519 346408 DEBUG Command stdout is: "NAME="sdb" LABEL="" NAME="sdb1" LABEL="" " execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:03.520 346408 DEBUG iCommand stderr is: "" Adding configDrive partition:2020-02-18 13:09:03.772 346408 DEBUG Adding config drive partition 64 MiB to device: /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 for node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 create_config_drive_partition  blkid2020-02-18 13:09:03.959 346408 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.186s  2020-02-18 13:09:03.960 346408 Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:03.960 
346408 Command stdout is: "dos2020-02-18 13:09:03.960 346408 Command stderr is: ""  parted2020-02-18 13:09:03.961 346408 DEBUG Running cmd (subprocess): sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:04.136 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.175s 2020-02-18 13:09:04.137 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:04.137 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot; blkid2020-02-18 13:09:04.321 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.182s  2020-02-18 13:09:04.321 346408 DEBUG Execution completed, command line is "blkid -p -o value -s PTTYPE /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.321 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "dos2020-02-18 13:09:04.322 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  partprobe2020-02-18 13:09:04.510 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.188s 2020-02-18 13:09:04.511 346408 DEBUG Execution completed, command line is "partprobe -d -s /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:04.512 346408 DEBUG Command stdout is: "/dev/sdb: msdos partitions 1 " 2020-02-18 13:09:04.512 346408 DEBUG Command stderr is: ""  blockdev --getsize642020-02-18 13:09:05.045 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.532s  2020-02-18 13:09:05.046 346408 DEBUG Execution completed, command line is "blockdev --getsize64 /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.046 346408 DEBUG Command stdout is: "1000204886016" 2020-02-18 13:09:05.046 346408 DEBUG Command stderr is: ""  parted/mkpart2020-02-18 13:09:05.241 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" returned: 0 in 0.194s 2020-02-18 13:09:05.241 346408 DEBUG Execution completed, command line is "parted -a optimal -s -- /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 mkpart primary fat32 -64MiB -0" 2020-02-18 13:09:05.242 
346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.242 346408 DEBUG Command stderr is: "Warning: The resulting partition is not properly aligned for best performance. partprobe2020-02-18 13:09:05.516 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.187s 2020-02-18 13:09:05.517 346408 DEBUG Execution completed, command line is "partprobe /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" 2020-02-18 13:09:05.517 346408 DEBUG Command stdout is: "" 2020-02-18 13:09:05.518 346408 DEBUG Command stderr is: ""  sgdisk -v:   IS THERE ANY ISSUE WITH THE BELOW OUTPUT2020-02-18 13:09:05.707 346408 CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" returned: 0 in 0.189s  2020-02-18 13:09:05.708 346408 Execution completed, command line is  "sgdisk -v /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:101 2020-02-18 13:09:05.708 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stdout is: "***************************************************************Found invalid GPT and valid MBR; converting MBR to GPT formatin memory.***************************************************************Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Warning! Secondary partition table overlaps the last partition by33 blocks!You will need to delete this partition or resize it in another utility.Identified 1 problems! 2020-02-18 13:09:05.709 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  parted2020-02-18 13:09:05.899 346408 DEBUG CMD "sudo ironic-rootwrap /etc/ironic/rootwrap.conf  parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" returned: 0 in 0.189s 2020-02-18 13:09:05.899 346408 DEBUG Execution completed, command line is "parted -s -m /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1 unit MiB print" 2020-02-18 13:09:05.900 346408 DEBUG Command stdout is: "BYT;/dev/sdb:953870MiB:scsi:512:512:msdos:IET VIRTUAL-DISK:;1:1.00MiB:9999MiB:9998MiB:ext4::boot;2:953806MiB:953870MiB:64.0MiB:::lba;" execute /usr/lib/python2.7/site-packages/ironic_lib/utils.py:1032020-02-18 13:09:05.900 346408 DEBUG ironic_lib.utils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Command stderr is: ""  2020-02-18 13:09:05.927 346408 DEBUG Waiting for the config drive partition /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 on node ea6f8eda-402c-4fb0-a2c4-61c83bc73f97 to be ready for writing. 
create_config_drive_partition /usr/lib/python2.7/site-packages/ironic_lib/disk_utils.py:8692020-02-18 13:09:05.927 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] Running cmd (subprocess): test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2 execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:3722020-02-18 13:09:05.945 346408 DEBUG oslo_concurrency.processutils [req-34f931af-69e5-4ff4-b37b-6b6df8d7f560 - - - - -] u'test -e /dev/disk/by-path/ip-10.33.23.71:3260-iscsi-iqn.2008-10.org.openstack:ea6f8eda-402c-4fb0-a2c4-61c83bc73f97-lun-1-part2' failed. Retrying. execute /usr/lib/python2.7/site-packages/oslo_concurrency/processutils.py:461 The code retries several times followed by the config drive failure error.Again, this happens on a few nodes only and happens only when I try to run BIOS based images.  UEFI based images provision just fine. Any help will be appreciated.thanks,Fred. -------------- next part -------------- An HTML attachment was scrubbed... URL: From rodrigo.barbieri2010 at gmail.com Thu Feb 20 08:05:45 2020 From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri) Date: Thu, 20 Feb 2020 05:05:45 -0300 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: +1 to both On Thu, Feb 20, 2020 at 1:45 AM Amit Oren wrote: > Big +1 from me to both Victoria and Carlos! > > Amit > > On Thu, Feb 20, 2020 at 1:47 AM Goutham Pacha Ravi > wrote: > >> Hello Zorillas/Stackers, >> >> I'd like to make propose some core team additions. Earlier in the >> cycle [1], I sought contributors who are interested in helping us >> maintain and grow manila. I'm happy to report that we had an fairly >> enthusiastic response. I'd like to propose two individuals who have >> stepped up to join the core maintainers team. Bear with me as I seek >> to support my proposals with my personal notes of endorsement: >> >> Victoria Martinez de la Cruz - Victoria has been contributing to >> Manila since it's inception. She has played various roles during this >> time and has contributed in significant ways to build this community. >> She's been the go-to person to seek reviews and collaborate on for >> CephFS integration, python-manilaclient, manila-ui maintenance and >> support for the OpenStack client. She has also brought onboard and >> mentored multiple interns on the team (Fun fact: She was recognized as >> Mentor of Mentors [2] by this community). It gives me great joy that >> she agreed to help maintain the project as a core maintainer. >> >> Carlos Eduardo - Carlos has made significant contributions to Manila >> for the past two releases. He worked on several feature gaps with the >> DHSS=True driver mode, and is now working on graduating experimental >> features that the project has been building since the Newton release. >> He performs meaningful reviews that drive good design discussions. I >> am happy to note that he needed little mentoring to start reviewing >> the OpenStack Way [3] - this is a dead give away to me to spot a >> dedicated maintainer who cares about growing the community, along with >> the project. >> >> Please give me your +/- 1s for this proposal. 
>> >> Thank you, >> Goutham Pacha Ravi (gouthamr) >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >> [2] >> https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >> [3] >> https://docs.openstack.org/project-team-guide/review-the-openstack-way.html >> >> -- Rodrigo Barbieri MSc Computer Scientist OpenStack Manila Core Contributor Federal University of São Carlos -------------- next part -------------- An HTML attachment was scrubbed... URL: From jistr at redhat.com Thu Feb 20 08:30:00 2020 From: jistr at redhat.com (=?UTF-8?B?SmnFmcOtIFN0csOhbnNrw70=?=) Date: Thu, 20 Feb 2020 09:30:00 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On 19. 02. 20 23:27, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > From J.Horstmann at mittwald.de Thu Feb 20 08:36:01 2020 From: J.Horstmann at mittwald.de (Jan Horstmann) Date: Thu, 20 Feb 2020 08:36:01 +0000 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> Message-ID: <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> We had the exact same problem with an external ceph cluster and keyrings in source control. I can confirm that it works fine. The lookup was introduced on purpose in order to make vault encrypted keyrings possible ([1]). [1]: https://review.opendev.org/#/c/689753/ On Wed, 2020-02-19 at 19:02 +0100, Radosław Piliszek wrote: > I just realized we also do a lookup on them and not sure if that works though. > > -yoctozepto > > śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > Oh wow, I did not know it could do this transparently. Thanks, I will > > have a look at that. I can update the docs to reference this approach as > > well if it works out. > > > > Cheers! > > /Jason > > > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > > Hi Jason, > > > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > > It could go in docs. :-) > > > > > > -yoctozepto > > > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > > > Hi Jason, > > > > > > > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > > > > > > > Best regards, > > > > Michal > > > > > > > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > > > > > Hi all, > > > > > > > > > > My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > > > > > > > > > > I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. 
This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? > > > > > > > > > > Thanks, > > > > > /Jason > > > > > > > > > > [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > > > > > > > > > > > > > -- > > > > > Jason Anderson > > > > > > > > > > Chameleon DevOps Lead > > > > > Consortium for Advanced Science and Engineering, The University of Chicago > > > > > Mathematics & Computer Science Division, Argonne National Laboratory > > > > -- > > > > Michał Nasiadka > > > > mnasiadka at gmail.com -- Jan Horstmann Systementwickler | Infrastruktur _____ Mittwald CM Service GmbH & Co. KG Königsberger Straße 4-6 32339 Espelkamp Tel.: 05772 / 293-900 Fax: 05772 / 293-333 j.horstmann at mittwald.de https://www.mittwald.de Geschäftsführer: Robert Meyer, Florian Jürgens St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad Oeynhausen Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad Oeynhausen Informationen zur Datenverarbeitung im Rahmen unserer Geschäftstätigkeit gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From amotoki at gmail.com Thu Feb 20 08:37:06 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Feb 2020 17:37:06 +0900 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <20200219180218.ahgxnglss3jrvqgp@firewall> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> Message-ID: On Thu, Feb 20, 2020 at 3:04 AM Nate Johnston wrote: > > On Wed, Feb 19, 2020 at 05:07:12PM +0100, Slawek Kaplonski wrote: > > Hi, > > > > Ussuri-2 milestone is behind us and we still don’t have anyone who would like to maintain neutron-fwaas project upstream. > > So I just sent patch [1] to add info to project’s docs that it is deprecated in Neutron stadium. > > > > Question, to TC members mostly - should I do also some other actions to officially deprecate this project? Any changes to governance repo or something like that? Thanks in advance for any help with that. > > Shall we follow the same process we used for LBaaS in [1] and [2], or does that > need to wait? In case of LBaaS, the repository was marked as 'deprecated' when the master branch was retired. When neutron-lbaas was deprecated (in the master branch), no change happened in the governance side. > I think there is a good chance we will not see another release of > neutron-fwaas code. I also would like to note that 'deprecation' does not mean we stop new releases. neutron-lbaas continued to cut releases even after the deprecation is enabled. IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens release and it was released till Stein. neutron-fwaas is being marked as deprecated, so I think we still need to release it until neutron-fwaas was retired in the master branch. Thanks, Akihiro > > Thanks > > Nate > > [1] https://review.opendev.org/#/c/705780/ > [2] https://review.opendev.org/#/c/658493/ > > > [1] https://review.opendev.org/708675 > > > > > On 20 Jan 2020, at 21:26, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > We are getting closer and closer to Ussuri-2 milestone which is our deadline to deprecate neutron-fwaas project if there will be no any volunteers to maintain this project. 
> > > So if You are interested in this project, please raise Your hand here or ping me on IRC about that. > > > > > >> On 6 Jan 2020, at 21:05, Slawek Kaplonski wrote: > > >> > > >> Hi, > > >> > > >> Just as a reminder, we are still looking for maintainers who want to keep neutron-fwaas project alive. As it was written in my previous email, we will mark this project as deprecated. > > >> So please reply to this email or contact me directly if You are interested in maintaining this project. > > >> > > >>> On 19 Nov 2019, at 11:26, Slawek Kaplonski wrote: > > >>> > > >>> Hi, > > >>> > > >>> Over the past couple of cycles we have noticed that new contributions and > > >>> maintenance efforts for neutron-fwaas project were almost non existent. > > >>> This impacts patches for bug fixes, new features and reviews. The Neutron > > >>> core team is trying to at least keep the CI of this project healthy, but we > > >>> don’t have enough knowledge about the details of the neutron-fwaas > > >>> code base to review more complex patches. > > >>> > > >>> During the PTG in Shanghai we discussed that with operators and TC members > > >>> during the forum session [1] and later within the Neutron team during the > > >>> PTG session [2]. > > >>> > > >>> During these discussions, with the help of operators and TC members, we reached > > >>> the conclusion that we need to have someone responsible for maintaining project. > > >>> This doesn’t mean that the maintainer needs to spend full time working on this > > >>> project. Rather, we need someone to be the contact person for the project, who > > >>> takes care of the project’s CI and review patches. Of course that’s only a > > >>> minimal requirement. If the new maintainer works on new features for the > > >>> project, it’s even better :) > > >>> > > >>> If we don’t have any new maintainer(s) before milestone Ussuri-2, which is > > >>> Feb 10 - Feb 14 according to [3], we will need to mark neutron-fwaas > > >>> as deprecated and in “V” cycle we will propose to move the project > > >>> from the Neutron stadium, hosted in the “openstack/“ namespace, to the > > >>> unofficial projects hosted in the “x/“ namespace. > > >>> > > >>> So if You are using this project now, or if You have customers who are > > >>> using it, please consider the possibility of maintaining it. Otherwise, > > >>> please be aware that it is highly possible that the project will be > > >>> deprecated and moved out from the official OpenStack projects. 
> > >>> > > >>> [1] > > >>> https://etherpad.openstack.org/p/PVG-Neutron-stadium-projects-the-path-forward > > >>> [2] https://etherpad.openstack.org/p/Shanghai-Neutron-Planning-restored - > > >>> Lines 379-421 > > >>> [3] https://releases.openstack.org/ussuri/schedule.html > > >>> > > >>> -- > > >>> Slawek Kaplonski > > >>> Senior software engineer > > >>> Red Hat > > >> > > >> — > > >> Slawek Kaplonski > > >> Senior software engineer > > >> Red Hat > > >> > > > > > > — > > > Slawek Kaplonski > > > Senior software engineer > > > Red Hat > > > > > > > — > > Slawek Kaplonski > > Senior software engineer > > Red Hat > > > > > > From ssbarnea at redhat.com Thu Feb 20 08:39:15 2020 From: ssbarnea at redhat.com (Sorin Sbarnea) Date: Thu, 20 Feb 2020 08:39:15 +0000 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 On Wed, 19 Feb 2020 at 22:31, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -- -- /sorin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mbultel at redhat.com Thu Feb 20 08:51:15 2020 From: mbultel at redhat.com (Mathieu Bultel) Date: Thu, 20 Feb 2020 09:51:15 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1 indeed On Wed, Feb 19, 2020 at 11:32 PM Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc. In his > short tenure in TripleO Kevin has accomplished a lot and is the number #3 > contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gfidente at redhat.com Thu Feb 20 08:54:28 2020 From: gfidente at redhat.com (Giulio Fidente) Date: Thu, 20 Feb 2020 09:54:28 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: <1126709d-2034-6a24-6146-d4d616c10a15@redhat.com> On 2/19/20 11:27 PM, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc.  In his > short tenure in TripleO Kevin has accomplished a lot and is the number > #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. 
this should have happened yesterday already thanks Kevin -- Giulio Fidente GPG KEY: 08D733BA From eblock at nde.ag Thu Feb 20 09:04:18 2020 From: eblock at nde.ag (Eugen Block) Date: Thu, 20 Feb 2020 09:04:18 +0000 Subject: VPNaaS with multiple endpoint groups In-Reply-To: <20200214073752.Horde.fEamBT4aCkEnszSfxNwdzso@webmail.nde.ag> Message-ID: <20200220090418.Horde.2-pw9T_UT902LggR7hsWDxl@webmail.nde.ag> Hi, I'll respond to my own question in case someone else is looking for something similar. It turned out that the customer tried to do this with IKEv1 instead of v2 (I know, v2 is not really "new"). After configuring both sites to use v2 the tunnel accepted both peer endpoints. They also had a flaw in the network design which prevented a many-to-many connection through the same tunnel. They're reworking it, so for now this question can be considered closed. Regards, Eugen Zitat von Eugen Block : > Hi all, > > is anyone here able to help with a vpn issue? > It's not really my strong suit but I'll try to explain. > In a Rocky environment (using openvswitch) a customer has setup a > VPN service successfully, but that only seems to work if there's > only one local and one peer endpoint group. According to the docs it > should work with multiple endpoint groups, as far as I could tell > the setup looks fine and matches the docs (don't create the subnet > when creating the vpn service but use said endpoint groups). > > What we're seeing is that as soon as the vpn site connection is > created with multiple endpoints only one of the destination IPs is > reachable. And it seems as if it's always the first in the list of > EPs (see below). > > This seems to be reflected in the iptables where we also only see > one of the required IP ranges. Also neutron reports duplicate rules > if we try to use both EPs: > > 2020-02-12 14:14:27.638 16275 WARNING > neutron.agent.linux.iptables_manager > [req-92ff6f06-3a92-4daa-aeea-9c02dc9a31c3 > ba9bf239530d461baea2f6f60bd301e6 850dad648ce94dbaa5c0ea2fb450bbda - > - -] Duplicate iptables rule detected. This may indicate a bug in > the iptables rule generation code. Line: -A > neutron-l3-agent-POSTROUTING -s X.X.252.0/24 -d Y.Y.0.0/16 -m policy > --dir out --pol ipsec -j ACCEPT > > These are the configured endpoints: > > root at control:~ # openstack vpn endpoint group list > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > | ID | Name | Type | > Endpoints > | > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > | 0f853567-e4bf-4019-9290-4cd9f94a9793 | peer-ep-group-1 | cidr | > [u'X.X.253.0/24'] > | > | 152a0f9e-ce49-4769-94f1-bc0bebedd3ec | peer-ep-group-2 | cidr | > [u'X.X.253.0/24', u'Y.Y.0.0/16'] > | > | 791ab8ef-e150-4ba0-ac2c-c044659f509e | local-ep-group1 | subnet | > [u'38efad5e-0f1e-4e36-8995-74a611bfef41'] > | > | 810b0bf2-d258-459b-9b57-ae5b491ea612 | local-ep-group2 | subnet | > [u'38efad5e-0f1e-4e36-8995-74a611bfef41', > u'9e35d80f-029e-4cc1-a30b-1753f7683e16'] | > | b5c79e08-41e4-441c-9ed3-9b02c2654173 | peer-ep-group-3 | cidr | > [u'Y.Y.0.0/16'] > | > +--------------------------------------+-----------------+--------+------------------------------------------------------------------------------------+ > > > Has anyone experience with this and could help me out?
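For reference, a minimal sketch of the working layout Eugen describes above (IKEv2 plus endpoint groups), using the VPNaaS commands of the openstack client. The option names follow the Rocky-era client plugin and may differ slightly between releases; the resource names, CIDRs, peer address and pre-shared key are illustrative placeholders only:

    # IKE policy explicitly requesting IKEv2, plus a matching IPsec policy
    openstack vpn ike policy create ikepolicy-v2 --ike-version v2
    openstack vpn ipsec policy create ipsecpolicy-v2

    # VPN service created without a subnet, so endpoint groups are used instead
    openstack vpn service create vpn-svc --router router1

    # local groups reference subnets, peer groups reference CIDRs
    openstack vpn endpoint group create local-ep-group --type subnet --value subnet1 --value subnet2
    openstack vpn endpoint group create peer-ep-group --type cidr --value X.X.253.0/24 --value Y.Y.0.0/16

    # one site connection carrying both endpoint groups
    openstack vpn ipsec site connection create conn-site1 \
        --vpnservice vpn-svc --ikepolicy ikepolicy-v2 --ipsecpolicy ipsecpolicy-v2 \
        --peer-address 203.0.113.10 --peer-id 203.0.113.10 --psk sharedsecret \
        --local-endpoint-group local-ep-group --peer-endpoint-group peer-ep-group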
> > Another follow-up question: how can we gather some information > regarding the ipsec status? Althoug there is a active tunnel we > don't see anything with 'ipsec statusall', I've checked all > namespaces on the control node. > > Any help is highly appreciated! > > Best regards, > Eugen From sbauza at redhat.com Thu Feb 20 09:58:17 2020 From: sbauza at redhat.com (Sylvain Bauza) Date: Thu, 20 Feb 2020 10:58:17 +0100 Subject: [nova] Ussuri feature scrub In-Reply-To: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> References: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Message-ID: On Mon, Feb 17, 2020 at 11:22 PM Eric Fried wrote: > Nova maintainers and contributors- > > { Please refer to this ML thread [1][2] for background. } > > Now that spec freeze has passed, I would like to assess the > Design:Approved blueprints and understand how many we could reasonably > expect to land in Ussuri. > > We completed 25 blueprints in Train. However, mriedem is gone, and it is > likely that I will drop off the radar after Q1. Obviously all > blueprints/releases/reviews/etc. are not created equal, but using > stackalytics review numbers as a rough heuristic, I expect about 20 > blueprints to get completed in Ussuri. If we figure that 5-ish of the > incompletes will be due to factors beyond our control, that would mean > we should Direction:Approve about 25. > > As of this writing: > - 30 blueprints are targeted for ussuri [3]. Of these, > - 7 are already implemented. Of the remaining 23, > - 2 are not yet Design:Approved. These will need an exception if they > are to proceed. And > - 19 (including the unapproved ones) have code in various stages. > > I would like to see us cut 5-ish of the 30. > > While I understand your concerns, I'd like us to stop thinking at the above fact as a problem. What's honestly the issue if we only have, say, 20 specs be implemented ? Also, why could we say which specs should be cut, if we already agreed them ? And which ones ? We are a community where everyone tries to work upstream when they can. And it's fine. -Sylvain I have made an etherpad [4] with the unimplemented blueprints listed > with owners and code links. I made notes on some of the ones I would > like to see prioritized, and a couple on which I'm more meh. If you have > a stake in Nova/Ussuri, I encourage you to weigh in. > > How will we ultimately decide? Will we actually cut anything? I don't > have the answers yet. Let's go through this exercise and see if anything > obvious falls out, and then we can figure out the next steps. > > Thanks, > efried > > [1] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-September/009832.html > [2] > > http://lists.openstack.org/pipermail/openstack-discuss/2019-October/thread.html#9835 > [3] https://blueprints.launchpad.net/nova/ussuri > [4] https://etherpad.openstack.org/p/nova-ussuri-planning > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bdobreli at redhat.com Thu Feb 20 10:21:53 2020 From: bdobreli at redhat.com (Bogdan Dobrelya) Date: Thu, 20 Feb 2020 11:21:53 +0100 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: <301af001-2faa-1efc-11b0-ef8869850dae@redhat.com> On 19.02.2020 23:27, Wesley Hayutin wrote: > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the > tripleo-ansible project, transformation, mistral to ansible etc.  
In his > short tenure in TripleO Kevin has accomplished a lot and is the number > #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 +1 -- Best regards, Bogdan Dobrelya, Irc #bogdando From radoslaw.piliszek at gmail.com Thu Feb 20 10:26:41 2020 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 20 Feb 2020 11:26:41 +0100 Subject: [kolla-ansible] External Ceph keyring encryption In-Reply-To: <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> References: <03ae459c-bcc5-6564-0b0b-eb1153e39126@uchicago.edu> <6824f3ae-846c-2130-e83a-296f0a64075e@uchicago.edu> <546d83f6859b960bce53911e425a7011df4512e3.camel@mittwald.de> Message-ID: Ah, yeah. I had this gut feeling I tested that and I was right (but forgetful). -yoctozepto czw., 20 lut 2020 o 09:50 Jan Horstmann napisał(a): > > We had the exact same problem with an external ceph cluster and > keyrings in source control. I can confirm that it works fine. > The lookup was introduced on purpose in order to make vault encrypted > keyrings possible ([1]). > > > [1]: https://review.opendev.org/#/c/689753/ > > On Wed, 2020-02-19 at 19:02 +0100, Radosław Piliszek wrote: > > I just realized we also do a lookup on them and not sure if that works though. > > > > -yoctozepto > > > > śr., 19 lut 2020 o 18:52 Jason Anderson napisał(a): > > > Oh wow, I did not know it could do this transparently. Thanks, I will > > > have a look at that. I can update the docs to reference this approach as > > > well if it works out. > > > > > > Cheers! > > > /Jason > > > > > > On 2/19/20 11:46 AM, Radosław Piliszek wrote: > > > > Hi Jason, > > > > > > > > Ansible autodecrypts files on copy so they can be stored encrypted. > > > > It could go in docs. :-) > > > > > > > > -yoctozepto > > > > > > > > śr., 19 lut 2020 o 18:36 Michał Nasiadka napisał(a): > > > > > Hi Jason, > > > > > > > > > > I don’t think it should be instead, we could support both modes - happy to help in reviewing/co-authoring. > > > > > > > > > > Best regards, > > > > > Michal > > > > > > > > > > On Wed, 19 Feb 2020 at 18:23, Jason Anderson wrote: > > > > > > Hi all, > > > > > > > > > > > > My understanding is that KA has dropped support for provisioning Ceph directly, and now requires an external Ceph cluster (side note: we should update the docs[1], which state it is only "sometimes necessary" to use an external cluster--I will try to submit something today). > > > > > > > > > > > > I think this works well, but the handling of keyrings cuts a bit against the grain of KA. The keyring files must be dropped in to the node_custom_config directory. This means that operators who prefer to keep their KA configuration in source control must have some mechanism for securing that, as it is unencrypted. What does everybody think about storing Ceph keyring secrets in passwords.yml instead, similar to how SSH keys are handled? 
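A minimal sketch of the vault-based handling Jan and Radosław describe above, assuming the keyring sits in the usual node_custom_config layout from the external Ceph guide; the path and the way the vault password reaches ansible-playbook are illustrative assumptions and depend on the kolla-ansible release in use:

    # encrypt the keyring in place so it can be committed to source control
    ansible-vault encrypt /etc/kolla/config/cinder/cinder-volume/ceph.client.cinder.keyring

    # Ansible decrypts vaulted files transparently when copying them, so a normal
    # deploy works as long as the vault password is supplied, for example:
    kolla-ansible -i multinode deploy --ask-vault-pass    # flag pass-through varies by release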
> > > > > > > > > > > > Thanks, > > > > > > /Jason > > > > > > > > > > > > [1]: https://docs.openstack.org/kolla-ansible/latest/reference/storage/external-ceph-guide.html > > > > > > > > > > > > > > > > > > -- > > > > > > Jason Anderson > > > > > > > > > > > > Chameleon DevOps Lead > > > > > > Consortium for Advanced Science and Engineering, The University of Chicago > > > > > > Mathematics & Computer Science Division, Argonne National Laboratory > > > > > -- > > > > > Michał Nasiadka > > > > > mnasiadka at gmail.com > -- > Jan Horstmann > Systementwickler | Infrastruktur > _____ > > > Mittwald CM Service GmbH & Co. KG > Königsberger Straße 4-6 > 32339 Espelkamp > > Tel.: 05772 / 293-900 > Fax: 05772 / 293-333 > > j.horstmann at mittwald.de > https://www.mittwald.de > > Geschäftsführer: Robert Meyer, Florian Jürgens > > St.Nr.: 331/5721/1033, USt-IdNr.: DE814773217, HRA 6640, AG Bad > Oeynhausen > Komplementärin: Robert Meyer Verwaltungs GmbH, HRB 13260, AG Bad > Oeynhausen > > Informationen zur Datenverarbeitung im Rahmen unserer > Geschäftstätigkeit > gemäß Art. 13-14 DSGVO sind unter www.mittwald.de/ds abrufbar. From lyarwood at redhat.com Thu Feb 20 10:28:40 2020 From: lyarwood at redhat.com (Lee Yarwood) Date: Thu, 20 Feb 2020 10:28:40 +0000 Subject: [nova] context being introduced to nova.virt.driver.ComputeDriver.extend_volume Message-ID: <20200220102840.bmlwg6jfmjcunqio@lyarwood.usersys.redhat.com> Hello all, I'm looking to introduce the request context to the signature of extend_volume in the following change: virt: Pass request context to extend_volume https://review.opendev.org/#/c/706899/ Any out of tree drivers implementing extend_volume will also need to add the request context to their implementations once this lands. This is part of a wider bugfix series below: https://review.opendev.org/#/q/topic:bug/1861071 Also discussed on the ML in the following thread: http://lists.openstack.org/pipermail/openstack-discuss/2020-February/012551.html If there are any concerns or issues with this then please let me know! Cheers, -- Lee Yarwood A5D1 9385 88CB 7E5F BE64 6618 BCA6 6E33 F672 2D76 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: not available URL: From thierry at openstack.org Thu Feb 20 10:29:44 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 11:29:44 +0100 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: <7a7657fa-0bc1-d6db-49ee-94e3b49e6d85@openstack.org> Eric Fried wrote: > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. Thanks Eric for everything you did for Nova, but also to keep the OpenStack gate up and running for everyone. It was a pleasure and privilege to work with you. Good luck on your future endeavors ! > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. Another one bites the dust, but The show must go on. 
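Returning to Lee's note above on ComputeDriver.extend_volume: a rough sketch of the adjustment an out-of-tree virt driver would need once the request context becomes part of the signature. The parameter order shown is an assumption based on the proposed review, not the merged interface:

    # previously: def extend_volume(self, connection_info, instance, requested_size)
    def extend_volume(self, context, connection_info, instance, requested_size):
        """Extend the attached volume; context is the caller's nova RequestContext."""
        raise NotImplementedError()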
-- Thierry From thierry at openstack.org Thu Feb 20 10:33:45 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 11:33:45 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> Message-ID: <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> Akihiro Motoki wrote: > [...] > I also would like to note that 'deprecation' does not mean we stop new releases. > neutron-lbaas continued to cut releases even after the deprecation is enabled. > IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens > release and it was released till Stein. > neutron-fwaas is being marked as deprecated, so I think we still need > to release it until neutron-fwaas was retired in the master branch. If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. -- Thierry Carrez (ttx) From skaplons at redhat.com Thu Feb 20 11:03:20 2020 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 20 Feb 2020 12:03:20 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> Message-ID: <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> Hi, > On 20 Feb 2020, at 11:33, Thierry Carrez wrote: > > Akihiro Motoki wrote: >> [...] >> I also would like to note that 'deprecation' does not mean we stop new releases. >> neutron-lbaas continued to cut releases even after the deprecation is enabled. >> IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens >> release and it was released till Stein. >> neutron-fwaas is being marked as deprecated, so I think we still need >> to release it until neutron-fwaas was retired in the master branch. > > If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. > > I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. Thx. I though that we will for sure release it in Ussuri still. And it’s good idea to add such note about not releasing for Victoria if there will be still nobody to take care of it. I will add it to the deprecation warning. And in such case, I think that deprecation process which worked for LBaaS and which Nate pointed to, will be good to apply here but in Victoria cycle, not now, right? 
> > -- > Thierry Carrez (ttx) > — Slawek Kaplonski Senior software engineer Red Hat From thierry at openstack.org Thu Feb 20 11:18:48 2020 From: thierry at openstack.org (Thierry Carrez) Date: Thu, 20 Feb 2020 12:18:48 +0100 Subject: [all][neutron][neutron-fwaas] FINAL CALL Maintainers needed In-Reply-To: <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> References: <20191119102615.oq46xojyhoybulna@skaplons-mac> <6C15A980-09DF-4585-BCF5-4F0916F5D0FC@redhat.com> <20200219180218.ahgxnglss3jrvqgp@firewall> <49aa47b1-8fd0-a382-7052-7b30c5ee0ef7@openstack.org> <71C5F9E0-34B2-4A23-948E-C5DDFB569416@redhat.com> Message-ID: <8a360821-bea6-977e-ee87-99b904d1918b@openstack.org> Slawek Kaplonski wrote: > Hi, > >> On 20 Feb 2020, at 11:33, Thierry Carrez wrote: >> >> Akihiro Motoki wrote: >>> [...] >>> I also would like to note that 'deprecation' does not mean we stop new releases. >>> neutron-lbaas continued to cut releases even after the deprecation is enabled. >>> IIRC neutron-lbaas was marked as deprecated Jan 2018 around the queens >>> release and it was released till Stein. >>> neutron-fwaas is being marked as deprecated, so I think we still need >>> to release it until neutron-fwaas was retired in the master branch. >> >> If you release it in Ussuri, then nothing needs to get done governance-wise... it's still considered a deliverable from the Neutron team. >> >> I'd advise you to consider not releasing it in Victoria though, if we are not comfortable with its state and nobody picks it up between now and the victoria-2 milestone. > > Thx. I though that we will for sure release it in Ussuri still. And it’s good idea to add such note about not releasing for Victoria if there will be still nobody to take care of it. I will add it to the deprecation warning. > And in such case, I think that deprecation process which worked for LBaaS and which Nate pointed to, will be good to apply here but in Victoria cycle, not now, right? Yes. -- Thierry From tpb at dyncloud.net Thu Feb 20 12:29:42 2020 From: tpb at dyncloud.net (Tom Barron) Date: Thu, 20 Feb 2020 07:29:42 -0500 Subject: [manila] Core Team additions In-Reply-To: References: Message-ID: <20200220122942.sb3yileepw2qqmb4@barron.net> +1 !!! On 19/02/20 15:44 -0800, Goutham Pacha Ravi wrote: >Hello Zorillas/Stackers, > >I'd like to make propose some core team additions. Earlier in the >cycle [1], I sought contributors who are interested in helping us >maintain and grow manila. I'm happy to report that we had an fairly >enthusiastic response. I'd like to propose two individuals who have >stepped up to join the core maintainers team. Bear with me as I seek >to support my proposals with my personal notes of endorsement: > >Victoria Martinez de la Cruz - Victoria has been contributing to >Manila since it's inception. She has played various roles during this >time and has contributed in significant ways to build this community. >She's been the go-to person to seek reviews and collaborate on for >CephFS integration, python-manilaclient, manila-ui maintenance and >support for the OpenStack client. She has also brought onboard and >mentored multiple interns on the team (Fun fact: She was recognized as >Mentor of Mentors [2] by this community). It gives me great joy that >she agreed to help maintain the project as a core maintainer. > >Carlos Eduardo - Carlos has made significant contributions to Manila >for the past two releases. 
He worked on several feature gaps with the >DHSS=True driver mode, and is now working on graduating experimental >features that the project has been building since the Newton release. >He performs meaningful reviews that drive good design discussions. I >am happy to note that he needed little mentoring to start reviewing >the OpenStack Way [3] - this is a dead give away to me to spot a >dedicated maintainer who cares about growing the community, along with >the project. > >Please give me your +/- 1s for this proposal. > >Thank you, >Goutham Pacha Ravi (gouthamr) > >[1] http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >[2] https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >[3] https://docs.openstack.org/project-team-guide/review-the-openstack-way.html > From zhengyupann at 163.com Thu Feb 20 12:48:42 2020 From: zhengyupann at 163.com (Zhengyu Pan) Date: Thu, 20 Feb 2020 20:48:42 +0800 (CST) Subject: [neutron]After update neutron-lbaas, How to not restart neutron-server to let new neutron-lbaas code work? Message-ID: <22ba60c2.95af.17062a4a75f.Coremail.zhengyupann@163.com> Hi, I modified neutron-lbaas load balancer code to support our special need. I have updated neutron-lbaas code in the controller node. But i don't want to restart neutron-server in case the customer's business was interrupted. How can i let netron-lbaas code work in the case of not restarting neutrn-sever in controller node? -- Best! ZhengYu! -------------- next part -------------- An HTML attachment was scrubbed... URL: From vhariria at redhat.com Thu Feb 20 13:14:04 2020 From: vhariria at redhat.com (Vida Haririan) Date: Thu, 20 Feb 2020 08:14:04 -0500 Subject: [manila] Core Team additions In-Reply-To: <20200220122942.sb3yileepw2qqmb4@barron.net> References: <20200220122942.sb3yileepw2qqmb4@barron.net> Message-ID: <5a365a6d-454d-c2b9-5d3d-eaa3a48b4c1b@redhat.com> +1 :) On 2/20/20 7:29 AM, Tom Barron wrote: > +1 !!! > > On 19/02/20 15:44 -0800, Goutham Pacha Ravi wrote: >> Hello Zorillas/Stackers, >> >> I'd like to make propose some core team additions. Earlier in the >> cycle [1], I sought contributors who are interested in helping us >> maintain and grow manila. I'm happy to report that we had an fairly >> enthusiastic response. I'd like to propose two individuals who have >> stepped up to join the core maintainers team. Bear with me as I seek >> to support my proposals with my personal notes of endorsement: >> >> Victoria Martinez de la Cruz - Victoria has been contributing to >> Manila since it's inception. She has played various roles during this >> time and has contributed in significant ways to build this community. >> She's been the go-to person to seek reviews and collaborate on for >> CephFS integration, python-manilaclient, manila-ui maintenance and >> support for the OpenStack client. She has also brought onboard and >> mentored multiple interns on the team (Fun fact: She was recognized as >> Mentor of Mentors [2] by this community). It gives me great joy that >> she agreed to help maintain the project as a core maintainer. >> >> Carlos Eduardo - Carlos has made significant contributions to Manila >> for the past two releases. He worked on several feature gaps with the >> DHSS=True driver mode, and is now working on graduating experimental >> features that the project has been building since the Newton release. >> He performs meaningful reviews that drive good design discussions. 
I >> am happy to note that he needed little mentoring to start reviewing >> the OpenStack Way [3] - this is a dead give away to me to spot a >> dedicated maintainer who cares about growing the community, along with >> the project. >> >> Please give me your +/- 1s for this proposal. >> >> Thank you, >> Goutham Pacha Ravi (gouthamr) >> >> [1] >> http://lists.openstack.org/pipermail/openstack-discuss/2019-October/009910.html >> [2] >> https://superuser.openstack.org/articles/openstack-community-contributor-awards-recognize-unsung-heroes/ >> [3] >> https://docs.openstack.org/project-team-guide/review-the-openstack-way.html >> > From amotoki at gmail.com Thu Feb 20 14:41:07 2020 From: amotoki at gmail.com (Akihiro Motoki) Date: Thu, 20 Feb 2020 23:41:07 +0900 Subject: [neutron] networking-bagpipe gate failure Message-ID: Hi, networking-bagpipe gate is broken now due to its dependencies. The situation is complicated so I am summarizing it and exploring the right solution. # what happens now Examples of the gate failure are [1] and [2], and the exact failure is found at [3]. It fails due to horizon dependency from networking-bgpvpn train release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master (horizon==18.0.0). The neutron team has not released a beta for ussuri, so requirements.txt tries to install networking-bgpvpn train which has capping of horizon version. The capping of horizon in networking-bgpvpn was introduced in [6] and we cut a release so it started to cause the failure like this. We've explored several workarounds to avoid it including specifying horizon in networking-bagpipe, specify horizon in required-projects in networking-bagpipe and dropping networking-bgpvpn in requirements.txt in networking-bagpipe, but all of them do not work. # possible solutions I am thinking two options. The one is to cut a beta release in neutron stadium for ussuri. The other is to uncap horizon in networking-bgpvpn train and release it. I believe both work but the first one would be better as it is time to release beta for Ussuri. Discussing it in the IRC, we are planning to release beta soon. (ovn-octavia-provider is also waiting for a beta release of neutron.) # Side notes Capping dependencies in stable branches is not what we usually do. Why we don't do this was discussed in the mailing list thread [4] and it is highlighted in [5]. Thanks, Akihiro Motoki (irc: amotoki) [1] https://review.opendev.org/#/c/708829/ [2] https://review.opendev.org/#/c/703949/ [3] https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html [6] https://review.opendev.org/#/c/699456/ From isanjayk5 at gmail.com Thu Feb 20 14:59:24 2020 From: isanjayk5 at gmail.com (Sanjay K) Date: Thu, 20 Feb 2020 20:29:24 +0530 Subject: [openstack-helm][neutron] neutron-dhcp-agent stuck in Init state Message-ID: Hi All, I am trying to deploy openstack services using helm charts in my k8s cluster. My cluster is having 4 nodes - 1 k8s master, 1 controller and 2 compute nodes. I am using helm v3.0.3 for openstack stein. I have already deployed the below charts in my cluster successfully in the respective order - ingress, mariadb, rabbitmq, memcached, keystone, glance and then neutron. For mariadb, rabbitmq and glance, I am using PV and PVC in my local storage while deploying. All are ok until I deploy neutron. 
After deployment, all neutron pods works well except *neutron-dhcp-agent-default pod *which remains in Init:0/2 state. The pod description shows it has dependencies shown as - DEPENDENCY_SERVICE: openstack:rabbitmq,openstack:neutron-server,openstack:nova-api DEPENDENCY_JOBS: neutron-rabbit-init pods state- NAME READY STATUS RESTARTS AGE glance-api-b575ff7c-qp7wt 1/1 Running 0 132m glance-bootstrap-zjtw9 0/1 Completed 0 132m glance-db-init-86xgk 0/1 Completed 0 132m glance-db-sync-z9p9l 0/1 Completed 0 132m glance-ks-endpoints-wwm59 0/3 Completed 0 132m glance-ks-service-vs92t 0/1 Completed 0 132m glance-ks-user-znn88 0/1 Completed 0 132m glance-metadefs-load-g685b 0/1 Completed 0 132m glance-rabbit-init-648w8 0/1 Completed 0 132m glance-storage-init-jdlx2 0/1 Completed 0 132m ingress-8f98f7d96-llkwl 1/1 Running 0 3h42m ingress-error-pages-84647d8fcb-6j6bz 1/1 Running 0 3h42m keystone-api-5785f4787-lz296 1/1 Running 0 154m keystone-bootstrap-6tcdx 0/1 Completed 0 154m keystone-credential-setup-d9js5 0/1 Completed 0 154m keystone-db-init-zsgbj 0/1 Completed 0 154m keystone-db-sync-z8hfk 0/1 Completed 0 154m keystone-domain-manage-nk48h 0/1 Completed 0 154m keystone-fernet-setup-4pzlj 0/1 Completed 0 154m keystone-rabbit-init-mlpsn 0/1 Completed 0 154m mariadb-ingress-669c67b6b5-w8jxb 1/1 Running 0 3h32m mariadb-ingress-error-pages-d77467d69-9njgt 1/1 Running 0 3h32m mariadb-server-0 1/1 Running 0 3h32m memcached-memcached-7b49f48865-tf6xc 1/1 Running 0 3h11m neutron-db-init-zpl87 0/1 Completed 0 109m neutron-db-sync-plsnc 0/1 Completed 0 109m neutron-dhcp-agent-default-sf76z 0/1 Init:0/2 0 109m neutron-ks-endpoints-jgmdv 0/3 Completed 0 109m neutron-ks-service-nr5w2 0/1 Completed 0 109m neutron-ks-user-lw4p4 0/1 Completed 0 109m neutron-l3-agent-default-lnzk8 1/1 Running 0 109m neutron-lb-agent-default-44v9x 1/1 Running 0 109m neutron-lb-agent-default-94xw4 1/1 Running 0 109m neutron-lb-agent-default-whms2 1/1 Running 0 109m neutron-metadata-agent-default-7rbrh 1/1 Running 0 109m neutron-rabbit-init-qkhqm 0/1 Completed 0 109m neutron-server-964fcffcb-gvv4k 1/1 Running 0 109m rabbitmq-cluster-wait-vjbzq 0/1 Completed 0 3h18m rabbitmq-rabbitmq-0 1/1 Running 0 3h18m Please guide me how to resolve this neutron-dhcp-agent pod so that I can deploy other services after that. I have plan to deploy libvirt, nova, cinder, horizon and ceilometer charts into my cluster. thank you for your help and support. best regards, Sanjay -------------- next part -------------- An HTML attachment was scrubbed... URL: From openstack at fried.cc Thu Feb 20 15:12:13 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:12:13 -0600 Subject: [nova] Ussuri feature scrub In-Reply-To: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> References: <9bd28489-2fb2-11a8-af44-4e3e42215d1c@fried.cc> Message-ID: <0ffb5d3d-b8bc-4d2c-ef73-0c3ac2558e08@fried.cc> > I would like to see us cut 5-ish of the 30. We agreed in the nova meeting today to drop this idea and just go with the existing "process" [1]. As of today, we have 29 approved blueprints, and one eligible for exception if the spec can be approved by EOB tomorrow. efried [1] http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-131 From james.slagle at gmail.com Thu Feb 20 15:16:38 2020 From: james.slagle at gmail.com (James Slagle) Date: Thu, 20 Feb 2020 10:16:38 -0500 Subject: [tripleo] proposal to make Kevin Carter core In-Reply-To: References: Message-ID: +1. Thanks Kevin for your contributions and leadership. 
On Wed, Feb 19, 2020 at 5:32 PM Wesley Hayutin wrote: > > Greetings, > > I'm sure by now you have all seen the contributions from Kevin to the tripleo-ansible project, transformation, mistral to ansible etc. In his short tenure in TripleO Kevin has accomplished a lot and is the number #3 contributor to tripleo for the ussuri cycle! > > First of all very well done, and thank you for all the contributions! > Secondly, I'd like to propose Kevin to core.. > > Please vote by replying to this email. > Thank you > > +1 -- -- James Slagle -- From openstack at fried.cc Thu Feb 20 15:19:56 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:19:56 -0600 Subject: [nova][sfe] Support re-configure deleted_on_termination in server In-Reply-To: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> References: <63aa5ce6b60e4be7a0fff11abfce47e4@inspur.com> Message-ID: <413aa140-feba-072e-ddb8-74d73f1e825a@fried.cc> We discussed this in today's Nova meeting. We agreed to grant the exception if those can be resolved and the spec can get two +2s by EOB tomorrow (Friday 20200221). http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-62 On 2/17/20 8:57 PM, Brin Zhang(张百林) wrote: > Hi, nova: > >        We would like to have a spec freeze exception for the spec: > Support re-configure deleted_on_termination in server [1], and it’s PoC > code in [2] > >        > > I will attend the nova meeting on February 20 2020 1400 UTCas much as > possible. > >   > >          [1] https://review.opendev.org/#/c/580336/ > >          [2] https://review.opendev.org/#/c/693828/ > >   > > brinzhang > From openstack at fried.cc Thu Feb 20 15:18:22 2020 From: openstack at fried.cc (Eric Fried) Date: Thu, 20 Feb 2020 09:18:22 -0600 Subject: [nova][sfe] Support volume local cache In-Reply-To: References: Message-ID: <1d10d684-2f25-45f5-b3d3-59536aa0f863@fried.cc> We agreed in today's Nova meeting to grant this exception. http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-20-14.00.log.html#l-48 On 2/17/20 7:18 PM, Fang, Liang A wrote: > Hi > >   > > We would like to have a spec freeze exception for the spec: Support > volume local cache [1]. This is part of cross project contribution, with > another spec in cinder [2]. > >   > > I will attend the Nova meeting on February 20 2020 1400 UTC. > >   > > [1] https://review.opendev.org/#/c/689070/ > > [2] https://review.opendev.org/#/c/684556/ > >   > > Regards > > Liang > >   > From gmann at ghanshyammann.com Thu Feb 20 15:35:13 2020 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 20 Feb 2020 09:35:13 -0600 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: <170633d1ad4.116c4b33a148521.5355733457340253139@ghanshyammann.com> ---- On Wed, 19 Feb 2020 11:43:57 -0600 Eric Fried wrote ---- > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official day will be March 31st (though some portion of my > remaining time will be vacation -- TBD). Unfortunately this means I will > need to abdicate my position as Nova PTL mid-cycle. As noted in the last > meeting [1], I'm calling for a volunteer to take over for the remainder > of Ussuri. Feel free to approach me privately if you prefer. Thanks, Eric for all your contribution around OpenStack not just Nova. 
It was great working with you. You have good leadership skills which helped the community a lot. Bets of luck for your new position. -gmann > > Thanks, > efried > > [1] > http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-02-13-21.00.log.html#l-180 > > From elod.illes at est.tech Thu Feb 20 15:41:20 2020 From: elod.illes at est.tech (=?utf-8?B?RWzDtWQgSWxsw6lz?=) Date: Thu, 20 Feb 2020 15:41:20 +0000 Subject: [neutron] networking-bagpipe gate failure In-Reply-To: References: Message-ID: Thanks Akihiro for the summary! I think "possible solution 1" could work. Nevertheless I've pushed a revert [7] for the capping patch [6], since that is now completely unnecessary (given that horizon is added in upper constraints of Train). It is good to communicate the proper way of fixing these issues [4], especially since that changed recently as in the past e.g. neutron and horizon were not allowed to be added to upper-constraints [8]. Thanks, Előd [7] https://review.opendev.org/#/c/708865/ [8] https://review.opendev.org/#/c/631300/ On 2020. 02. 20. 15:41, Akihiro Motoki wrote: > Hi, > > networking-bagpipe gate is broken now due to its dependencies. > The situation is complicated so I am summarizing it and exploring the > right solution. > > # what happens now > > Examples of the gate failure are [1] and [2], and the exact failure is > found at [3]. > It fails due to horizon dependency from networking-bgpvpn train > release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master > (horizon==18.0.0). > The neutron team has not released a beta for ussuri, so > requirements.txt tries to install networking-bgpvpn train which has > capping of horizon version. > The capping of horizon in networking-bgpvpn was introduced in [6] and > we cut a release so it started to cause the failure like this. > > We've explored several workarounds to avoid it including specifying > horizon in networking-bagpipe, specify horizon in required-projects in > networking-bagpipe and dropping networking-bgpvpn in requirements.txt > in networking-bagpipe, but all of them do not work. > > # possible solutions > > I am thinking two options. > The one is to cut a beta release in neutron stadium for ussuri. > The other is to uncap horizon in networking-bgpvpn train and release it. > > I believe both work but the first one would be better as it is time to > release beta for Ussuri. > Discussing it in the IRC, we are planning to release beta soon. > (ovn-octavia-provider is also waiting for a beta release of neutron.) > > # Side notes > > Capping dependencies in stable branches is not what we usually do. > Why we don't do this was discussed in the mailing list thread [4] and > it is highlighted in [5]. > > Thanks, > Akihiro Motoki (irc: amotoki) > > [1] https://review.opendev.org/#/c/708829/ > [2] https://review.opendev.org/#/c/703949/ > [3] https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 > [4] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 > [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html > [6] https://review.opendev.org/#/c/699456/ > From katonalala at gmail.com Thu Feb 20 15:41:49 2020 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 20 Feb 2020 16:41:49 +0100 Subject: [neutron] networking-bagpipe gate failure In-Reply-To: References: Message-ID: Hi Akihiro, Thanks for summarizing. Just to make it written, networking-odl gate is suffering from the same problem. 
Regards Lajos Akihiro Motoki ezt írta (időpont: 2020. febr. 20., Cs, 15:45): > Hi, > > networking-bagpipe gate is broken now due to its dependencies. > The situation is complicated so I am summarizing it and exploring the > right solution. > > # what happens now > > Examples of the gate failure are [1] and [2], and the exact failure is > found at [3]. > It fails due to horizon dependency from networking-bgpvpn train > release (horizon>=14.0.0<17.0.0) and the upper-constraints.txt master > (horizon==18.0.0). > The neutron team has not released a beta for ussuri, so > requirements.txt tries to install networking-bgpvpn train which has > capping of horizon version. > The capping of horizon in networking-bgpvpn was introduced in [6] and > we cut a release so it started to cause the failure like this. > > We've explored several workarounds to avoid it including specifying > horizon in networking-bagpipe, specify horizon in required-projects in > networking-bagpipe and dropping networking-bgpvpn in requirements.txt > in networking-bagpipe, but all of them do not work. > > # possible solutions > > I am thinking two options. > The one is to cut a beta release in neutron stadium for ussuri. > The other is to uncap horizon in networking-bgpvpn train and release it. > > I believe both work but the first one would be better as it is time to > release beta for Ussuri. > Discussing it in the IRC, we are planning to release beta soon. > (ovn-octavia-provider is also waiting for a beta release of neutron.) > > # Side notes > > Capping dependencies in stable branches is not what we usually do. > Why we don't do this was discussed in the mailing list thread [4] and > it is highlighted in [5]. > > Thanks, > Akihiro Motoki (irc: amotoki) > > [1] https://review.opendev.org/#/c/708829/ > [2] https://review.opendev.org/#/c/703949/ > [3] > https://zuul.opendev.org/t/openstack/build/3bc82305168b4d8cad7e4964c7207e00/log/job-output.txt#1507 > [4] > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/thread.html#11229 > [5] > http://lists.openstack.org/pipermail/openstack-discuss/2019-November/011283.html > [6] https://review.opendev.org/#/c/699456/ > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jungleboyj at gmail.com Thu Feb 20 16:32:54 2020 From: jungleboyj at gmail.com (Jay Bryant) Date: Thu, 20 Feb 2020 10:32:54 -0600 Subject: [all][nova][ptl] Another one bites the dust In-Reply-To: References: Message-ID: Eric, Sorry that you are leaving the community.  Thanks for all you have done for OpenStack and Nova over the years.  It has been good to work with you. Best of luck in your future endeavors! Jay On 2/19/2020 11:43 AM, Eric Fried wrote: > Dear OpenStack- > > Due to circumstances beyond my control, my job responsibilities will be > changing shortly and I will be leaving the community. I have enjoyed my > time here immensely; I have never loved a job, my colleagues, or the > tools of the trade more than I have here. > > My last official da