From massimo.sgaravatto at gmail.com  Fri Jul  1 06:04:30 2022
From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto)
Date: Fri, 1 Jul 2022 08:04:30 +0200
Subject: [glance][ops] [nova] Disabling an image
In-Reply-To:
References: <259825a80b6cce7df2743acf6792ad4c598019ab.camel@redhat.com>
Message-ID:

Converting the image from public to private seems indeed a good idea.
Thanks a lot for the hint!

Cheers, Massimo

On Thu, Jun 30, 2022 at 2:56 PM Sean Mooney wrote:

> On Thu, 2022-06-30 at 14:37 +0200, Massimo Sgaravatto wrote:
> > No: I really mean resize
> I guess for resize we need to copy the backing file, which we presumably
> do by re-downloading the original image. It could technically be copied
> from the source host instead, but I think if you change the visibility
> rather than blocking downloads, that would hide it from people looking
> to create new VMs with it in the image list, while allowing it to
> continue to be used by existing instances for rebuild and resize.
> >
> > On Thu, Jun 30, 2022 at 1:42 PM Sean Mooney wrote:
> >
> > > On Thu, 2022-06-30 at 10:09 +0200, Massimo Sgaravatto wrote:
> > > > Dear all
> > > >
> > > > What is the blessed method to avoid using an image for new virtual
> > > > machines without causing problems for existing instances using that
> > > > image?
> > > >
> > > > If I deactivate the image, I then have problems resizing instances
> > > > using that image [*]: it claims that image download is forbidden
> > > > since the image was deactivated
> > > I think you mean rebuilding the instance, not resizing, right?
> > > Resize should not need the image, since it should use the image info
> > > we embed in the instance_system_metadata table in Nova.
> > >
> > > I'm not sure if there is a blessed way, but I probably would have
> > > changed the visibility to private.
> > >
> > > > Thanks, Massimo
> > > >
> > > > [*]

| fault | {'code': 500, 'created': '2022-06-30T07:57:30Z', 'message': 'Not authorized for image dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.', 'details': 'Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, in download
    context, 2, 'data', args=(image_id,))
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, in call
    result = getattr(controller, method)(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", line 670, in inner
    return RequestIdProxy(wrapped(*args, **kwargs))
  File "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line 255, in data
    resp, body = self.http_client.get(url)
  File "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 395, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line 380, in request
    return self._handle_response(resp)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line 120, in _handle_response
    raise exc.from_response(resp, resp.content)
glanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: The requested image has been deactivated. Image data download is forbidden.

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 201, in decorated_function
    return function(self, context, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5950, in finish_resize
    context, instance, migration)
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5932, in finish_resize
    migration, request_spec)
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5966, in _finish_resize_helper
    request_spec)
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5902, in _finish_resize
    self._set_instance_info(instance, old_flavor)
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, in __exit__
    self.force_reraise()
  File "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
    raise self.value
  File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line 5890, in _finish_resize
    block_device_info, power_on)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 11343, in finish_migration
    fallback_from_host=migration.source_compute)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4703, in _create_image
    injection_info, fallback_from_host)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 4831, in _create_and_inject_local_root
    instance, size, fallback_from_host)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line 10625, in _try_fetch_image_cache
    trusted_certs=instance.trusted_certs)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", line 275, in cache
    *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", line 638, in create_image
    prepare_template(target=base, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", line 391, in inner
    return f(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", line 271, in fetch_func_sync
    fetch_func(target=target, *args, **kwargs)
  File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/utils.py", line 395, in fetch_image
    images.fetch_to_raw(context, image_id, target, trusted_certs)
  File "/usr/lib/python3.6/site-packages/nova/virt/images.py", line 115, in fetch_to_raw
    fetch(context, image_href, path_tmp, trusted_certs)
  File "/usr/lib/python3.6/site-packages/nova/virt/images.py", line 106, in fetch
    trusted_certs=trusted_certs)
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1300, in download
    trusted_certs=trusted_certs)
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 379, in download
    _reraise_translated_image_exception(image_id)
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1031, in _reraise_translated_image_exception
    raise new_exc.with_traceback(exc_trace)
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, in download
    context, 2, 'data', args=(image_id,))
  File "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, in call
    result = getattr(controller, method)(*args, **kwargs)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", line 670, in inner
    return RequestIdProxy(wrapped(*args, **kwargs))
  File "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line 255, in data
    resp, body = self.http_client.get(url)
  File "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line 395, in get
    return self.request(url, 'GET', **kwargs)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line 380, in request
    return self._handle_response(resp)
  File "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line 120, in _handle_response
    raise exc.from_response(resp, resp.content)
nova.exception.ImageNotAuthorized: Not authorized for image dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.'} |

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From mrunge at matthias-runge.de  Fri Jul  1 06:46:23 2022
From: mrunge at matthias-runge.de (Matthias Runge)
Date: Fri, 1 Jul 2022 08:46:23 +0200
Subject: [all][TC] Stats about rechecking patches without reason given
In-Reply-To:
References: <2224274.SIoALSJNh4@p1> <20220630130620.i5h47yddyxdypefq@yuggoth.org> <95c03dc437aef52231c2283c1d783ae8bbc99ff1.camel@redhat.com> <2709079.EGI8z0umEv@p1>
Message-ID: <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de>

On 30/06/2022 20:06, Dan Smith wrote:
>>> Or vice versa, if there are 20 rechecks for 2 patches, even if neither
>>> of them are bare, it's still weird and something worth reconsidering
>>> from a project perspective.
>>
>> I think the idea is to create a culture of debugging and record
>> keeping. Yes, I would expect after a few rechecks that maybe the root
>> causes would be addressed in this case, but the first step in doing
>> that is identifying the problem and making note of it.
>
> Right, that is the goal. Asking for a message at least sets the
> expectation that people are looking at the reasons for the fails. Just
> because they don't doesn't mean they aren't, or don't care, but I think
> it helps reinforce the desired behavior. If nothing else, it also helps
> observers realize "huh, I've seen a bunch of rechecks about $reason
> lately, maybe we should look at that".

So, what happens with the script when you add two comments, the first saying "network error during package install, let's try again" and the next saying just "recheck"? In my understanding, that would count (by the script) as a recheck without a reason given. Maybe it's worth documenting how to give better proof that someone looked into the logs and tried to get to the root cause of a previous CI failure?

The other issue I see here is that with CI being flaky, the chances of passing seem to get better simply by rechecking.

An extreme example:
https://review.opendev.org/c/openstack/tripleo-heat-templates/+/844519
required 8 rechecks, with no changes in the patch itself and no dependencies. The CI always failed in different checks.

Matthias

From christian.rohmann at inovex.de  Fri Jul  1 07:10:46 2022
From: christian.rohmann at inovex.de (Christian Rohmann)
Date: Fri, 1 Jul 2022 09:10:46 +0200
Subject: [designate] How to avoid NXDOMAIN or stale data during cold start of a (new) machine
In-Reply-To:
References: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de>
Message-ID: <81a3d69e-f96b-7607-6625-06fb465cd8f9@inovex.de>

On 07/06/2022 02:04, Michael Johnson wrote:
> There are two ways zones can be resynced:
> 1. Using the "designate-manage pool update" command. This will force
> an update/recreate of all of the zones.
> 2. When a zone is in ERROR or PENDING for too long, the
> WorkerPeriodicRecovery task in producer will attempt to repair the
> zone.
>
> I don't think there is a periodic task that checks the BIND instances
> for missing content at the moment. Adding one would have to be done
> very carefully as it would be easy to get into race conditions when
> new zones are being created and deployed.

Just as an update: when playing with this issue of a cold start with no zones, where "designate-manage pool update" was not fixing it, we found that somebody had just run into the same issue (https://bugs.launchpad.net/designate/+bug/1958409/) and proposed a fix (rndc modzone -> rndc addzone). With this patch, "pool update" does cause all the missing zones to be created in a BIND instance that has either lost its zones or has just been added to the pool.

Regards

Christian

From amonster369 at gmail.com  Fri Jul  1 09:26:58 2022
From: amonster369 at gmail.com (A Monster)
Date: Fri, 1 Jul 2022 10:26:58 +0100
Subject: Problem while launching an instance directly from an image "Volume did not finish being created even after we waited 203 seconds or 61 attempts"
Message-ID:

I've deployed OpenStack using Kolla. When I try to launch an instance directly from any image, after some time waiting on Block Device Mapping I get the following error:

> Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume
> 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even
> after we waited 203 seconds or 61 attempts. And its status is creating.

I've tried increasing block_device_allocate_retries=400 and block_device_allocate_retries_interval=3, however I keep getting the same error. But when I create a volume from an image and then launch an instance from that same volume, it works just fine.

Any suggestions for this issue?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
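A quick way to see where the create is stalling is to watch the volume itself and then search the Cinder logs for its ID; a minimal sketch (the log paths assume a default kolla-ansible layout):

    # Watch the volume while Nova is waiting on it:
    openstack volume show 01739e82-9e66-41f7-be74-dfbbdcd6746e -c status -c created_at

    # On the controller, find the error that left it stuck in 'creating'
    # (paths are the kolla-ansible defaults):
    grep 01739e82-9e66-41f7-be74-dfbbdcd6746e \
        /var/log/kolla/cinder/cinder-api.log \
        /var/log/kolla/cinder/cinder-scheduler.log \
        /var/log/kolla/cinder/cinder-volume.log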
From mmilan2006 at gmail.com  Fri Jul  1 13:36:30 2022
From: mmilan2006 at gmail.com (Vaibhav)
Date: Fri, 1 Jul 2022 19:06:30 +0530
Subject: Zun connector for persistent shared files system Manila
Message-ID:

Hi,

I am using Zun for running and managing containers. I also deployed Cinder for persistent storage, and it is working fine.

I want my Manila shares to be mounted on containers managed by Zun. I can see the Fuxi project and its driver for this, but it is discontinued now.

With Cinder, only one container can use a storage volume at a time. If I want a shared file system to be mounted on multiple containers simultaneously, it is not possible with Cinder.

Is there any alternative to Fuxi? Is there any other mechanism to use Docker volume support for NFS, as shown in the link below?

https://docs.docker.com/storage/volumes/

Please advise and give a suggestion.

Regards,
Vaibhav

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
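For what it's worth, plain Docker can mount an NFS export through its built-in local volume driver, which covers the shared-filesystem case outside of Zun's own volume handling; a rough sketch (the server address and export path are placeholders, and Zun itself will not track a volume created this way):

    # Create a volume backed by an NFS export (placeholders: nfs.example.org, /shares/data):
    docker volume create --driver local \
        --opt type=nfs \
        --opt o=addr=nfs.example.org,rw \
        --opt device=:/shares/data \
        shared_data

    # The same volume can then be mounted read-write by several containers at once:
    docker run -d -v shared_data:/mnt/share alpine sleep infinity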
From stephenfin at redhat.com  Fri Jul  1 15:31:16 2022
From: stephenfin at redhat.com (Stephen Finucane)
Date: Fri, 01 Jul 2022 16:31:16 +0100
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
Message-ID: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>

tl;dr: Who should be able to manage project-specific stable-maint groups on Gerrit: members of the project-specific stable-maint group itself, or members of stable-maint-core?

A recent discussion on #openstack-sdks highlighted some discrepancies in the ownership of various project-specific "stable-maint" groups on Gerrit. As a reminder, any project that specifies "stable:follows-policy" is required to follow the stable branch policy (surprise!). This is documented rather well at [1]. We expect people who have the ability to merge patches to stable branches to understand and apply this policy. Initially the only people that could do this were people added to a central Gerrit group called "stable-maint-core". However, in recent years this responsibility has been devolved to the projects themselves. Each project with a stable branch now has a project-specific stable maintenance Gerrit group called PROJECTNAME-stable-maint (e.g. nova-stable-maint [2]).

The issue here is who should *own* these groups. The owner of a Gerrit group is the only one that's able to modify it. In general, the owner of a Gerrit group is the group itself, so for example the owner of python-openstackclient-core is python-openstackclient-core [3]. This means that if you're a member of the group then you can add or remove members, rename the group, set a description, etc. However, _most_ PROJECTNAME-stable-maint groups are owned by the old 'stable-maint-core' group, meaning only members of this global group can modify the project-specific groups. I say _most_ because this isn't applied across the board. The following projects are owned by 'stable-maint-core':

* barbican-stable-maint
* ceilometer-stable-maint
* cinder-stable-maint
* designate-stable-maint
* glance-stable-maint
* heat-stable-maint
* horizon-stable-maint
* ironic-stable-maint
* keystone-stable-maint
* manila-stable-maint
* neutron-stable-maint
* nova-stable-maint
* oslo-stable-maint
* python-openstackclient-stable-maint
* sahara-stable-maint
* swift-stable-maint
* trove-stable-maint
* zaqar-stable-maint

However, the following stable groups "own themselves":

* ansible-collections-openstack-stable-maint
* aodh-stable-maint
* congress-stable-maint
* freezer-stable-maint
* karbor-stable-maint
* mistral-stable-maint
* neutron-dynamic-routing-stable-maint
* neutron-lib-stable-maint
* neutron-vpnaas-stable-maint
* octavia-stable-maint
* openstacksdk-stable-maint
* oslo-vmware-stable-maint
* ovn-octavia-provider-stable-maint
* panko-stable-maint
* placement-stable-maint
* sahara-dashboard-stable-maint
* senlin-dashboard-stable-maint
* senlin-stable-maint
* telemetry-stable-maint

This brings me to my question (finally!): do we want to resolve this discrepancy, and if so, how? Personally, I would lean towards delegating this entirely to the projects, but I don't know if this requires TC involvement. If we want to insist on the stable-maint-core group owning all PROJECT-stable-maint groups, then we have a lot of cleanup to do!

Cheers,
Stephen

PS: This might be a good moment to do a cleanup of members of the various stable groups who have since moved on...

[1] https://docs.openstack.org/project-team-guide/stable-branches.html
[2] https://review.opendev.org/admin/groups/21ce6c287ea33809980b2dec53915b07830cdb11
[3] https://review.opendev.org/admin/groups/aa0f197fcfbd4fcebeff921567ed3b48cd330a4c

From smooney at redhat.com  Fri Jul  1 16:15:36 2022
From: smooney at redhat.com (Sean Mooney)
Date: Fri, 01 Jul 2022 17:15:36 +0100
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
Message-ID: <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com>

On Fri, 2022-07-01 at 16:31 +0100, Stephen Finucane wrote:
> tl;dr: Who should be able to manage project-specific stable-maint groups on
> Gerrit: members of the project-specific stable-maint group itself, or
> members of stable-maint-core?
> [...]
> This brings me to my question (finally!): do we want to resolve this
> discrepancy, and if so, how? Personally, I would lean towards delegating
> this entirely to the projects, but I don't know if this requires TC
> involvement. If we want to insist on the stable-maint-core group owning all
> PROJECT-stable-maint groups, then we have a lot of cleanup to do!

I would probably delegate them to be self-owned too.

One thing that might be worth looking at too: the convention when StackForge was still a thing was to create two groups, PROJECT-core and PROJECT-release, and the release group was the owner of the stable branch instead of PROJECT-stable-maint. The release group is mainly for pushing signed tags, but it was often the same group of people doing that as doing stable backports. Kuryr, for example, still uses kuryr-release for stable branch merge rights, as does Murano:

https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/kuryr.config

I don't see it used widely currently, so it may be a non-issue, but I had thought at one point -release was encouraged instead of stable-maint when the Neutron stadium etc. was being created.

https://codesearch.opendev.org/?q=-release&i=nope&literal=nope&files=gerrit%2Facls%2Fopenstack&excludeFiles=&repos=

In general I suspect that there are few uses of -release for stable branches for projects under official governance.
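For anyone unfamiliar with how this is wired up, the branch rights live in each project's ACL file in openstack/project-config; an illustrative fragment (the group name is a placeholder, and the exact permission lines should be checked against a real ACL such as the kuryr.config linked above):

    [access "refs/heads/stable/*"]
    abandon = group Change Owner
    abandon = group PROJECT-stable-maint
    label-Code-Review = -2..+2 group PROJECT-stable-maint
    label-Workflow = -1..+1 group PROJECT-stable-maint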
From gmann at ghanshyammann.com  Fri Jul  1 16:17:26 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Jul 2022 11:17:26 -0500
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
Message-ID: <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com>

 ---- On Fri, 01 Jul 2022 10:31:16 -0500 Stephen Finucane wrote ---
 > tl;dr: Who should be able to manage project-specific stable-maint groups
 > on Gerrit: members of the project-specific stable-maint group itself, or
 > members of stable-maint-core?
 > [...]
 > This brings me to my question (finally!): do we want to resolve this
 > discrepancy, and if so, how? Personally, I would lean towards delegating
 > this entirely to the projects, but I don't know if this requires TC
 > involvement. [...]

Your understanding is right: it is delegated to the projects entirely.

In the Xena cycle, the TC passed a resolution to decentralize the stable branch core teams to the projects:
- https://governance.openstack.org/tc/resolutions/20210923-stable-core-team.html

and we also updated the project-team-guide to have projects manage their project-specific stable core group:
- https://review.opendev.org/c/openstack/project-team-guide/+/834794

The group is owned by the project, and they can manage it the same way they manage the master branch core group.

-gmann
From gmann at ghanshyammann.com  Fri Jul  1 16:23:05 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Jul 2022 11:23:05 -0500
Subject: [all][TC] Stats about rechecking patches without reason given
In-Reply-To: <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de>
References: <2224274.SIoALSJNh4@p1> <20220630130620.i5h47yddyxdypefq@yuggoth.org> <95c03dc437aef52231c2283c1d783ae8bbc99ff1.camel@redhat.com> <2709079.EGI8z0umEv@p1> <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de>
Message-ID: <181ba92b460.1112f627621801.1622041958884190146@ghanshyammann.com>

 ---- On Fri, 01 Jul 2022 01:46:23 -0500 Matthias Runge wrote ---
 > On 30/06/2022 20:06, Dan Smith wrote:
 > > Right, that is the goal. Asking for a message at least sets the
 > > expectation that people are looking at the reasons for the fails. [...]
 >
 > So, what happens with the script when you add two comments, the first
 > saying "network error during package install, let's try again" and the
 > next saying just "recheck"?

In this case, you can always write "recheck network error during package install, let's try again", or if you have already added lengthy text about the failure and then want to recheck, you can add a one-line summary with the recheck. The overall idea is not to literally count the bare rechecks but to build a habit among us of looking at the failure before we just recheck.

 > In my understanding, that would count (by the script) as a recheck
 > without a reason given. Maybe it's worth documenting how to give better
 > proof that someone looked into the logs and tried to get to the root
 > cause of a previous CI failure?

I think Dan has written a nice document about it, including how to debug the failure:
- https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures

We welcome everyone to extend it with more detail or examples if any project-specific case is not covered.

-gmann

 > The other issue I see here is that with CI being flaky, the chances of
 > passing seem to get better simply by rechecking.
 >
 > An extreme example:
 > https://review.opendev.org/c/openstack/tripleo-heat-templates/+/844519
 > required 8 rechecks, with no changes in the patch itself and no
 > dependencies. The CI always failed in different checks.
 >
 > Matthias
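To make the convention concrete, the difference being measured is just the comment text; a made-up example (Zuul triggers on the "recheck" keyword either way, the rest of the text is for humans and the stats):

    # bare recheck, what the stats count against a project:
    recheck

    # recheck with a reason, what the policy asks for:
    recheck tempest-full failed with HTTP 503 from the package mirror; unrelated to this patch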
From fungi at yuggoth.org  Fri Jul  1 17:40:24 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 1 Jul 2022 17:40:24 +0000
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com>
Message-ID: <20220701174024.o7z7z73yslpju7ep@yuggoth.org>

On 2022-07-01 17:15:36 +0100 (+0100), Sean Mooney wrote:
[...]
> I had thought at one point -release was encouraged instead of
> stable-maint when the Neutron stadium etc. was being created.
[...]

Perhaps lost to the annals of time, but these were the -drivers groups, most of which got renamed to -release.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From fungi at yuggoth.org  Fri Jul  1 17:41:57 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 1 Jul 2022 17:41:57 +0000
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com>
Message-ID: <20220701174157.dt655llarwntyuh7@yuggoth.org>

On 2022-07-01 11:17:26 -0500 (-0500), Ghanshyam Mann wrote:
[...]
> In the Xena cycle, the TC passed a resolution to decentralize the
> stable branch core teams to the projects
> - https://governance.openstack.org/tc/resolutions/20210923-stable-core-team.html
>
> and we also updated the project-team-guide to have projects manage
> their project-specific stable core group
> - https://review.opendev.org/c/openstack/project-team-guide/+/834794
>
> The group is owned by the project, and they can manage it the same
> way they manage the master branch core group.
[...]

Except that they aren't, at least not in any practical sense, and that's what was missed. Sounds like I'm free to make the groups in Gerrit all be self-owned?
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From gmann at ghanshyammann.com  Fri Jul  1 18:30:37 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Jul 2022 13:30:37 -0500
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <20220701174157.dt655llarwntyuh7@yuggoth.org>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com> <20220701174157.dt655llarwntyuh7@yuggoth.org>
Message-ID: <181bb077700.c47b047425771.15694433784435277@ghanshyammann.com>

 ---- On Fri, 01 Jul 2022 12:41:57 -0500 Jeremy Stanley wrote ---
 > On 2022-07-01 11:17:26 -0500 (-0500), Ghanshyam Mann wrote:
 > [...]
 > Except that they aren't, at least not in any practical sense, and
 > that's what was missed. Sounds like I'm free to make the groups in
 > Gerrit all be self-owned?

Yes, please. I think we thought there was no such restriction on those groups until Stephen brought this up here.

-gmann

 > --
 > Jeremy Stanley

From geguileo at redhat.com  Fri Jul  1 18:55:18 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Fri, 1 Jul 2022 20:55:18 +0200
Subject: [cinder] Spec Freeze Exception Request
Message-ID: <20220701185518.6cid4paqrsnxnq6a@localhost>

Hi,

I would like to request a spec freeze exception for the new Cinder Quota System spec [1].

The spec changes required to implement the second quota driver, as agreed at the PTG/mid-cycle, turned out to be non-trivial.

The latest spec update I just pushed contains considerable changes:

- General improvements to existing sections to increase readability.
- A description of the additional driver and the reasons why we decided to implement it.
- Spelling out, throughout the spec, the similarities and differences of both drivers.
- A change in the tracking of reservations to accommodate the new driver.
- A description of how switching from one driver to the other would work.
- An updated driver interface to accommodate the particularities of the new driver.
- An updated performance section with a very brief summary of the performance tests done with a code prototype.
- Updated phases of the effort as well as the work items.

Cheers,
Gorka.

[1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693

From gmann at ghanshyammann.com  Fri Jul  1 19:02:05 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Jul 2022 14:02:05 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 01 July 2022: Reading: 5 min
Message-ID: <181bb244781.c0271e6626303.1247651419629388523@ghanshyammann.com>

Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* We had this week's meeting on 30 June. Most of the meeting discussions are summarized in this email. Full meeting logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-06-30-15.00.log.html
* The next TC weekly meeting will be on Thursday 7 July at 15:00 UTC; feel free to add topics to the agenda[1] by 6 July.

2. What we completed this week:
=========================
* Retired openstack-helm-deployments[2].
* Added the Cinder Dell EMC PowerStore charm[3].

3. Activities In progress:
==================
TC Tracker for Zed cycle
------------------------------
* The Zed tracker etherpad includes the TC working items[4]; two are completed and the other items are in progress.

Open Reviews
-----------------
* Three open reviews for ongoing activities[5].

Create the Environmental Sustainability SIG
---------------------------------------------------
We discussed it in the TC meeting but did not conclude anything, as Kendall would like to wait for more feedback on the review[6].

New ELK service dashboard: e-r service
-----------------------------------------------
There is good progress on bringing elastic-recheck back. From now on, we can track the progress in the TaCT SIG. Feel free to ping dpawlik on #openstack-infra for any query or help.

Consistent and Secure Default RBAC
-------------------------------------------
We have had a good amount of discussion and review on the goal document updates[7], and I have updated the patch to resolve the review comments. We will have a policy popup meeting on 5 July[8].

2021 User Survey TC Question Analysis
-----------------------------------------------
No update on this. The survey summary is up for review[9]. Feel free to check it and provide feedback.

Zed cycle Leaderless projects
----------------------------------
No updates on this.
Only the Adjutant project is leaderless/maintainer-less. We will check Adjutant's situation again on the ML and hope Braden will be ready with their company-side permission[10].

Fixing Zuul config errors
----------------------------
We request projects with Zuul config errors to look into and fix them, which should not take much time[11][12].

Project updates
-------------------
* None

4. How to contact the TC:
====================
If you would like to discuss something with or give feedback to the TC, you can reach out to us in multiple ways:

1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[13].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15 UTC[14].
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/847413
[3] https://review.opendev.org/c/openstack/governance/+/846890
[4] https://etherpad.opendev.org/p/tc-zed-tracker
[5] https://review.opendev.org/q/projects:openstack/governance+status:open
[6] https://review.opendev.org/c/openstack/governance-sigs/+/845336
[7] https://review.opendev.org/c/openstack/governance/+/847418
[8] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting
[9] https://review.opendev.org/c/openstack/governance/+/836888
[10] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html
[11] https://etherpad.opendev.org/p/zuul-config-error-openstack
[12] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html
[13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting

-gmann

From franck.vedel at univ-grenoble-alpes.fr  Fri Jul  1 19:35:50 2022
From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL)
Date: Fri, 1 Jul 2022 21:35:50 +0200
Subject: [kolla-ansible][centos][yoga] Problems VpnaaS and containers
Message-ID:

Hello,

I hope to get some help on a point that was bothering me last year and which still does not work. A year ago I was on Wallaby; after two updates, I'm currently on Yoga. I use kolla-ansible and openstack-kolla/centos-source images.

Last year, I found the following bug: https://bugs.launchpad.net/neutron/+bug/1938571

I tried again with the Yoga update: exactly the same problem:

    Command: ['ipsec', 'whack', '--status'] Exit code: 33
    Stdout: Stderr: whack: Pluto is not running (no "/run/pluto/pluto.ctl")

Should I conclude that this bug will never be fixed and that it is impossible to have VPNaaS functionality with CentOS images?

Second question: let's imagine that I change the line kolla_base_distro: "centos" to kolla_base_distro: "ubuntu". Does this have a chance of working? I can't foresee all the possible problems.

Thanks for your help.

Franck

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
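On that second question, for reference the knob lives in /etc/kolla/globals.yml; a sketch (my assumption: since every container image changes its base distro, this implies pulling new images and redeploying rather than an in-place switch, so test it on a staging environment first):

    # /etc/kolla/globals.yml
    kolla_base_distro: "ubuntu"

    # then, roughly:
    #   kolla-ansible -i <inventory> pull
    #   kolla-ansible -i <inventory> deploy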
From rdhasman at redhat.com  Sat Jul  2 05:05:54 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Sat, 2 Jul 2022 10:35:54 +0530
Subject: [cinder] Spec Freeze Exception Request
In-Reply-To: <20220701185518.6cid4paqrsnxnq6a@localhost>
References: <20220701185518.6cid4paqrsnxnq6a@localhost>
Message-ID:

Thanks, Gorka, for spelling out all the changes made to the spec since the initial submission in Yoga; that will make the review experience much better. The quota issues have indeed been a pain point for OpenStack operators for a long time, and it's really crucial to fix them.

I'm OK with granting this an FFE (+1).

On Sat, Jul 2, 2022 at 12:31 AM Gorka Eguileor wrote:

> Hi,
>
> I would like to request a spec freeze exception for the new Cinder Quota
> System spec [1].
> [...]
> [1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rdhasman at redhat.com  Mon Jul  4 04:46:56 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Mon, 4 Jul 2022 10:16:56 +0530
Subject: Problem while launching an instance directly from an image "Volume did not finish being created even after we waited 203 seconds or 61 attempts"
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Jul 1, 2022 at 3:04 PM A Monster wrote:

> I've deployed OpenStack using Kolla. When I try to launch an instance
> directly from any image, after some time waiting on Block Device Mapping I
> get the following error:

I'm confused here: when you say directly from an image, do you mean ephemeral volumes (Nova) or persistent volumes (Cinder)? I will assume Cinder volumes, since we have a BDM here and Nova is triggering Cinder to create bootable volumes.

> > Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume
> > 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even
> > after we waited 203 seconds or 61 attempts. And its status is creating.
>
> I've tried increasing block_device_allocate_retries=400 and
> block_device_allocate_retries_interval=3, however I keep getting the same
> error.
>
> But when I create a volume from an image and then launch an instance from
> that same volume, it works just fine.
> Any suggestions for this issue?

Which OpenStack release are you working on? I think the operation is failing asynchronously on the Cinder side (probably in c-vol) and Nova times out waiting for a response. I suggest you check the Cinder logs (c-api, c-sch, c-vol) for a more specific error message.

Thanks and regards
Rajat Dhasmana

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From thierry at openstack.org  Mon Jul  4 07:36:30 2022
From: thierry at openstack.org (Thierry Carrez)
Date: Mon, 4 Jul 2022 09:36:30 +0200
Subject: [largescale-sig] Next meeting: July 6th, 15utc
Message-ID: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org>

Hi everyone,

The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15 UTC, before taking a break for July and most of August.

You can check how that time translates locally at:
https://www.timeanddate.com/worldclock/fixedtime.html?iso=20220706T15

Feel free to add topics to the agenda:
https://etherpad.openstack.org/p/large-scale-sig-meeting

Regards,

--
Thierry Carrez

From ralonsoh at redhat.com  Mon Jul  4 08:33:42 2022
From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez)
Date: Mon, 4 Jul 2022 10:33:42 +0200
Subject: [neutron] Bug deputy Jun 27 to Jul 3
Message-ID:

Hello Neutrinos:

This is the bug list of the past week:

Critical:
* https://bugs.launchpad.net/neutron/+bug/1980055: neutron rally jobs broken with gnocchiclient 7.0.7 update.
  Assigned to Yatin. Patch: https://review.opendev.org/c/openstack/neutron/+/847989

High:
* https://bugs.launchpad.net/neutron/+bug/1979958: [regression] Unable to schedule segment.
  Assigned to Bence.
* https://bugs.launchpad.net/neutron/+bug/1980346: [CI] NetworkSegmentTests failing frequently in OSC CI.
  Assigned to Rodolfo. Patch: https://review.opendev.org/c/openstack/neutron/+/848396
* https://bugs.launchpad.net/neutron/+bug/1980421: 'Socket /var/run/openvswitch/ovnnb_db.sock not found' during ovn_start.
  However, it seems to be a problem in the deployment script, solved with an extra timeout during the OVN startup.
  Patch: https://review.opendev.org/c/openstack/devstack/+/848548

Medium:
* https://bugs.launchpad.net/neutron/+bug/1980126: [FT] Error in "test_dvr_router_lifecycle_ha_with_snat_with_fips".
  Assigned to Rodolfo. Patch: https://review.opendev.org/c/openstack/neutron/+/848312 (WIP, test patch)
* https://bugs.launchpad.net/neutron/+bug/1980127: [FT] Error in MySQL "test_walk_versions" with PostgreSQL.
  Assigned to Rodolfo. Patch: https://review.opendev.org/c/openstack/neutron/+/848146
* https://bugs.launchpad.net/neutron/+bug/1980235: StaticScheduler has not attribute schedule.
  Unassigned (frickler is commenting on the bug).
* https://bugs.launchpad.net/neutron/+bug/1980488: [OVN] OVN fails to report to placement if other mech driver is configured.
  Assigned to Rodolfo. Patch: https://review.opendev.org/q/Ic9e8586991866ebca0b24bfe691e541c198d18d7

Regards.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From radoslaw.piliszek at gmail.com  Mon Jul  4 08:33:51 2022
From: radoslaw.piliszek at gmail.com (Radosław Piliszek)
Date: Mon, 4 Jul 2022 10:33:51 +0200
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To:
References:
Message-ID:

Just a quick follow-up: I was permitted to share a pre-published version of the article I was citing in my email from June 4th. [1]

Please enjoy responsibly. :-)

[1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf

Cheers,
Radek
-yoctozepto

On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek wrote:
>
> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote:
> >
> > Hi Radosław,
>
> Hi Andy,
>
> Sorry for the late reply, been busy vacationing and then dealing with COVID-19.
>
> > > First of all, wow, that looks very interesting and in fact very much
> > > what I'm looking for. As I mentioned in the original message, the
> > > things this solution lacks are not something blocking for me.
> > > Regarding the approach to Guacamole, I know that it's preferable to
> > > have a Guacamole extension (that provides the dynamic inventory)
> > > developed rather than meddle with the internal database, but I guess it
> > > is a good start.
> >
> > An even better approach would be something like the Guacozy project
> > (https://guacozy.readthedocs.io)
>
> I am not convinced. The project looks dead by now. [1]
> It offers a different UI which may appeal to certain users, but I think
> sticking to vanilla Guacamole should do us right... For the time being
> at least. ;-)
>
> > They were able to use the Guacamole JavaScript libraries directly to
> > embed the HTML5 desktop within a React app. I think this is a much
> > better approach, and I'd love to be able to do something similar in
> > the future. Would make the integration that much nicer.
>
> Well, as an example of embedding in the UI - sure. But it does not
> invalidate the need to modify Guacamole's database or write an
> extension to it so that it has the necessary creds.
>
> > > Any "quickstart setting up" would be awesome to have at this stage. As
> > > this is a Django app, I think I should be able to figure out the bits
> > > and bolts to get it up and running in some shape, but obviously it will
> > > impede wider adoption.
> >
> > Yeah, I agree. I'm in the process of documenting it, so I'll aim to get
> > a quickstart guide together.
> >
> > I have a private repo with code to set up a development environment
> > which uses Heat and Ansible - this might be the quickest way to get
> > started. I'm happy to share this with you privately if you like.
>
> I'm interested. Please share it.
>
> > > On the note of adoption, if I find it usable, I can provide support
> > > for it in Kolla [1] and help grow the project's adoption this way.
> >
> > Kolla could be useful. We're already using containers for this project
> > now, and I have a helm chart for deploying to k8s.
> > https://github.com/NeCTAR-RC/bumblebee-helm
>
> Nice! The catch is obviously that some orgs frown upon K8s because
> they lack the necessary know-how.
> Kolla by design avoids the use of K8s. OpenStack components are not
> cloud-native anyway, so the benefits of using K8s are diminished (yet it
> makes sense to use K8s if there is enough experience with it, as it
> makes certain ops more streamlined and simpler this way).
>
> > Also, an important part is making sure the images are set up correctly
> > with XRDP, etc. Our images are built using Packer, and the config for
> > them can be found at https://github.com/NeCTAR-RC/bumblebee-images
>
> Ack, thanks for sharing.
>
> > > Also, since this is OpenStack-centric, maybe you could consider
> > > migrating to OpenDev at some point to collaborate with interested
> > > parties using a common system?
> > > Just food for thought at the moment.
> >
> > I think it would be more appropriate to start a new project. I think
> > our codebase has too many assumptions about the underlying cloud.
> >
> > We inherited the code from another project too, so it's got twice the cruft.
>
> I see. Well, that's good to know at least.
>
> > > Writing to let you know I have also found the following related paper: [1]
> > > and reached out to its authors in the hope to enable further
> > > collaboration to happen.
> > > The paper is not open access, so I have only obtained it for myself and
> > > am unsure if licensing permits me to share, thus I also asked the
> > > authors to share their copy (that they have the copyright to).
> > > I have obviously let them know of the existence of this thread. ;-)
> > > Let's stay tuned.
> > >
> > > [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12
> >
> > This looks interesting. A collaboration would be good if there is
> > enough interest in the community.
>
> I am looking forward to the collaboration happening. This could really
> liven up OpenStack VDI.
>
> [1] https://github.com/paidem/guacozy/
>
> -yoctozepto

From ekuvaja at redhat.com  Mon Jul  4 11:05:21 2022
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Mon, 4 Jul 2022 12:05:21 +0100
Subject: [glance][ops] [nova] Disabling an image
In-Reply-To:
References: <259825a80b6cce7df2743acf6792ad4c598019ab.camel@redhat.com>
Message-ID:

On Fri, 1 Jul 2022 at 07:17, Massimo Sgaravatto < massimo.sgaravatto at gmail.com> wrote:

> Converting the image from public to private seems indeed a good idea.
> Thanks a lot for the hint!
> Cheers, Massimo

Hi Massimo,

Turning it into private will cause the very same issue for anyone who was consuming the image outside of the project that owns it. The "hidden" [0] flag was developed for this purpose. Even though it does not prevent one from launching new instances from the said image, it strongly discourages it, as the image is not listed in the normal image listings. So if you have a new, up-to-date version of the image but the old one is still widely in use, turn the old image hidden; unless someone specifically launches an instance with that old image ID, they will be directed towards your new version.

As we don't currently have any mechanism separating a user making a call to Glance with one of the clients vs. Nova making the call on behalf of the user, we also have no means to ensure that the image would be consumable for housekeeping purposes while new instances would be prevented. So this was the most user-friendly solution we came up with at the time.

[0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/implemented/glance/operator-image-workflow.html

- jokke
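A minimal sketch of the workflow Erno describes (treat the exact flag spelling as an assumption: --hidden needs a reasonably recent python-openstackclient, so verify against your client version):

    # Hide the old image from the default listing once a replacement exists:
    openstack image set --hidden dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b
    openstack image list    # no longer shows the hidden image

    # Existing users can still reference it explicitly by ID, e.g. for rebuild:
    openstack server rebuild --image dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b <SERVER>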
>> > > >> > > >> > > > >> > > > Thanks, Massimo >> > > > >> > > > [*] >> > > > >> > > > >> > > > | fault | {'code': 500, 'created': >> > > > '2022-06-30T07:57:30Z', 'message': 'Not authorized for image >> > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.', 'details': 'Traceback (most >> > > recent >> > > > call last):\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, >> in >> > > > download\n context, 2, \'data\', args=(image_id,))\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, >> in >> > > > call\n result = getattr(controller, method)(*args, **kwargs)\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", >> line >> > > 670, >> > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line >> 255, >> > > in >> > > > data\n resp, body = self.http_client.get(url)\n File >> > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line >> 395, in >> > > > get\n return self.request(url, \'GET\', **kwargs)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 380, >> > > > in request\n return self._handle_response(resp)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 120, >> > > > in _handle_response\n raise exc.from_response(resp, >> > > > resp.content)\nglanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: >> The >> > > > requested image has been deactivated. Image data download is >> > > > forbidden.\n\nDuring handling of the above exception, another >> exception >> > > > occurred:\n\nTraceback (most recent call last):\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 201, in >> > > > decorated_function\n return function(self, context, *args, >> **kwargs)\n >> > > > File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", >> line >> > > > 5950, in finish_resize\n context, instance, migration)\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 227, in >> > > > __exit__\n self.force_reraise()\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 200, in >> > > > force_reraise\n raise self.value\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5932, in >> > > > finish_resize\n migration, request_spec)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5966, in >> > > > _finish_resize_helper\n request_spec)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5902, in >> > > > _finish_resize\n self._set_instance_info(instance, >> old_flavor)\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 227, in >> > > > __exit__\n self.force_reraise()\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 200, in >> > > > force_reraise\n raise self.value\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5890, in >> > > > _finish_resize\n block_device_info, power_on)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line >> > > 11343, >> > > > in finish_migration\n >> fallback_from_host=migration.source_compute)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> > > line >> > > > 4703, in _create_image\n injection_info, fallback_from_host)\n >> File >> > > > 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line >> > > 4831, >> > > > in _create_and_inject_local_root\n instance, size, >> > > fallback_from_host)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> > > line >> > > > 10625, in _try_fetch_image_cache\n >> > > > trusted_certs=instance.trusted_certs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 275, in cache\n *args, **kwargs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 638, in create_image\n prepare_template(target=base, *args, >> > > **kwargs)\n >> > > > File >> "/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", >> > > > line 391, in inner\n return f(*args, **kwargs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 271, in fetch_func_sync\n fetch_func(target=target, *args, >> **kwargs)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/utils.py", line >> > > > 395, in fetch_image\n images.fetch_to_raw(context, image_id, >> target, >> > > > trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/virt/images.py", line 115, in >> > > > fetch_to_raw\n fetch(context, image_href, path_tmp, >> trusted_certs)\n >> > > > File "/usr/lib/python3.6/site-packages/nova/virt/images.py", line >> 106, >> > > in >> > > > fetch\n trusted_certs=trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1300, >> in >> > > > download\n trusted_certs=trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 379, >> in >> > > > download\n _reraise_translated_image_exception(image_id)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1031, >> in >> > > > _reraise_translated_image_exception\n raise >> > > > new_exc.with_traceback(exc_trace)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, >> in >> > > > download\n context, 2, \'data\', args=(image_id,))\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, >> in >> > > > call\n result = getattr(controller, method)(*args, **kwargs)\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", >> line >> > > 670, >> > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line >> 255, >> > > in >> > > > data\n resp, body = self.http_client.get(url)\n File >> > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line >> 395, in >> > > > get\n return self.request(url, \'GET\', **kwargs)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 380, >> > > > in request\n return self._handle_response(resp)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 120, >> > > > in _handle_response\n raise exc.from_response(resp, >> > > > resp.content)\nnova.exception.ImageNotAuthorized: Not authorized for >> > > image >> > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.\n'} | >> > > >> > > >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sbauza at redhat.com Mon Jul 4 13:18:54 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 4 Jul 2022 15:18:54 +0200 Subject: [nova][placement] Spec review day tomorrow Message-ID: Hey members, Just a reminder we'll have a nova/placement spec review day happening on tomorrow July 5th. Sharpen your pens and prepare your specs for review, it would be greatly appreciated. -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From bence.romsics at gmail.com Mon Jul 4 14:31:13 2022 From: bence.romsics at gmail.com (Bence Romsics) Date: Mon, 4 Jul 2022 16:31:13 +0200 Subject: [neutron] change of API performance from Pike to Yoga Message-ID: Hi Neutrinos! Inspired by Julia Kreger's presentation on the summit [1] I wanted to gather some ideas about the change in Neutron API performance. For that I used Rally with Neutron's usual Rally task definition [2]. I measured against an all-in-one devstack - running always in a same sized VM, keeping its local.conf the same between versions as much as possible. Neutron was configured with ml2/ovs. Measuring other backends would also be interesting, but first I wanted to keep the config the same as I was going back to earlier versions as long as possible. Without much pain I managed to collect data starting from Yoga back to Pike. You can download all Rally reports in this tarball (6 MiB): https://drive.google.com/file/d/1TjFV7UWtX_sofjw3_njL6-6ezD7IPmsj/view?usp=sharing The tarball also contains data about how to reproduce these tests. It is currently available at my personal Google Drive. I will keep this around at least to the end of July. I would be happy to upload it to somewhere else better suited for long term storage. Let me also attach a single plot (I hope the mailing list configuration allows this) that shows the load_duration (actually the average of 3 runs each) for each Rally scenario by OpenStack release. Which I hope is the single picture summary of these test runs. However the Rally reports contain much more data, feel free to download and browse them. If the mailing list strips the attachment, the picture is included in the tarball too. Cheers, Bence (rubasov) [1] https://youtu.be/OqcnXxTbIxk [2] https://opendev.org/openstack/neutron/src/commit/a9912caf3fa1e258621965ea8c6295a2eac9887c/rally-jobs/task-neutron.yaml -------------- next part -------------- A non-text attachment was scrubbed... Name: load_duration.png Type: image/png Size: 532330 bytes Desc: not available URL: From vince.mulhollon at springcitysolutions.com Mon Jul 4 15:33:14 2022 From: vince.mulhollon at springcitysolutions.com (Vince Mulhollon) Date: Mon, 4 Jul 2022 10:33:14 -0500 Subject: A centralized list of usable vs unusable projects? Message-ID: Hi, Can anyone point me to a status board or other format identifying which projects are uninstallable or unusable perhaps organized by release? Specifically for Yoga on Kolla-Ansible although "in general" would also be useful. I have a test cluster, and I'm exercising Yoga using Ubuntu hosts and Kolla-Ansible, and the online docs imply every project is up and installable and usable. 
However, from my notes so far: "everyone knows" Freezer hasn't been installable for many years and requires an ElasticSearch version from last decade (Every project that uses ElasticSearch needs a different and incompatible version of ES, honestly just in general, that's not just an OpenStack phenomena), Murano has been uninstallable for years AND hard crashes all of Horizon if you try to install it, Monasca has been uninstallable for around a year, as near as I can tell, due to a crash loop in the log persister, every Watcher container crash loops for no apparent reason after every Kolla-Ansible installation, although I recently hand installed Watcher on a separate Yoga cluster and it "worked" or at least didn't crash loop, I will research this in more detail and add/update issues and storyboards. The docs claim Magnum works great with Docker Swarm and I have a specific enduser application for Swarm so I set up Magnum successfully, then learned "everyone knows" that Magnum doesn't actually work for Docker Swarm, so I was very annoyed at that. I haven't even tried Ceilometer-and-related on Yoga, although if Monasca is dead maybe I should add it to the testing plan. The vast majority of projects do "just work" which is super awesome of course. My point is, I can't find a status board type of page, or any sort of centralized list, it seems like it would be incredibly useful, so I'm making my own list of what works, and what does not work, for my own use, and I'd feel silly if there's already some service or page or dashboard implementing this effort that I couldn't find. I'd certainly contribute data toward such a status page, as I'm running tests on my test cluster anyway, may as well share the results. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jul 4 16:27:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Jul 2022 11:27:21 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 7 July 2022 at 1500 UTC Message-ID: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 7 July 2022, at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, 6 July at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From fungi at yuggoth.org Mon Jul 4 16:58:05 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jul 2022 16:58:05 +0000 Subject: A centralized list of usable vs unusable projects? In-Reply-To: References: Message-ID: <20220704165804.uprqmx3stmdopvrl@yuggoth.org> On 2022-07-04 10:33:14 -0500 (-0500), Vince Mulhollon wrote: > Can anyone point me to a status board or other format identifying > which projects are uninstallable or unusable perhaps organized by > release? [...] There is none, and attempting to create one has proven contentious in the past since "uninstallable" and "unusable" are more subjective than you might think at first. 
The TC has recently approved a new process for handling "inactive" projects which may be an indicator of these sorts of symptoms, though whether it's a leading or trailing indicator is hard to know without some initial data points: https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html > My point is, I can't find a status board type of page, or any sort > of centralized list, it seems like it would be incredibly useful, > so I'm making my own list of what works, and what does not work, > for my own use, and I'd feel silly if there's already some service > or page or dashboard implementing this effort that I couldn't > find. > > I'd certainly contribute data toward such a status page, as I'm > running tests on my test cluster anyway, may as well share the > results. It's certainly not a new idea, and you might be interested in this thread from the first time we tried to do that 7 years ago, for similar reasons: https://lists.openstack.org/pipermail/openstack-operators/2015-June/007181.html The effort started out enthusiastically enough, but once it started to get into subjective criteria about project usability and maturity, it rapidly lost steam and the people involved went on to other things. I'm not trying to say it's a bad idea (far from it), just that it's a lot more complicated of a topic that it might seem on the surface, and there are definitely past experiences we can learn from in order to hopefully not fall into the same traps. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Jul 5 03:00:30 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Jul 2022 22:00:30 -0500 Subject: [all][tc][policy][operator] RBAC discussion policy pop-up next meeting: Tuesday 5 July (biweekly meeting) Message-ID: <181cc4d5a06.fc3bb539143564.3388369513350374560@ghanshyammann.com> Hello Everyone, RBAC policy popup next meeting is schedule tomorrow on 5th July at 14:00 UTC Meeting details: https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting Feel free to add the topic you would to discuss in this etherpad: https://etherpad.opendev.org/p/rbac-zed-ptg#L213 -gmann From swogatpradhan22 at gmail.com Tue Jul 5 03:58:07 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 09:28:07 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby Message-ID: Hi, I am trying to setup openstack wallaby using the repo : centos-release-openstack-wallaby on top of centos 8 stream. I have deployed undercloud but the service ironic_pxe_tftp is not starting up. Previously the undercloud was failing but now the undercloud is deployed successfully but the service is not coming up. 
Error log from ironic_pxe_tftp: [root at undercloud ~]# podman logs 23427d845098 /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory My undercloud config: [DEFAULT] undercloud_hostname = undercloud.taashee.com container_images_file = containers-prepare-parameter.yaml local_ip = 192.168.30.50/24 undercloud_public_host = 192.168.30.39 undercloud_admin_host = 192.168.30.41 undercloud_nameservers = 8.8.8.8 pxe_enabled = true #undercloud_ntp_servers = overcloud_domain_name = taashee.com subnets = ctlplane-subnet local_subnet = ctlplane-subnet #undercloud_service_certificate = generate_service_certificate = true certificate_generation_ca = local local_interface = eno3 inspection_extras = false undercloud_debug = false enable_tempest = false enable_ui = false [auth] [ctlplane-subnet] cidr = 192.168.30.0/24 dhcp_start = 192.168.30.60 dhcp_end = 192.168.30.100 inspection_iprange = 192.168.30.110,192.168.30.150 gateway = 192.168.30.1 With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 04:41:22 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 13:41:22 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: The error indicates that you are running c9s containers on c8s containers. I'd suggest you check your ContainParameterParameters and ensure you are pulling the correct image (wallaby + centos 8 stream). On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan wrote: > Hi, > I am trying to setup openstack wallaby using the repo : > centos-release-openstack-wallaby on top of centos 8 stream. > > I have deployed undercloud but the service ironic_pxe_tftp is not starting > up. Previously the undercloud was failing but now the undercloud is > deployed successfully but the service is not coming up. 
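(A quick way to confirm which base you actually pulled -- a rough sketch
using the container that already exists on your undercloud, and assuming jq
is installed:

  sudo podman inspect ironic_pxe_tftp | jq '.[].Config.Labels.name'

On a wallaby / CentOS Stream 8 build this should print "ubi8"; anything else
suggests the namespace in your ContainerImagePrepare points at a different
release.)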
> > Error log from ironic_pxe_tftp: > > [root at undercloud ~]# podman logs 23427d845098 > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > > My undercloud config: > > [DEFAULT] > undercloud_hostname = undercloud.taashee.com > container_images_file = containers-prepare-parameter.yaml > local_ip = 192.168.30.50/24 > undercloud_public_host = 192.168.30.39 > undercloud_admin_host = 192.168.30.41 > undercloud_nameservers = 8.8.8.8 > pxe_enabled = true > #undercloud_ntp_servers = > overcloud_domain_name = taashee.com > subnets = ctlplane-subnet > local_subnet = ctlplane-subnet > #undercloud_service_certificate = > generate_service_certificate = true > certificate_generation_ca = local > local_interface = eno3 > inspection_extras = false > undercloud_debug = false > enable_tempest = false > enable_ui = false > > [auth] > > [ctlplane-subnet] > cidr = 192.168.30.0/24 > dhcp_start = 192.168.30.60 > dhcp_end = 192.168.30.100 > inspection_iprange = 192.168.30.110,192.168.30.150 > gateway = 192.168.30.1 > > With regards, > > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 04:41:48 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 13:41:48 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: > The error indicates that you are running c9s containers on c8s containers. I mean to say c9s containers on c8s *hosts*. On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami wrote: > The error indicates that you are running c9s containers on c8s containers. > I'd suggest you check your ContainParameterParameters and ensure you are > pulling > the correct image (wallaby + centos 8 stream). > > On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan > wrote: > >> Hi, >> I am trying to setup openstack wallaby using the repo : >> centos-release-openstack-wallaby on top of centos 8 stream. >> >> I have deployed undercloud but the service ironic_pxe_tftp is not >> starting up. Previously the undercloud was failing but now the undercloud >> is deployed successfully but the service is not coming up. 
>> >> Error log from ironic_pxe_tftp: >> >> [root at undercloud ~]# podman logs 23427d845098 >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> >> My undercloud config: >> >> [DEFAULT] >> undercloud_hostname = undercloud.taashee.com >> container_images_file = containers-prepare-parameter.yaml >> local_ip = 192.168.30.50/24 >> undercloud_public_host = 192.168.30.39 >> undercloud_admin_host = 192.168.30.41 >> undercloud_nameservers = 8.8.8.8 >> pxe_enabled = true >> #undercloud_ntp_servers = >> overcloud_domain_name = taashee.com >> subnets = ctlplane-subnet >> local_subnet = ctlplane-subnet >> #undercloud_service_certificate = >> generate_service_certificate = true >> certificate_generation_ca = local >> local_interface = eno3 >> inspection_extras = false >> undercloud_debug = false >> enable_tempest = false >> enable_ui = false >> >> [auth] >> >> [ctlplane-subnet] >> cidr = 192.168.30.0/24 >> dhcp_start = 192.168.30.60 >> dhcp_end = 192.168.30.100 >> inspection_iprange = 192.168.30.110,192.168.30.150 >> gateway = 192.168.30.1 >> >> With regards, >> >> Swogat Pradhan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Jul 5 05:27:26 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 10:57:26 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: i believe that is the issue, the current continer parameters file is trying to pull centos9 images. 
i changed the namespace, but i was unable to find the quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify that: (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml # Generated with the following on 2022-07-04T13:53:39.943715 # # openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml # parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_alertmanager_image: alertmanager ceph_alertmanager_namespace: quay.ceph.io/prometheus ceph_alertmanager_tag: v0.16.2 ceph_grafana_image: grafana ceph_grafana_namespace: quay.ceph.io/app-sre ceph_grafana_tag: 6.7.4 ceph_image: daemon ceph_namespace: quay.io/ceph ceph_node_exporter_image: node-exporter ceph_node_exporter_namespace: quay.ceph.io/prometheus ceph_node_exporter_tag: v0.17.0 ceph_prometheus_image: prometheus ceph_prometheus_namespace: quay.ceph.io/prometheus ceph_prometheus_tag: v2.7.2 ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 name_prefix: openstack- name_suffix: '' namespace: quay.io/tripleowallaby,iiuc neutron_driver: ovn rhel_containers: false tag: current-tripleo tag_from_label: rdo_version Is this how to specify it? On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami wrote: > > The error indicates that you are running c9s containers on c8s > containers. > I mean to say > > c9s containers on c8s *hosts*. > > On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami > wrote: > >> The error indicates that you are running c9s containers on c8s containers. >> I'd suggest you check your ContainParameterParameters and ensure you are >> pulling >> the correct image (wallaby + centos 8 stream). >> >> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >> wrote: >> >>> Hi, >>> I am trying to setup openstack wallaby using the repo : >>> centos-release-openstack-wallaby on top of centos 8 stream. >>> >>> I have deployed undercloud but the service ironic_pxe_tftp is not >>> starting up. Previously the undercloud was failing but now the undercloud >>> is deployed successfully but the service is not coming up. 
>>> Error log from ironic_pxe_tftp:
>>> [root at undercloud ~]# podman logs 23427d845098
>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory
>>> [...]
>>>
>>> My undercloud config:
>>> [...]
>>>
>>> With regards,
>>>
>>> Swogat Pradhan
>>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tkajinam at redhat.com Tue Jul 5 05:35:59 2022
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Tue, 5 Jul 2022 14:35:59 +0900
Subject: Ironic pxe tftp service failed | Tripleo wallaby
In-Reply-To:
References:
Message-ID:

Can you try

namespace: quay.io/tripleowallaby

instead ?

On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan wrote:

> i believe that is the issue, the current continer parameters file is
> trying to pull centos9 images.
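To be explicit about the fix -- the only change is dropping the stray ",iiuc"
from the namespace line in the file you pasted below. A minimal sketch of the
relevant part of the set: block, everything else can stay as generated:

  parameter_defaults:
    ContainerImagePrepare:
    - push_destination: true
      set:
        namespace: quay.io/tripleowallaby
        name_prefix: openstack-
        name_suffix: ''
        tag: current-tripleo

Then re-run the undercloud install so the images are pulled again into the
local registry.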
> i changed the namespace, but i was unable to find the > quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify > that: > (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml > # Generated with the following on 2022-07-04T13:53:39.943715 > # > # openstack tripleo container image prepare default > --local-push-destination --output-env-file containers-prepare-parameter.yaml > # > > parameter_defaults: > ContainerImagePrepare: > - push_destination: true > set: > ceph_alertmanager_image: alertmanager > ceph_alertmanager_namespace: quay.ceph.io/prometheus > ceph_alertmanager_tag: v0.16.2 > ceph_grafana_image: grafana > ceph_grafana_namespace: quay.ceph.io/app-sre > ceph_grafana_tag: 6.7.4 > ceph_image: daemon > ceph_namespace: quay.io/ceph > ceph_node_exporter_image: node-exporter > ceph_node_exporter_namespace: quay.ceph.io/prometheus > ceph_node_exporter_tag: v0.17.0 > ceph_prometheus_image: prometheus > ceph_prometheus_namespace: quay.ceph.io/prometheus > ceph_prometheus_tag: v2.7.2 > ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 > name_prefix: openstack- > name_suffix: '' > namespace: quay.io/tripleowallaby,iiuc > neutron_driver: ovn > rhel_containers: false > tag: current-tripleo > tag_from_label: rdo_version > > Is this how to specify it? > > On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami > wrote: > >> > The error indicates that you are running c9s containers on c8s >> containers. >> I mean to say >> >> c9s containers on c8s *hosts*. >> >> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >> wrote: >> >>> The error indicates that you are running c9s containers on c8s >>> containers. >>> I'd suggest you check your ContainParameterParameters and ensure you are >>> pulling >>> the correct image (wallaby + centos 8 stream). >>> >>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >>> wrote: >>> >>>> Hi, >>>> I am trying to setup openstack wallaby using the repo : >>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>> >>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>> starting up. Previously the undercloud was failing but now the undercloud >>>> is deployed successfully but the service is not coming up. 
>>>> >>>> Error log from ironic_pxe_tftp: >>>> >>>> [root at undercloud ~]# podman logs 23427d845098 >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> >>>> My undercloud config: >>>> >>>> [DEFAULT] >>>> undercloud_hostname = undercloud.taashee.com >>>> container_images_file = containers-prepare-parameter.yaml >>>> local_ip = 192.168.30.50/24 >>>> undercloud_public_host = 192.168.30.39 >>>> undercloud_admin_host = 192.168.30.41 >>>> undercloud_nameservers = 8.8.8.8 >>>> pxe_enabled = true >>>> #undercloud_ntp_servers = >>>> overcloud_domain_name = taashee.com >>>> subnets = ctlplane-subnet >>>> local_subnet = ctlplane-subnet >>>> #undercloud_service_certificate = >>>> generate_service_certificate = true >>>> certificate_generation_ca = local >>>> local_interface = eno3 >>>> inspection_extras = false >>>> undercloud_debug = false >>>> enable_tempest = false >>>> enable_ui = false >>>> >>>> [auth] >>>> >>>> [ctlplane-subnet] >>>> cidr = 192.168.30.0/24 >>>> dhcp_start = 192.168.30.60 >>>> dhcp_end = 192.168.30.100 >>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>> gateway = 192.168.30.1 >>>> >>>> With regards, >>>> >>>> Swogat Pradhan >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Tue Jul 5 05:38:52 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Tue, 5 Jul 2022 00:38:52 -0500 Subject: [event] OpenInfradays Mexico 2022 (Virtual) In-Reply-To: References: Message-ID: Hello Community, the CFP will close in 6 days, don?t forget to submit your proposals, we only need the title and abstracts, video talk needs to be submitted later on. Remember that this is a virtual event and a great opportunity to share and spread knowledge across the LATAM region. https://events.linuxfoundation.org/about/community/?_sft_lfevent-country=mx https://openinfradays.mx/ Cheers! --- Alvaro Soto Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. 
On Thu, Jun 23, 2022, 6:13 PM Alvaro Soto wrote: > You're all invited to participate in the CFP for OID-MX22 > https://openinfradays.mx > > Let me know if you have any questions. > > --- > Alvaro Soto Escobar > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Jul 5 05:39:10 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 11:09:10 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: I had used the namespace: quay.io/tripleowallaby where I faced this issue. Which is why i started this thread. On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami wrote: > Can you try > > namespace: quay.io/tripleowallaby > > instead ? > > On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan > wrote: > >> i believe that is the issue, the current continer parameters file is >> trying to pull centos9 images. >> i changed the namespace, but i was unable to find the >> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify >> that: >> (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml >> # Generated with the following on 2022-07-04T13:53:39.943715 >> # >> # openstack tripleo container image prepare default >> --local-push-destination --output-env-file containers-prepare-parameter.yaml >> # >> >> parameter_defaults: >> ContainerImagePrepare: >> - push_destination: true >> set: >> ceph_alertmanager_image: alertmanager >> ceph_alertmanager_namespace: quay.ceph.io/prometheus >> ceph_alertmanager_tag: v0.16.2 >> ceph_grafana_image: grafana >> ceph_grafana_namespace: quay.ceph.io/app-sre >> ceph_grafana_tag: 6.7.4 >> ceph_image: daemon >> ceph_namespace: quay.io/ceph >> ceph_node_exporter_image: node-exporter >> ceph_node_exporter_namespace: quay.ceph.io/prometheus >> ceph_node_exporter_tag: v0.17.0 >> ceph_prometheus_image: prometheus >> ceph_prometheus_namespace: quay.ceph.io/prometheus >> ceph_prometheus_tag: v2.7.2 >> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >> name_prefix: openstack- >> name_suffix: '' >> namespace: quay.io/tripleowallaby,iiuc >> neutron_driver: ovn >> rhel_containers: false >> tag: current-tripleo >> tag_from_label: rdo_version >> >> Is this how to specify it? >> >> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >> wrote: >> >>> > The error indicates that you are running c9s containers on c8s >>> containers. >>> I mean to say >>> >>> c9s containers on c8s *hosts*. >>> >>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>> wrote: >>> >>>> The error indicates that you are running c9s containers on c8s >>>> containers. >>>> I'd suggest you check your ContainParameterParameters and ensure you >>>> are pulling >>>> the correct image (wallaby + centos 8 stream). >>>> >>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>> swogatpradhan22 at gmail.com> wrote: >>>> >>>>> Hi, >>>>> I am trying to setup openstack wallaby using the repo : >>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>> >>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>> starting up. 
Previously the undercloud was failing but now the undercloud >>>>> is deployed successfully but the service is not coming up. >>>>> >>>>> Error log from ironic_pxe_tftp: >>>>> >>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> >>>>> My undercloud config: >>>>> >>>>> [DEFAULT] >>>>> undercloud_hostname = undercloud.taashee.com >>>>> container_images_file = containers-prepare-parameter.yaml >>>>> local_ip = 192.168.30.50/24 >>>>> undercloud_public_host = 192.168.30.39 >>>>> undercloud_admin_host = 192.168.30.41 >>>>> undercloud_nameservers = 8.8.8.8 >>>>> pxe_enabled = true >>>>> #undercloud_ntp_servers = >>>>> overcloud_domain_name = taashee.com >>>>> subnets = ctlplane-subnet >>>>> local_subnet = ctlplane-subnet >>>>> #undercloud_service_certificate = >>>>> generate_service_certificate = true >>>>> certificate_generation_ca = local >>>>> local_interface = eno3 >>>>> inspection_extras = false >>>>> undercloud_debug = false >>>>> enable_tempest = false >>>>> enable_ui = false >>>>> >>>>> [auth] >>>>> >>>>> [ctlplane-subnet] >>>>> cidr = 192.168.30.0/24 >>>>> dhcp_start = 192.168.30.60 >>>>> dhcp_end = 192.168.30.100 >>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>> gateway = 192.168.30.1 >>>>> >>>>> With regards, >>>>> >>>>> Swogat Pradhan >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Tue Jul 5 06:29:00 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Tue, 5 Jul 2022 16:29:00 +1000 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Hey, The tripleowallaby containers are all built on ubi8 at the moment: ? skopeo inspect docker:// quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq .Labels.name "ubi8" The container image should be ok. If it isn't an environmental issue, we should be seeing the same problem in our CI environments. Wallaby in particular is getting a lot of attention in our CI environments at the moment. Are you able to inspect the container and share the output? 
sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels Brendan Shephard Software Engineer Red Hat APAC 193 N Quay Brisbane City QLD 4000 @RedHat Red Hat Red Hat On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan wrote: > I had used the namespace: quay.io/tripleowallaby where I faced this issue. > Which is why i started this thread. > > On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami > wrote: > >> Can you try >> >> namespace: quay.io/tripleowallaby >> >> instead ? >> >> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >> wrote: >> >>> i believe that is the issue, the current continer parameters file is >>> trying to pull centos9 images. >>> i changed the namespace, but i was unable to find the >>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify >>> that: >>> (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml >>> # Generated with the following on 2022-07-04T13:53:39.943715 >>> # >>> # openstack tripleo container image prepare default >>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>> # >>> >>> parameter_defaults: >>> ContainerImagePrepare: >>> - push_destination: true >>> set: >>> ceph_alertmanager_image: alertmanager >>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>> ceph_alertmanager_tag: v0.16.2 >>> ceph_grafana_image: grafana >>> ceph_grafana_namespace: quay.ceph.io/app-sre >>> ceph_grafana_tag: 6.7.4 >>> ceph_image: daemon >>> ceph_namespace: quay.io/ceph >>> ceph_node_exporter_image: node-exporter >>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>> ceph_node_exporter_tag: v0.17.0 >>> ceph_prometheus_image: prometheus >>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>> ceph_prometheus_tag: v2.7.2 >>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>> name_prefix: openstack- >>> name_suffix: '' >>> namespace: quay.io/tripleowallaby,iiuc >>> neutron_driver: ovn >>> rhel_containers: false >>> tag: current-tripleo >>> tag_from_label: rdo_version >>> >>> Is this how to specify it? >>> >>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>> wrote: >>> >>>> > The error indicates that you are running c9s containers on c8s >>>> containers. >>>> I mean to say >>>> >>>> c9s containers on c8s *hosts*. >>>> >>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>> wrote: >>>> >>>>> The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>> are pulling >>>>> the correct image (wallaby + centos 8 stream). >>>>> >>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> I am trying to setup openstack wallaby using the repo : >>>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>>> >>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>>> is deployed successfully but the service is not coming up. 
>>>>>> Error log from ironic_pxe_tftp:
>>>>>> [root at undercloud ~]# podman logs 23427d845098
>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory
>>>>>> [...]
>>>>>>
>>>>>> My undercloud config:
>>>>>> [...]
>>>>>>
>>>>>> With regards,
>>>>>>
>>>>>> Swogat Pradhan
>>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tkajinam at redhat.com Tue Jul 5 06:32:36 2022
From: tkajinam at redhat.com (Takashi Kajinami)
Date: Tue, 5 Jul 2022 15:32:36 +0900
Subject: Ironic pxe tftp service failed | Tripleo wallaby
In-Reply-To:
References:
Message-ID:

Looking at the latest wallaby code, it seems we use dnsmasq instead of a
tftpd server [1] even for CentOS 8, so I guess you are still using an old
version. Please check whether the following patch is included:

[1] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/809213

We've also removed tftp-server from the ironic-conductor image by [2], so
the latest container does not include the binary even if you are using the
correct one.
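(A quick way to check on the undercloud whether you already have [1] -- a
rough sketch, assuming the templates are in the stock RPM path:

  rpm -q tripleo-heat-templates
  grep -rl in.tftpd /usr/share/openstack-tripleo-heat-templates/deployment/ironic/

If grep still finds in.tftpd there, you are on the old template that execs
the removed binary; with [1] the TFTP service is run through dnsmasq
instead.)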
[2] https://review.opendev.org/c/openstack/tripleo-common/+/812690 On Tue, Jul 5, 2022 at 3:29 PM Brendan Shephard wrote: > Hey, > > The tripleowallaby containers are all built on ubi8 at the moment: > ? skopeo inspect docker:// > quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq > .Labels.name > "ubi8" > > The container image should be ok. If it isn't an environmental issue, we > should be seeing the same problem in our CI environments. Wallaby in > particular is getting a lot of attention in our CI environments at the > moment. > > Are you able to inspect the container and share the output? > sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels > > > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan > wrote: > >> I had used the namespace: quay.io/tripleowallaby where I faced this >> issue. >> Which is why i started this thread. >> >> On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami >> wrote: >> >>> Can you try >>> >>> namespace: quay.io/tripleowallaby >>> >>> instead ? >>> >>> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >>> wrote: >>> >>>> i believe that is the issue, the current continer parameters file is >>>> trying to pull centos9 images. >>>> i changed the namespace, but i was unable to find the >>>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to >>>> specify that: >>>> (undercloud) [stack at undercloud ~]$ cat >>>> containers-prepare-parameter.yaml >>>> # Generated with the following on 2022-07-04T13:53:39.943715 >>>> # >>>> # openstack tripleo container image prepare default >>>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>>> # >>>> >>>> parameter_defaults: >>>> ContainerImagePrepare: >>>> - push_destination: true >>>> set: >>>> ceph_alertmanager_image: alertmanager >>>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>>> ceph_alertmanager_tag: v0.16.2 >>>> ceph_grafana_image: grafana >>>> ceph_grafana_namespace: quay.ceph.io/app-sre >>>> ceph_grafana_tag: 6.7.4 >>>> ceph_image: daemon >>>> ceph_namespace: quay.io/ceph >>>> ceph_node_exporter_image: node-exporter >>>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>>> ceph_node_exporter_tag: v0.17.0 >>>> ceph_prometheus_image: prometheus >>>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>>> ceph_prometheus_tag: v2.7.2 >>>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>>> name_prefix: openstack- >>>> name_suffix: '' >>>> namespace: quay.io/tripleowallaby,iiuc >>>> neutron_driver: ovn >>>> rhel_containers: false >>>> tag: current-tripleo >>>> tag_from_label: rdo_version >>>> >>>> Is this how to specify it? >>>> >>>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>>> wrote: >>>> >>>>> > The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I mean to say >>>>> >>>>> c9s containers on c8s *hosts*. >>>>> >>>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> The error indicates that you are running c9s containers on c8s >>>>>> containers. >>>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>>> are pulling >>>>>> the correct image (wallaby + centos 8 stream). >>>>>> >>>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> I am trying to setup openstack wallaby using the repo : >>>>>>> centos-release-openstack-wallaby on top of centos 8 stream. 
>>>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not
>>>>>>> starting up. Previously the undercloud was failing but now the undercloud
>>>>>>> is deployed successfully but the service is not coming up.
>>>>>>>
>>>>>>> Error log from ironic_pxe_tftp:
>>>>>>> [root at undercloud ~]# podman logs 23427d845098
>>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory
>>>>>>> [...]
>>>>>>>
>>>>>>> My undercloud config:
>>>>>>> [...]
>>>>>>>
>>>>>>> With regards,
>>>>>>>
>>>>>>> Swogat Pradhan
>>>>>>>
>>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From swogatpradhan22 at gmail.com Tue Jul 5 06:37:47 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 12:07:47 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Hi, Here is the output as requested: [root at undercloud ~]# sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels { "architecture": "x86_64", "build-date": "2020-09-01T19:43:46.041620", "com.redhat.build-host": "cpt-1008.osbs.prod.upshift.rdu2.redhat.com", "com.redhat.component": "ubi8-container", "com.redhat.license_terms": " https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI", "config_data": "{'command': ['/bin/bash', '-c', 'BIND_HOST=$(hiera ironic::pxe::tftp_bind_host -c /etc/puppet/hiera.yaml); /usr/sbin/in.tftpd --foreground --user root --address $BIND_HOST:69 --map-file /var/lib/ironic/tftpboot/map-file /var/lib/ironic/tftpboot'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '669301f635becb3ecffd248a4ac56f35'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': ' undercloud.ctlplane.taashee.com:8787/tripleowallaby/openstack-ironic-pxe:current-tripleo', 'net': 'host', 'privileged': False, 'restart': 'always', 'start_order': 90, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ironic_pxe_tftp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ironic:/var/lib/kolla/config_files/src:ro', '/var/lib/ironic:/var/lib/ironic/:shared,z', '/var/log/containers/ironic:/var/log/ironic:z', '/var/log/containers/httpd/ironic-pxe:/var/log/httpd:z']}", "config_id": "tripleo_step4", "container_name": "ironic_pxe_tftp", "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.", "distribution-scope": "public", "io.buildah.version": "1.19.9", "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.", "io.k8s.display-name": "Red Hat Universal Base Image 8", "io.openshift.expose-services": "", "io.openshift.tags": "base rhel8", "maintainer": "OpenStack TripleO team", "managed_by": "tripleo_ansible", "name": "ubi8", "release": "347", "summary": "Provides the latest release of Red Hat Universal Base Image 8.", "tcib_managed": "true", "url": " https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8/images/8.2-347 ", "vcs-ref": "663db861f0ff7a9c526c1c169a62c14c01a32dcc", "vcs-type": "git", "vendor": "Red Hat, Inc.", "version": "8.2" } On Tue, Jul 5, 2022 at 11:59 AM Brendan Shephard wrote: > Hey, > > The tripleowallaby containers are all built on ubi8 at the moment: > ? skopeo inspect docker:// > quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq > .Labels.name > "ubi8" > > The container image should be ok. If it isn't an environmental issue, we > should be seeing the same problem in our CI environments. Wallaby in > particular is getting a lot of attention in our CI environments at the > moment. > > Are you able to inspect the container and share the output? > sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels > > > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan > wrote: > >> I had used the namespace: quay.io/tripleowallaby where I faced this >> issue. >> Which is why i started this thread. >> >> On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami >> wrote: >> >>> Can you try >>> >>> namespace: quay.io/tripleowallaby >>> >>> instead ? >>> >>> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >>> wrote: >>> >>>> i believe that is the issue, the current continer parameters file is >>>> trying to pull centos9 images. >>>> i changed the namespace, but i was unable to find the >>>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to >>>> specify that: >>>> (undercloud) [stack at undercloud ~]$ cat >>>> containers-prepare-parameter.yaml >>>> # Generated with the following on 2022-07-04T13:53:39.943715 >>>> # >>>> # openstack tripleo container image prepare default >>>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>>> # >>>> >>>> parameter_defaults: >>>> ContainerImagePrepare: >>>> - push_destination: true >>>> set: >>>> ceph_alertmanager_image: alertmanager >>>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>>> ceph_alertmanager_tag: v0.16.2 >>>> ceph_grafana_image: grafana >>>> ceph_grafana_namespace: quay.ceph.io/app-sre >>>> ceph_grafana_tag: 6.7.4 >>>> ceph_image: daemon >>>> ceph_namespace: quay.io/ceph >>>> ceph_node_exporter_image: node-exporter >>>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>>> ceph_node_exporter_tag: v0.17.0 >>>> ceph_prometheus_image: prometheus >>>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>>> ceph_prometheus_tag: v2.7.2 >>>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>>> name_prefix: openstack- >>>> name_suffix: '' >>>> namespace: quay.io/tripleowallaby,iiuc >>>> neutron_driver: ovn >>>> rhel_containers: false >>>> tag: current-tripleo >>>> tag_from_label: rdo_version >>>> >>>> Is this how to specify it? >>>> >>>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>>> wrote: >>>> >>>>> > The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I mean to say >>>>> >>>>> c9s containers on c8s *hosts*. 
>>>>> >>>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> The error indicates that you are running c9s containers on c8s >>>>>> containers. >>>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>>> are pulling >>>>>> the correct image (wallaby + centos 8 stream). >>>>>> >>>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> I am trying to setup openstack wallaby using the repo : >>>>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>>>> >>>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>>>> is deployed successfully but the service is not coming up. >>>>>>> >>>>>>> Error log from ironic_pxe_tftp: >>>>>>> >>>>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> >>>>>>> My undercloud config: >>>>>>> >>>>>>> [DEFAULT] >>>>>>> undercloud_hostname = undercloud.taashee.com >>>>>>> container_images_file = containers-prepare-parameter.yaml >>>>>>> local_ip = 192.168.30.50/24 >>>>>>> undercloud_public_host = 192.168.30.39 >>>>>>> undercloud_admin_host = 192.168.30.41 >>>>>>> undercloud_nameservers = 8.8.8.8 >>>>>>> pxe_enabled = true >>>>>>> #undercloud_ntp_servers = >>>>>>> overcloud_domain_name = taashee.com >>>>>>> subnets = ctlplane-subnet >>>>>>> local_subnet = ctlplane-subnet >>>>>>> #undercloud_service_certificate = >>>>>>> generate_service_certificate = true >>>>>>> certificate_generation_ca = local >>>>>>> local_interface = eno3 >>>>>>> inspection_extras = false >>>>>>> undercloud_debug = false >>>>>>> enable_tempest = false >>>>>>> enable_ui = false >>>>>>> >>>>>>> [auth] >>>>>>> >>>>>>> [ctlplane-subnet] >>>>>>> cidr = 192.168.30.0/24 >>>>>>> dhcp_start = 192.168.30.60 >>>>>>> dhcp_end = 192.168.30.100 >>>>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>>>> gateway = 192.168.30.1 >>>>>>> >>>>>>> With regards, >>>>>>> >>>>>>> Swogat Pradhan >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was 
scrubbed... URL: From p.aminian.server at gmail.com Tue Jul 5 06:55:31 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Tue, 5 Jul 2022 11:25:31 +0430 Subject: import external compute Message-ID: hello is there any way that i can import an already existing kolla-ansible compute to my openstack ? migrating instances manually take a lot of time from me -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Jul 5 07:24:08 2022 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 5 Jul 2022 15:24:08 +0800 Subject: Zun connector for persistent shared files system Manila In-Reply-To: References: Message-ID: Hi Vaibhav, In current state, only Cinder is supported. In theory, Manila can be added as another storage backend. I will check if anyone interests to contribute this feature. Best regards, Hongbin On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote: > Hi, > > I am using zun for running containers and managing them. > I deployed cinder also persistent storage. and it is working fine. > > I want to mount my Manila shares to be mounted on containers managed by > Zun. > > I can see a Fuxi project and driver for this but it is discontinued now. > > With Cinder only one container can use the storage volume at a time. If I > want to have a shared file system to be mounted on multiple containers > simultaneously, it is not possible with cinder. > > Is there any alternative to Fuxi. is there any other mechanism to use > docker Volume support for NFS as shown in the link below? > https://docs.docker.com/storage/volumes/ > > Please advise and give a suggestion. > > Regards, > Vaibhav > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jul 5 07:38:02 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 05 Jul 2022 09:38:02 +0200 Subject: [neutron] change of API performance from Pike to Yoga In-Reply-To: References: Message-ID: <23344295.OIdGMMxvHE@p1> Hi, Dnia poniedzia?ek, 4 lipca 2022 16:31:13 CEST Bence Romsics pisze: > Hi Neutrinos! > > Inspired by Julia Kreger's presentation on the summit [1] I wanted to > gather some ideas about the change in Neutron API performance. > > For that I used Rally with Neutron's usual Rally task definition [2]. > I measured against an all-in-one devstack - running always in a same > sized VM, keeping its local.conf the same between versions as much as > possible. Neutron was configured with ml2/ovs. Measuring other > backends would also be interesting, but first I wanted to keep the > config the same as I was going back to earlier versions as long as > possible. > > Without much pain I managed to collect data starting from Yoga back to Pike. > > You can download all Rally reports in this tarball (6 MiB): > https://drive.google.com/file/d/1TjFV7UWtX_sofjw3_njL6-6ezD7IPmsj/view?usp=sharing > > The tarball also contains data about how to reproduce these tests. It > is currently available at my personal Google Drive. I will keep this > around at least to the end of July. I would be happy to upload it to > somewhere else better suited for long term storage. > > Let me also attach a single plot (I hope the mailing list > configuration allows this) that shows the load_duration (actually the > average of 3 runs each) for each Rally scenario by OpenStack release. > Which I hope is the single picture summary of these test runs. However > the Rally reports contain much more data, feel free to download and > browse them. 
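As a rough outline of how such a run can be reproduced (assuming a rally-openstack installation pointed at a devstack; the exact flags vary between Rally versions, so double-check against your Rally's --help):

# register the devstack cloud from the usual OS_* environment variables
rally deployment create --fromenv --name devstack
# run Neutron's in-tree task definition (see [2] below)
rally task start rally-jobs/task-neutron.yaml
# render an HTML report of the last run
rally task report --out report.html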
If the mailing list strips the attachment, the picture is > included in the tarball too. > > Cheers, > Bence (rubasov) > > [1] https://youtu.be/OqcnXxTbIxk > [2] https://opendev.org/openstack/neutron/src/commit/a9912caf3fa1e258621965ea8c6295a2eac9887c/rally-jobs/task-neutron.yaml > Thx Bence for that. So from just brief look at load_duration.png file it seems that we are improving API performance in last cycles :) I was also thinking about doing something similar to what Julia described in Berlin (but I still didn't had time for it). But I was thinking that instead of using rally, maybe we can do something similar like Ironic is doing and have some simple script which will populate neutron db with many resources, like e.g. 2-3k ports/networks/trunks etc. and then measure time of e.g. doing "list" of those resources. That way we will IMHO measure only neutron API performance and Neutron - DB interactions, without relying on the backends and other components, like e.g. Nova to spawn actual VM. Wdyt about it? Is it worth to do or it will be better to rely on the rally only? -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From lokendrarathour at gmail.com Tue Jul 5 08:28:29 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 5 Jul 2022 13:58:29 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Also Swogat, in our case, we made these container images offline, it made me stay with one stable release for my research. maybe later images got some changes comparing older one. Best Regards, Lokendra On Tue, Jul 5, 2022 at 10:57 AM Swogat Pradhan wrote: > i believe that is the issue, the current continer parameters file is > trying to pull centos9 images. > i changed the namespace, but i was unable to find the > quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify > that: > (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml > # Generated with the following on 2022-07-04T13:53:39.943715 > # > # openstack tripleo container image prepare default > --local-push-destination --output-env-file containers-prepare-parameter.yaml > # > > parameter_defaults: > ContainerImagePrepare: > - push_destination: true > set: > ceph_alertmanager_image: alertmanager > ceph_alertmanager_namespace: quay.ceph.io/prometheus > ceph_alertmanager_tag: v0.16.2 > ceph_grafana_image: grafana > ceph_grafana_namespace: quay.ceph.io/app-sre > ceph_grafana_tag: 6.7.4 > ceph_image: daemon > ceph_namespace: quay.io/ceph > ceph_node_exporter_image: node-exporter > ceph_node_exporter_namespace: quay.ceph.io/prometheus > ceph_node_exporter_tag: v0.17.0 > ceph_prometheus_image: prometheus > ceph_prometheus_namespace: quay.ceph.io/prometheus > ceph_prometheus_tag: v2.7.2 > ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 > name_prefix: openstack- > name_suffix: '' > namespace: quay.io/tripleowallaby,iiuc > neutron_driver: ovn > rhel_containers: false > tag: current-tripleo > tag_from_label: rdo_version > > Is this how to specify it? > > On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami > wrote: > >> > The error indicates that you are running c9s containers on c8s >> containers. >> I mean to say >> >> c9s containers on c8s *hosts*. 
>> >> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >> wrote: >> >>> The error indicates that you are running c9s containers on c8s >>> containers. >>> I'd suggest you check your ContainParameterParameters and ensure you are >>> pulling >>> the correct image (wallaby + centos 8 stream). >>> >>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >>> wrote: >>> >>>> Hi, >>>> I am trying to setup openstack wallaby using the repo : >>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>> >>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>> starting up. Previously the undercloud was failing but now the undercloud >>>> is deployed successfully but the service is not coming up. >>>> >>>> Error log from ironic_pxe_tftp: >>>> >>>> [root at undercloud ~]# podman logs 23427d845098 >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> >>>> My undercloud config: >>>> >>>> [DEFAULT] >>>> undercloud_hostname = undercloud.taashee.com >>>> container_images_file = containers-prepare-parameter.yaml >>>> local_ip = 192.168.30.50/24 >>>> undercloud_public_host = 192.168.30.39 >>>> undercloud_admin_host = 192.168.30.41 >>>> undercloud_nameservers = 8.8.8.8 >>>> pxe_enabled = true >>>> #undercloud_ntp_servers = >>>> overcloud_domain_name = taashee.com >>>> subnets = ctlplane-subnet >>>> local_subnet = ctlplane-subnet >>>> #undercloud_service_certificate = >>>> generate_service_certificate = true >>>> certificate_generation_ca = local >>>> local_interface = eno3 >>>> inspection_extras = false >>>> undercloud_debug = false >>>> enable_tempest = false >>>> enable_ui = false >>>> >>>> [auth] >>>> >>>> [ctlplane-subnet] >>>> cidr = 192.168.30.0/24 >>>> dhcp_start = 192.168.30.60 >>>> dhcp_end = 192.168.30.100 >>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>> gateway = 192.168.30.1 >>>> >>>> With regards, >>>> >>>> Swogat Pradhan >>>> >>> -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From sergey.drozdov.dev at gmail.com  Tue Jul  5 08:35:50 2022
From: sergey.drozdov.dev at gmail.com (Sergey Drozdov)
Date: Tue, 5 Jul 2022 09:35:50 +0100
Subject: QoS Cinder, Zed Release
Message-ID: 

To whom it may concern,

I am helping a colleague of mine with the following pieces of work: 820027 (https://review.opendev.org/c/openstack/cinder/+/820027), 820030 (https://review.opendev.org/c/openstack/cinder-specs/+/820030). I was wondering whether it is too late to include the aforementioned within the Zed release? Is there anyone who can advise on this matter?

Best Regards,
Sergey
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Tue Jul  5 10:30:54 2022
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 05 Jul 2022 11:30:54 +0100
Subject: import external compute
In-Reply-To: 
References: 
Message-ID: <54f7dd504e71c190dabe765d186ee52438acd0ce.camel@redhat.com>

On Tue, 2022-07-05 at 11:25 +0430, Parsa Aminian wrote:
> hello
> is there any way that i can import an already existing kolla-ansible
> compute to my openstack ? migrating instances manually take a lot of time
> from me

Not really, no: nova does not have a way to import an existing compute node and, more importantly, the instances from a different deployment. We can't assume the flavor, for example, would be the same, so importing a compute node is a non-trivial thing. You would be importing the compute node and all the instances on it, which belong to multiple tenants and consume resources from a different glance/neutron/cinder etc. than the new cloud.

So there is no simple way to import the compute node, but you could perhaps look at os-migrate to "import" or migrate the instances. os-migrate is intended to automate moving instances between clouds, but it is not an entirely lossless process, as IPs will change along the way.
https://github.com/os-migrate/os-migrate

From geguileo at redhat.com  Tue Jul  5 11:27:13 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 5 Jul 2022 13:27:13 +0200
Subject: [cinder] Re: QoS Cinder, Zed Release
In-Reply-To: 
References: 
Message-ID: <20220705112713.savzcsfir2oppzs7@localhost>

On 05/07, Sergey Drozdov wrote:
> To whom it may concern,
>
> I am helping a colleague of mine with the following pieces of work: 820027 (https://review.opendev.org/c/openstack/cinder/+/820027), 820030 (https://review.opendev.org/c/openstack/cinder-specs/+/820030). I was wondering whether it is too late to include the aforementioned within the Zed release? Is there anyone who can advise on this matter?
>
> Best Regards,
> Sergey

Hi Sergey,

Cinder is in spec freeze, and the spec freeze exception request period ended last Friday. That said, I personally don't see much problem in merging this particular spec later; I don't even think a spec is really necessary here, since it's just implementing a standard feature, but that's just my opinion.

I have reviewed both spec and patch and added some comments.

Cheers,
Gorka.

From bence.romsics at gmail.com  Tue Jul  5 12:19:43 2022
From: bence.romsics at gmail.com (Bence Romsics)
Date: Tue, 5 Jul 2022 14:19:43 +0200
Subject: [neutron] change of API performance from Pike to Yoga
In-Reply-To: <23344295.OIdGMMxvHE@p1>
References: <23344295.OIdGMMxvHE@p1>
Message-ID: 

Hi,

> So from just brief look at load_duration.png file it seems that we are improving API performance in last cycles :)

I believe the same. :-)

> I was also thinking about doing something similar to what Julia described in Berlin (but I still didn't had time for it).
But I was thinking that instead of using rally, maybe we can do something similar like Ironic is doing and have some simple script which will populate neutron db with many resources, like e.g. 2-3k ports/networks/trunks etc. and then measure time of e.g. doing "list" of those resources. > That way we will IMHO measure only neutron API performance and Neutron - DB interactions, without relying on the backends and other components, like e.g. Nova to spawn actual VM. Wdyt about it? Is it worth to do or it will be better to rely on the rally only? I think both approaches have their uses. These rally reports are hopefully useful for users of Neutron API, users planning an upgrade or as feedback for maintainers who worked on performance related issues in the last few cycles. But rally reports do not give much information on where to look when we want to make further improvements. But I believe Julia's approach can be used to narrow down or even identify where to make further code changes. Also she targeted testing _at scale_, what our current rally tests don't do. In short, I believe both approaches have their uses and rally tests probably cannot (easily) replace what the Ironic team did with the tests Julia described in her presentation. Cheers, Bence From adivya1.singh at gmail.com Tue Jul 5 12:52:52 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Tue, 5 Jul 2022 18:22:52 +0530 Subject: Regarding Policy.json entries for glance image update not working for a user In-Reply-To: References: Message-ID: hi Brian, Regarding the Policy.Json, it is working fine for 3 Controllers have a individual Glance Container But I have another scenario, where only One controller holds the Glance Image , but the same steps I do for the same, it fails with error code 403. Regards Adivya Singh On Wed, Jun 15, 2022 at 2:04 AM Brian Rosmaita wrote: > On 6/14/22 2:18 PM, Adivya Singh wrote: > > Hi Takashi, > > > > when a user upload images which is a member , The image will be set to > > private. > > > > This is what he is asking for access to make it public, The above rule > > applies for only public images > Alan and Takashi have both given you good advice: > > - By default, Glance assumes that your custom policy file is named > "policy.yaml". If it doesn't have that name, Glance will assume it does > not exist and will use the defaults defined in code. You can change the > filename glance will look for in your glance-api.conf -- look for > [oslo_policy]/policy_file > > - We recommend that you use YAML instead of JSON to write your policy > file because YAML allows comments, which you will find useful in > documenting any changes you make to the file > > - You want to keep the permissions on modify_image at their default > value, because otherwise users won't be able to do simple things like > add image properties to their own images > > - Some image properties can affect the system or other users. Glance > will not allow *any* user to modify some system properties (for example, > 'id', 'status'), and it requires additional permission along with > modify_image to set 'public' or 'community' for image visibility. > > - It's also possible to configure property protections to require > additional permission to CRUD specific properties (the default setting > is *not* to do this). > > For your particular use case, where you want a specific user to be able > to publicize_image, I would encourage you to think more carefully about > what exactly you want to accomplish. 
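If, after weighing what follows, you still decide to grant it, a minimal sketch of the two pieces involved; the role name "user" is taken from earlier in this thread, and the rule keeps admin so that admins do not lose the ability themselves:

# /etc/glance/glance-api.conf (only needed if your file is not named policy.yaml)
[oslo_policy]
policy_file = policy.yaml

# /etc/glance/policy.yaml
"publicize_image": "role:admin or role:user"

Note that "modify_image" is deliberately not overridden here, for the reasons given above.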
Traditionally, images with > 'public' visibility are provided by the cloud operator, and this gives > image consumers some confidence that there's nothing malicious on the > image. Public images are accessible to all users, and they will show up > in the default image-list call for all users, so if a public image > contains something nasty, it can spread very quickly. > > Glance provides four levels of image visibility: > > - private: only visible to users in the project that owns the image > > - shared: visible to users in the project that owns the image *plus* any > projects that are added to the image as "members". (A shared image with > no members is effectively a private image.) See [0] for info about how > image sharing is designed and what API calls are associated with it. > There are a bunch of policies around this; the defaults are basically > what you'd expect, with the image owner being able to add and delete > members, and image members being able to 'accept' or 'reject' shared > images. > > - community: accessible to everyone, but only visible if you look for > them. See [1] for an explanation of what that means. The ability to > set 'community' visibility on an image is controlled by the > "communitize_image" policy (default is admin-or-owner). > > - public: accessible to everyone, and easily visible to all users. > Controlled by the "publicize_image" policy (default is admin-only). > > You're running your own cloud, so you can configure things however you > like, but I encourage you to think carefully before handing out > publicize_image permission, and consider whether one of the other > visibilities can accomplish what you want. > > For more info, the introductory section on "Images" in the api-ref [2] > has a useful discussion of image properties and image visibility. > > The final thing I want to stress is that you should be sure to test > carefully any policies you define in a custom policy file. You are > actually having a good problem, that is, someone can't do something you > would like them to. The way worse problem happens when in addition to > that someone being able to do what you want them to, a whole bunch of > other users can also do that same thing. > > OK, so to get to your particular issue: > > - you don't want to change the "modify_image" policy in the way you > proposed in your email, because no one (other than the person having the > 'user role) will be able to do any kind of image updates. > > - if you decide to give that user publicize_image permissions, be > careful how you do it. For example, > "publicize_image": "role:user" > won't allow an admin to make images public (unless you also give each > admin the 'user' role). If you look at most of the policies in the Xena > policy.yaml.sample, they begin "role:admin or ...". > > - the reason you were seeing the 403 when you tried to do > openstack image set --public > as the user with the 'user' property is that you were allowed to > modify_image but when you tried to change the visibility, you did not > have permission (because the default for that is role:admin) > > Hope this helps! Once you get this figured out, you may want to put up > a patch to update the Glance documentation around policies. I think > everything said above is in there somewhere, but it may not be in the > most obvious places. > > Actually, there is one more thing. 
The above all applies to Xena, but > there's been some work around policies in Yoga and more happening in > Zed, so be sure to read the Glance release notes when you eventually > upgrade. > > > [0] > > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html > [1] > > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html#sharing-images-with-all-users > [2] https://docs.openstack.org/api-ref/image/v2/index.html#images > > > > > regards > > Adivya Singh > > > > On Tue, Jun 14, 2022 at 10:54 AM Takashi Kajinami > > wrote: > > > > Glance has a separate policy rule (publicize_image) for > > creating/updating public images., > > and you should define that policy rule instead of modify_image. > > > > https://docs.openstack.org/glance/xena/admin/policies.html > > > > ~~~ > > |publicize_image| - Create or update public images > > ~~~ > > > > AFAIK The modify_image policy defaults to rule:default and is > > allowed for any users > > as long as the target image is owned by that user. > > > > > > On Tue, Jun 14, 2022 at 2:01 PM Adivya Singh > > > wrote: > > > > Hi Brian, > > > > Please find the response > > > > 1> i am using Xena release version 24.0.1 > > > > Now the scenario is line below, my customer wants to have > > their login access on setting up the properties of an image > > to the public. now what i did is > > > > 1> i created a role in openstack using the admin credential > > name as "user" > > 2> i assigned that user to a role user. > > 3> i assigned those user to those project id, which they > > want to access as a user role > > > > Then i went to Glance container which is controller by lxc > > and made a policy.yaml file as below > > > > root at aio1-glance-container-724aa778:/etc/glance# cat > policy.yaml > > > > "modify_image": "role:user" > > > > then i went to utility container and try to set the > > properties of a image using openstack command > > > > openstack image set --public > > > > and then i got this error > > > > HTTP 403 Forbidden: You are not authorized to complete > > publicize_image action. > > > > Even when i am trying the upload image with this user , i > > get the above error only > > > > export OS_ENDPOINT_TYPE=internalURL > > export OS_INTERFACE=internalURL > > export OS_USERNAME=adsingh > > export OS_PASSWORD='adsingh' > > export OS_PROJECT_NAME=adsingh > > export OS_TENANT_NAME=adsingh > > export OS_AUTH_TYPE=password > > export OS_AUTH_URL=https://:5000/v3 > > export OS_NO_CACHE=1 > > export OS_USER_DOMAIN_NAME=Default > > export OS_PROJECT_DOMAIN_NAME=Default > > export OS_REGION_NAME=RegionOne > > > > Regards > > Adivya Singh > > > > > > > > On Mon, Jun 13, 2022 at 6:41 PM Alan Bishop > > > wrote: > > > > > > > > On Mon, Jun 13, 2022 at 6:00 AM Brian Rosmaita > > > > wrote: > > > > On 6/13/22 8:29 AM, Adivya Singh wrote: > > > hi Team, > > > > > > Any thoughts on this > > > > H Adivya, > > > > Please supply some more information, for example: > > > > - which openstack release you are using > > - the full API request you are making to modify the > > image > > - the full API response you receive > > - whether the user with "role:user" is in the same > > project that owns the > > image > > - debug level log extract for this call if you have > it > > - anything else that could be relevant, for example, > > have you modified > > any other policies, and if so, what values are you > > using now? > > > > > > Also bear in mind that the default policy_file name is > > "policy.yaml" (not .json). 
You either > > need to provide a policy.yaml file, or override the > > policy_file setting if you really want to > > use policy.json. > > > > Alan > > > > cheers, > > brian > > > > > > > > Regards > > > Adivya Singh > > > > > > On Sat, Jun 11, 2022 at 12:40 AM Adivya Singh > > > > > > > >> wrote: > > > > > > Hi Team, > > > > > > I have a use case where I have to give a user > > restriction on > > > updating the image properties as a member. > > > > > > I have created a policy Json file and give > > the modify_image rule to > > > the particular role, but still it is not > working > > > > > > "modify_image": "role:user", This role is > > created in OpenStack. > > > > > > but still it is failing while updating > > properties with a > > > particular user assigned to a role as "access > > denied" and > > > unauthorized access > > > > > > Regards > > > Adivya Singh > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hiwkby at yahoo.com Tue Jul 5 13:48:34 2022 From: hiwkby at yahoo.com (Hirotaka Wakabayashi) Date: Tue, 5 Jul 2022 13:48:34 +0000 (UTC) Subject: Issues With trove guest agent References: <657723071.1550527.1657028914038.ref@mail.yahoo.com> Message-ID: <657723071.1550527.1657028914038@mail.yahoo.com> Hello, Hernando! I think Trove calls nova to inject files here: https://opendev.org/openstack/trove/src/branch/master/trove/taskmanager/models.py#L999 When cloud-init fails to inject Trove's configuration files, I think you should check the following parameters. 1. `use_nova_server_config_drive` configuration on trove.conf. This value should be True if you use config_drive to inject files. 2. `DIB_CLOUD_INIT_DATASOURCES` environment when buiding your own image. This value should contains `OpenStack` if you set `use_nova_server_config_drive` as False. and I think you should also check cloud-init log using journalctl. Thanks, Hirotaka On Tuesday, June 28, 2022, 09:55:24 PM GMT+9, wrote: Send openstack-discuss mailing list submissions to ??? openstack-discuss at lists.openstack.org To subscribe or unsubscribe via the World Wide Web, visit ??? http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss or, via email, send a message with subject or body 'help' to ??? openstack-discuss-request at lists.openstack.org You can reach the person managing the list at ??? openstack-discuss-owner at lists.openstack.org When replying, please edit your Subject line so it is more specific than "Re: Contents of openstack-discuss digest..." Today's Topics: ? 1. Re: Regarding Floating IP is existing Setup (Adivya Singh) ? 2. Issues With trove guest agent (Hernando Ariza Perez) ? 3. Openstack keystone LDAP integration | openstack user list ? ? ? --domain domain.com | Internal server error (HTTP 500) ? ? ? (Swogat Pradhan) ---------------------------------------------------------------------- Message: 1 Date: Tue, 28 Jun 2022 18:12:51 +0530 From: Adivya Singh To: Slawek Kaplonski ,? OpenStack Discuss ??? Subject: Re: Regarding Floating IP is existing Setup Message-ID: ??? Content-Type: text/plain; charset="utf-8" hi Slawek, it happens with a given router namespace at a time Regards Adivya Singh On Fri, Jun 24, 2022 at 10:13 PM Adivya Singh wrote: > Hi, > > Thanks for the advice and the link, > > What i saw when i do testing using tcpdump was "ARP" was not working, and > it is not able to associate the FLoating IP with the MAC address of the > interface in the VM,? 
When i do the associate and disassociate the VM , it > works fine > > But the Router NameSpace got changed. > > Regards > Adivya Singh > > On Thu, Jun 23, 2022 at 1:22 PM Slawek Kaplonski > wrote: > >> Hi, >> >> Dnia wtorek, 21 czerwca 2022 13:55:51 CEST Adivya Singh pisze: >> > hi Eugen, >> > >> > The current setup is 3 controller nodes,? The Load is not much? on each >> > controller and the number of DHCP agent is always set to 2 as per the >> > standard in the neutron.conf, The L3 agent seems to be stables as other >> > router namespace works fine under it, Only few router? Namespace get >> > affected under the agent. >> >> Is it that problem happens for new floating IPs or for the FIPs which >> were working fine and then suddenly stopped working? If the latter, was >> there any action which triggered the issue to happen? >> Is there e.g. only one FIP broken in the router or maybe when it happens, >> then all FIPs which uses same router are broken? >> >> Can You also try to analyze with e.g. tcpdump where traffic is dropped >> exactly? You can check >> http://kaplonski.pl/blog/neutron-where-is-my-packet-2/ for some more >> detailed description how traffic should go from the external network to >> Your instance. >> >> > >> > Most of the template having issue , Have all instance having FLoating >> IP, a >> > Stack with a single floating IP have chance of issue very less >> > >> > Regards >> > Adivya Singh >> > >> > On Tue, Jun 21, 2022 at 1:18 PM Eugen Block wrote: >> > >> > > Hi, >> > > >> > > this sounds very familiar to me, I had to deal with something similar >> > > a couple of times in a heavily used cluster with 2 control nodes. What >> > > does your setup look like, is it a HA setup? I would start checking >> > > the DHCP and L3 agents. After increasing dhcp_agents_per_network to 2 >> > > in neutron.conf and restarting the services this didn't occur again >> > > (yet). This would impact floating IPs as well, sometimes I had to >> > > disable and enable the affected router(s). If you only have one >> > > control node a different approach is necessary. Do you see a high load >> > > on the control node? >> > > >> > > >> > > Zitat von Adivya Singh : >> > > >> > > > hi Team, >> > > > >> > > > We got a issue in Xena release, where we set the environment in >> Ubuntu >> > > > Platform, But later we get some issues in Floating IP not reachable. >> > > > >> > > > In a Network node, not all router namespace are Impacted and only >> few of >> > > > them get affected, So we can not comment Network node has a issue. >> > > > >> > > > The L3 agent where the Router is tied up, Worked just fine, as other >> > > > routers work Fine. >> > > > >> > > > and the one having issue in Floating IP, if i unassigned and >> assigned it >> > > > starts working most of the time. >> > > > >> > > > Any thoughts on this >> > > > >> > > > Regards >> > > > Adivya Singh >> > > >> > > >> > > >> > > >> > > >> > >> >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 27 Jun 2022 13:54:59 -0500 From: Hernando Ariza Perez To: openstack-discuss at lists.openstack.org Subject: Issues With trove guest agent Message-ID: ??? 
Content-Type: text/plain; charset="utf-8"

Dear Trove community,

My name is Hernando. I'm writing this email because I have spent a lot of time trying to make the Trove Yoga version work. The service functionality seems to be working, but the guest agent is not. I built an image following the build process in the documentation (https://docs.openstack.org/trove/ussuri/admin/building_guest_images.html), and I also used the image that you have at http://tarballs.openstack.org/trove/images/

So right now, when I create a datastore instance, the instance becomes active normally and I can reach it. I went inside it, and I saw that the guest agent service doesn't pull the datastore docker container image. I put the guest agent config file into the image manually, because the Trove task manager never put it there via cloud-init; it only put the guest info conf. This case is weird because I didn't get any error in the logs, and that's why I'm here.

Could you please provide me some guidance about this process? Maybe an example of the new Trove config files; I'll really appreciate this help.

Thanks for reading,
Regards
Hernando Clareth
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 3
Date: Tue, 28 Jun 2022 14:09:57 +0530
From: Swogat Pradhan 
To: OpenStack Discuss 
Subject: Openstack keystone LDAP integration | openstack user list
        --domain domain.com | Internal server error (HTTP 500)
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Description of problem:
I am trying to integrate an AD server in keystone and am facing 'Internal server error'.

domain configuration:

[stack at hkg2director ~]$ cat workplace/keystone_domain_specific_ldap_backend.yaml
# This is an example template on how to configure keystone domain specific LDAP
# backends. This will configure a domain called tripleoldap with the attributes
# specified.
parameter_defaults:
  KeystoneLDAPDomainEnable: true
  KeystoneLDAPBackendConfigs:
    domain.com:
      url: ldap://172.25.161.211
      user: cn=Openstack,ou=Admins,dc=domain,dc=com
      password: password
      suffix: dc=domain,dc=com
      user_tree_dn: ou=APAC,dc=domain,dc=com
      user_filter: "(|(memberOf=cn=openstackadmin,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackeditor,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackviewer,ou=Groups,dc=domain,dc=com)"
      user_objectclass: person
      user_id_attribute: cn
      group_tree_dn: ou=Groups,dc=domain,dc=com
      group_objectclass: Groups
      group_id_attribute: cn

When i issue the command:
$ openstack user list --domain domain.com
Output: Internal server error (HTTP 500)

Keystone_wsgi_error.log:
[Tue Jun 28 06:46:49.112848 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] mod_wsgi (pid=45): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/keystone'.
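A note before wading into the traceback: it bottoms out in ldap.FILTER_ERROR ('Bad search filter'), and the user_filter in the configuration above does look malformed. It opens four parentheses but closes only three, which by itself would produce exactly this error. Assuming the filter was pasted verbatim, a balanced version would be:

user_filter: "(|(memberOf=cn=openstackadmin,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackeditor,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackviewer,ou=Groups,dc=domain,dc=com))"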
[Tue Jun 28 06:46:49.121797 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] Traceback (most recent call last): [Tue Jun 28 06:46:49.122202 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2464, in __call__ [Tue Jun 28 06:46:49.122218 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.wsgi_app(environ, start_response) [Tue Jun 28 06:46:49.122231 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/werkzeug/middleware/proxy_fix.py", line 187, in __call__ [Tue Jun 28 06:46:49.122238 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.app(environ, start_response) [Tue Jun 28 06:46:49.122248 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122254 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122264 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122270 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122284 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/base.py", line 124, in __call__ [Tue Jun 28 06:46:49.122294 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122304 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122310 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122320 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122326 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122337 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__ [Tue Jun 28 06:46:49.122344 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return resp(environ, start_response) [Tue Jun 28 06:46:49.122354 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122364 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122374 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122382 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122392 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/base.py", line 124, in __call__ [Tue Jun 28 06:46:49.122400 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122413 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 
06:46:49.122421 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122432 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122439 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122463 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122470 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122481 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122490 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122500 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/osprofiler/web.py", line 112, in __call__ [Tue Jun 28 06:46:49.122507 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return request.get_response(self.application) [Tue Jun 28 06:46:49.122517 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122525 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122535 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122542 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122552 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122562 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122572 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122579 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122589 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/request_id.py", line 58, in __call__ [Tue Jun 28 06:46:49.122596 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122605 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122612 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122622 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122630 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122670 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File 
"/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/url_normalize.py", line 38, in __call__ [Tue Jun 28 06:46:49.122696 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.app(environ, start_response) [Tue Jun 28 06:46:49.122729 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122743 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122753 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122761 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122772 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 341, in __call__ [Tue Jun 28 06:46:49.122786 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self._app) [Tue Jun 28 06:46:49.122800 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122807 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122817 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122824 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122835 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/werkzeug/middleware/dispatcher.py", line 78, in __call__ [Tue Jun 28 06:46:49.122845 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return app(environ, start_response) [Tue Jun 28 06:46:49.122856 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2450, in wsgi_app [Tue Jun 28 06:46:49.122863 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = self.handle_exception(e) [Tue Jun 28 06:46:49.122874 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122883 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.122893 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122900 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.122910 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122921 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.122932 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] [Previous line repeated 27 more times] [Tue Jun 28 06:46:49.122943 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1867, in handle_exception [Tue Jun 28 06:46:49.122952 2022] [wsgi:error] 
[pid 45] [remote 172.25.201.201:58080] reraise(exc_type, exc_value, tb) [Tue Jun 28 06:46:49.122964 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 38, in reraise [Tue Jun 28 06:46:49.122971 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] raise value.with_traceback(tb) [Tue Jun 28 06:46:49.122981 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app [Tue Jun 28 06:46:49.122988 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = self.full_dispatch_request() [Tue Jun 28 06:46:49.122998 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request [Tue Jun 28 06:46:49.123007 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] rv = self.handle_user_exception(e) [Tue Jun 28 06:46:49.123018 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123025 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123035 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123044 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123059 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123066 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123077 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] [Previous line repeated 27 more times] [Tue Jun 28 06:46:49.123089 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception [Tue Jun 28 06:46:49.123097 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] reraise(exc_type, exc_value, tb) [Tue Jun 28 06:46:49.123107 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 38, in reraise [Tue Jun 28 06:46:49.123118 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] raise value.with_traceback(tb) [Tue Jun 28 06:46:49.123129 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request [Tue Jun 28 06:46:49.123137 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] rv = self.dispatch_request() [Tue Jun 28 06:46:49.123147 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request [Tue Jun 28 06:46:49.123154 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.view_functions[rule.endpoint](**req.view_args) [Tue Jun 28 06:46:49.123165 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 468, in wrapper [Tue Jun 28 06:46:49.123175 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = resource(*args, **kwargs) [Tue Jun 28 06:46:49.123186 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File 
"/usr/lib/python3.6/site-packages/flask/views.py", line 89, in view [Tue Jun 28 06:46:49.123193 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.dispatch_request(*args, **kwargs) [Tue Jun 28 06:46:49.123204 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 583, in dispatch_request [Tue Jun 28 06:46:49.123211 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = meth(*args, **kwargs) [Tue Jun 28 06:46:49.123222 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/api/users.py", line 183, in get [Tue Jun 28 06:46:49.123232 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self._list_users() [Tue Jun 28 06:46:49.123245 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/api/users.py", line 215, in _list_users [Tue Jun 28 06:46:49.123252 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] domain_scope=domain, hints=hints) [Tue Jun 28 06:46:49.123263 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped [Tue Jun 28 06:46:49.123273 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] __ret_val = __f(*args, **kwargs) [Tue Jun 28 06:46:49.123282 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 414, in wrapper [Tue Jun 28 06:46:49.123289 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, *args, **kwargs) [Tue Jun 28 06:46:49.123299 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 424, in wrapper [Tue Jun 28 06:46:49.123308 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, *args, **kwargs) [Tue Jun 28 06:46:49.123327 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1108, in list_users [Tue Jun 28 06:46:49.123337 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] ref_list = self._handle_shadow_and_local_users(driver, hints) [Tue Jun 28 06:46:49.123351 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1091, in _handle_shadow_and_local_users [Tue Jun 28 06:46:49.123358 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return driver.list_users(hints) + fed_res [Tue Jun 28 06:46:49.123368 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 85, in list_users [Tue Jun 28 06:46:49.123376 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.user.get_all_filtered(hints) [Tue Jun 28 06:46:49.123387 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 328, in get_all_filtered [Tue Jun 28 06:46:49.123394 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] for user in self.get_all(query, hints)] [Tue Jun 28 06:46:49.123406 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 320, in get_all [Tue Jun 28 06:46:49.123413 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] hints=hints) [Tue Jun 28 06:46:49.123425 
2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1949, in get_all [Tue Jun 28 06:46:49.123432 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return super(EnabledEmuMixIn, self).get_all(ldap_filter, hints) [Tue Jun 28 06:46:49.123443 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1637, in get_all [Tue Jun 28 06:46:49.123453 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] for x in self._ldap_get_all(hints, ldap_filter)] [Tue Jun 28 06:46:49.123464 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/common/driver_hints.py", line 42, in wrapper [Tue Jun 28 06:46:49.123472 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, hints, *args, **kwargs) [Tue Jun 28 06:46:49.123482 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1590, in _ldap_get_all [Tue Jun 28 06:46:49.123489 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrs) [Tue Jun 28 06:46:49.123500 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 986, in search_s [Tue Jun 28 06:46:49.123507 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrlist, attrsonly) [Tue Jun 28 06:46:49.123517 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 679, in wrapper [Tue Jun 28 06:46:49.123524 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return func(self, conn, *args, **kwargs) [Tue Jun 28 06:46:49.123535 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 814, in search_s [Tue Jun 28 06:46:49.123542 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrsonly) [Tue Jun 28 06:46:49.123552 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 870, in search_s [Tue Jun 28 06:46:49.123559 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) [Tue Jun 28 06:46:49.123578 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 1286, in search_ext_s [Tue Jun 28 06:46:49.123586 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self._apply_method_s(SimpleLDAPObject.search_ext_s,*args,**kwargs) [Tue Jun 28 06:46:49.123596 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 1224, in _apply_method_s [Tue Jun 28 06:46:49.123603 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return func(self,*args,**kwargs) [Tue Jun 28 06:46:49.123613 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 863, in search_ext_s [Tue Jun 28 06:46:49.123621 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] msgid = self.search_ext(base,scope,filterstr,attrlist,attrsonly,serverctrls,clientctrls,timeout,sizelimit) [Tue Jun 28 06:46:49.123631 2022] [wsgi:error] [pid 45] [remote 
  File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 859, in search_ext
    timeout,sizelimit,
  File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 340, in _ldap_call
    reraise(exc_type, exc_value, exc_traceback)
  File "/usr/lib64/python3.6/site-packages/ldap/compat.py", line 46, in reraise
    raise exc_value
  File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 324, in _ldap_call
    result = func(*args,**kwargs)
ldap.FILTER_ERROR: {'result': -7, 'desc': 'Bad search filter', 'ctrls': []}

Version-Release number of selected component (if applicable):

How reproducible:
Configure a domain in keystone.

Steps to Reproduce:
1. set up 3 groups in ldap
2. create a user
3. configure ldap in keystone

Actual results:
When I issue the command:
$ openstack user list --domain domain.com
Output: Internal server error (HTTP 500)

Expected results:
When I issue the command:
$ openstack user list --domain domain.com
Output: should display the users in the groups

End of openstack-discuss Digest, Vol 44, Issue 167
**************************************************

From derekokeeffe85 at yahoo.ie  Tue Jul  5 14:38:16 2022
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Tue, 5 Jul 2022 14:38:16 +0000 (UTC)
Subject: Listing instances that use a security group
References: <987935218.3775030.1657031896834.ref@mail.yahoo.com>
Message-ID: <987935218.3775030.1657031896834@mail.yahoo.com>

Hi all,

Is there a CLI command to list all the VMs that have a specific security group attached? I need to delete some groups as a tidy-up, but I only get a warning that the group is in use by an instance (of which there are 200), so I'd rather not go through them one by one in Horizon or show each one on the CLI separately. An SQL query would be acceptable also, but in the nova db, select * from security_groups; select * from instances; & select * from security_group_instance_association; doesn't give me the required results that I can refine to search deeper.

Thanks in advance for any info.

Regards,
Derek
From smooney at redhat.com  Tue Jul  5 15:37:35 2022
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 05 Jul 2022 16:37:35 +0100
Subject: Listing instances that use a security group
In-Reply-To: <987935218.3775030.1657031896834@mail.yahoo.com>
References: <987935218.3775030.1657031896834.ref@mail.yahoo.com> <987935218.3775030.1657031896834@mail.yahoo.com>
Message-ID:

So, security groups are a neutron concept with some legacy support in nova.

The way I would approach this is to list, via the neutron API/CLI, all ports that have the security group associated with them, then extract the device_id from each port, which is the nova server UUID.

Looking at https://docs.openstack.org/api-ref/network/v2/index.html?expanded=list-ports-detail#list-ports

security group does not appear to be one of the request parameters of the port list API; however, security_groups is supported by OSC, so I'm not sure if the API doc is out of date.

So you should be able to do this:

openstack port list --security-group

You should technically be able to use -c device_id to get the list of VM UUIDs from that set of ports, but I'm not sure that the openstack client will correctly include the device_id field in the API request. In that case,

"""openstack port list --security-group -c device_id -f value | sort | uniq"""

should print a list of unique server UUIDs using that security group, provided the openstack client correctly asks for the device_id field to be returned as part of the request (it is part of the port list API response by default).

So you might need to use --debug to get the API request URL and then use curl to call the API directly if the client does not support this properly.

On Tue, 2022-07-05 at 14:38 +0000, Derek O keeffe wrote:
> [...]
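A minimal sketch tying the above together (the commands are standard OSC; the name-resolution loop is an illustrative extra, and it assumes the --security-group filter is honored by your client version):

    # List the servers whose ports reference a given security group.
    SG="my-secgroup"   # hypothetical name; substitute your group's name or ID
    openstack port list --security-group "$SG" -c device_id -f value \
      | sort -u \
      | while read -r uuid; do
          # device_id on an instance port is the nova server UUID;
          # resolve it to a server name for readability.
          openstack server show "$uuid" -c name -f value
        done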
From rdhasman at redhat.com  Tue Jul  5 16:06:14 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Tue, 5 Jul 2022 21:36:14 +0530
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To:
References:
Message-ID:

Hi Arnaud,

We discussed this in last week's cinder meeting, and unfortunately we haven't tested it thoroughly, so we don't have any performance numbers to share.
What we can tell is the reason why RBD requires a higher number of native threads. RBD calls C code, which could potentially block green threads, hence blocking the main operation; therefore all of the calls in RBD that execute operations are wrapped to use native threads. So, depending on the operations we want to perform concurrently, we can set the value of backend_native_threads_pool_size for RBD.

Thanks and regards
Rajat Dhasmana

On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:

> Hey all,
>
> Is there any recommendation on the number of threads to use when using
> RBD backend (option backend_native_threads_pool_size)?
> The doc is saying that 20 is the default but it should be increased,
> especially for the RBD driver, but up to which value?
>
> Is there anyone tuning this parameter in their openstack deployments?
>
> If yes, maybe we can add some recommendations on openstack large-scale
> doc about it?
>
> Cheers,
>
> Arnaud.

From derekokeeffe85 at yahoo.ie  Tue Jul  5 16:40:39 2022
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Tue, 5 Jul 2022 17:40:39 +0100
Subject: Listing instances that use a security group
In-Reply-To:
References:
Message-ID:

Hi Sean,

Thanks for that. I will try tomorrow and let you know how it went.

Regards,
Derek

> On 5 Jul 2022, at 16:42, Sean Mooney wrote:
> [...]

From arnaud.morin at gmail.com  Tue Jul  5 17:19:28 2022
From: arnaud.morin at gmail.com (Arnaud)
Date: Tue, 05 Jul 2022 19:19:28 +0200
Subject: Re: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To:
References:
Message-ID: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>

Hey,

Thanks for your answer!

OK, I understand the why ;) also because we hit some issues on our deployment.
So we increased the number of threads to 100, but we also enabled deferred deletion (keeping in mind the quota usage downsides that it brings).
We also disabled the periodic task to compute usage and use the less precise way from the db.

First question here: do you think we are going the right path?

One thing we are not yet sure about is how to calculate correctly the number of threads to use.
Should we do basic math with the number of deletions per minute? Or should we take the number of volumes in the backend into account? Something in the middle?

Thanks!

Arnaud

On July 5, 2022 at 18:06:14 GMT+02:00, Rajat Dhasmana wrote:
> [...]

From rodrigo.barbieri2010 at gmail.com  Tue Jul  5 18:37:54 2022
From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri)
Date: Tue, 5 Jul 2022 15:37:54 -0300
Subject: [manila] Stepping down from manila core
Message-ID:

Hello fellow zorillas,

It has been a long time since I started to hope every day that I'd be able to dedicate more time to manila core activities; so far that hasn't happened, and I don't see it happening in the foreseeable future. I had been following the meeting notes weekly until ~2 months ago, but I recently ended up dropping those as well. Therefore I am stepping down from the manila core role.

I would like to thank everyone that I worked closely with from 2014 to 2019 on this project. I hold this project and all of you dear to my heart, and I am extremely glad and grateful to have worked with you and met you at summits/PTGs, as the life memories around Manila are among the best I have from that period.

If someday circumstances change, the manila project and its community will be ones I will be very happy to go back to working closely with again.

Kind regards,

--
Rodrigo Barbieri
MSc Computer Scientist

From kennelson11 at gmail.com  Tue Jul  5 20:44:40 2022
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 5 Jul 2022 15:44:40 -0500
Subject: October 2022 PTG Dates & Registration
Message-ID:

Hello Everyone!

As you may have seen, we announced the next PTG[1], which will take place October 17-20, 2022 in Columbus, Ohio! Registration is now open[2].

We have also secured a limited, discounted hotel block for PTG attendees[3].

If your organization is interested in sponsoring the event, information on packages and pricing is now available on the PTG website[1], or feel free to reach out directly to ptg at openinfra.dev.

Can't wait to SEE you all there!
-Kendall Nelson (diablo_rojo)

[1] https://openinfra.dev/ptg/
[2] https://openinfra-ptg.eventbrite.com/
[3] https://www.hyatt.com/en-US/group-booking/CMHRC/G-L0RT

From geguileo at redhat.com  Wed Jul  6 08:20:14 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 6 Jul 2022 10:20:14 +0200
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>
Message-ID: <20220706082014.qu34mzuhmb3uma2j@localhost>

On 05/07, Arnaud wrote:
> Hey,
>
> Thanks for your answer!
> OK, I understand the why ;) also because we hit some issues on our deployment.
> So we increased the number of threads to 100, but we also enabled deferred deletion (keeping in mind the quota usage downsides that it brings).

Hi Arnaud,

Deferred deletion should reduce the number of required native threads since delete calls will complete faster.

> We also disabled the periodic task to compute usage and use the less precise way from the db.

Are you referring to the 'rbd_exclusive_cinder_pool' configuration option? Because that should already have the optimum default (True value).

> First question here: do you think we are going the right path?
>
> One thing we are not yet sure about is how to calculate correctly the number of threads to use.
> Should we do basic math with the number of deletions per minute? Or should we take the number of volumes in the backend into account? Something in the middle?

Native threads on the RBD driver are not only used for deletion, they are used for *all* RBD calls.

We haven't defined any particular method to calculate the optimum number of threads on a system, but I can think of 2 possible avenues to explore:

- Performance testing: Run a set of tests with a high number of concurrent requests and different operations and see how Cinder performs. I wouldn't bother with individual attach and detach to VM operations because those are noops on the Cinder side; creating volumes from images, with either different images or the cache disabled, would be better.

- Reporting native thread usage: To really know if the number of native threads is sufficient or not, you could modify the Cinder volume manager (and possibly also eventlet.tpool) to gather statistics on the number of used/free native threads and the number of queued requests that are waiting for a native thread to pick them up.

Cheers,
Gorka.

> [...]
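For reference, the knobs discussed in this thread all live in the backend section of cinder.conf. A minimal sketch with illustrative values, not recommendations (enable_deferred_deletion is named here from memory as the RBD deferred-deletion option, so verify it against your release):

    [rbd-1]
    volume_driver = cinder.volume.drivers.rbd.RBDDriver
    # Native thread pool used for all librados/librbd calls (default 20).
    backend_native_threads_pool_size = 100
    # Compute pool usage without querying every RBD image (defaults to true
    # on recent releases; Stein-era deployments may need it set explicitly).
    rbd_exclusive_cinder_pool = true
    # Flag volumes as deleted immediately and reclaim space asynchronously.
    enable_deferred_deletion = true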
From rdhasman at redhat.com  Wed Jul  6 08:22:28 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Wed, 6 Jul 2022 13:52:28 +0530
Subject: QoS Cinder, Zed Release
In-Reply-To:
References:
Message-ID:

Hi Sergey,

As Gorka said, we generally don't require a spec for driver features, but we do encourage registering a blueprint on launchpad[1] so as to keep track of all the features. Having said that, the spec would act as a good point of documentation, so we can consider merging it. It's a good point for discussion, which I will add to the cinder meeting agenda today[2], but even if it doesn't merge this cycle (since we're already in the spec freeze exception phase), we can do it next cycle without blocking reviews of the main feature.

[1] https://blueprints.launchpad.net/cinder
[2] https://etherpad.opendev.org/p/cinder-zed-meetings

Thanks and regards
Rajat Dhasmana

On Tue, Jul 5, 2022 at 2:19 PM Sergey Drozdov wrote:

> To whom it may concern,
>
> I am helping a colleague of mine with the following pieces of work: 820027
> (https://review.opendev.org/c/openstack/cinder/+/820027), 820030
> (https://review.opendev.org/c/openstack/cinder-specs/+/820030). I was
> wondering whether it is not too late to include the aforementioned within
> the Zed release? Is there anyone who can advise on this matter?
>
> Best Regards,
> Sergey

From arnaud.morin at gmail.com  Wed Jul  6 09:11:41 2022
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Wed, 6 Jul 2022 09:11:41 +0000
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To: <20220706082014.qu34mzuhmb3uma2j@localhost>
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost>
Message-ID:

Yes, I was talking about rbd_exclusive_cinder_pool.
It was false by default on our side because we are still running the cinder stein release :(

Thank you for the answer about the calculation methods!
Do you mind if I copy-paste your answer into the large-scale documentation ([1])?

Cheers,
Arnaud.

[1] https://docs.openstack.org/large-scale/

On 06.07.22 - 10:20, Gorka Eguileor wrote:
> [...]
From amonster369 at gmail.com  Wed Jul  6 10:05:55 2022
From: amonster369 at gmail.com (A Monster)
Date: Wed, 6 Jul 2022 11:05:55 +0100
Subject: Problem while launching an instance directly from an image "Volume did not finish being created even after we waited 203 seconds or 61 attempts"
Message-ID:

Hello,

I'm talking about launching an instance from a persistent volume (cinder). I've checked the cinder logs but found no errors; the only error I could find is in the nova-compute log, which says:

Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even after we waited 203 seconds or 61 attempts. And its status is creating.

I am using openstack xena deployed with kolla ansible on centos 8 stream.
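The "61 attempts" in that message is nova-compute polling cinder until it gives up; if the volume does eventually become available and cinder shows no errors, the build may simply be timing out rather than failing. A hedged sketch for nova.conf on the compute hosts (values illustrative, not recommendations):

    [DEFAULT]
    # How many times nova polls for the volume to leave 'creating' state
    # before aborting the build.
    block_device_allocate_retries = 120
    # Seconds between polls.
    block_device_allocate_retries_interval = 3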
From senrique at redhat.com  Wed Jul  6 11:00:00 2022
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 6 Jul 2022 08:00:00 -0300
Subject: [cinder] Bug deputy report for week of 07-06-2022
Message-ID:

This is a bug report from 06-29-2022 to 07-06-2022.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting

Low
- https://bugs.launchpad.net/cinder/+bug/1980268 "creating a bootable volume fails async when vol_size < virtual_size of image." Fix proposed to master.

Cheers,
Sofia

--
Sofía Enriquez (she/her)
Software Engineer, Red Hat PnT
IRC: enriquetaso

From openstack at garloff.de  Wed Jul  6 11:15:38 2022
From: openstack at garloff.de (Kurt Garloff)
Date: Wed, 6 Jul 2022 13:15:38 +0200
Subject: openstackclient: booting from image-created volume w/ delete_on_termination
Message-ID:

Hi openstack CLI wizards,

Having flavors without disks, I want volumes to be created from images on the fly that I can boot from. But I don't want to do bookkeeping for these volumes; they should disappear once the server disappears.

On the command line with nova, I can do this:

nova boot --nic net-name=$MYNET --key-name $MYKEY --flavor $MYFLAVOR \
  --block-device "id=$MYIMGID,source=image,dest=volume,size=$MYSIZE,shutdown=remove,bootindex=0" $MYNAME

I did not find a way to do this with openstack server create. Is there one?

Here's what I tried:

--image $MYIMGID --boot-from-volume $MYSIZE
works, except that there is no way to specify delete_on_termination.

--block-device-mapping "sda=$MYIMGID:image:$MYSIZE:true"
complains that no --image or --volume has been passed. The option does not appear to be usable for bootable volumes.

I have seen a BP (merged with Ussuri) that would allow a PUT call to update delete_on_termination, but I don't see it usable with the openstackclient ...
https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/destroy-instance-with-datavolume.html

Anything obvious I missed?

Thanks,

--
Kurt Garloff
Cologne, Germany

From noonedeadpunk at gmail.com  Wed Jul  6 11:56:58 2022
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Wed, 6 Jul 2022 13:56:58 +0200
Subject: openstackclient: booting from image-created volume w/ delete_on_termination
In-Reply-To:
References:
Message-ID:

Hey!

delete_on_termination is a flag that is provided to nova during volume attachment, basically. So in the case of attaching a volume to an existing server, you can provide that flag to the attachment:

openstack server add volume --enable-delete-on-termination --os-compute-api-version 2.79 $SERVER_UUID $VOLUME_UUID

It's more complex when we're talking about server creation, though. What openstackclient allows you to do is to provide the --block-device flag, where you can provide extra specs to the mapping. You can check the doc on how to use it here:
https://docs.openstack.org/python-openstackclient/yoga/cli/command-objects/server.html#cmdoption-openstack-server-create-block-device

For the API call it would be the block_device_mapping_v2 option provided with the request, which supports kind of the same format:
https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
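Putting that together for the boot volume itself, a sketch using the --block-device key=value syntax from the doc linked above (requires a Wallaby-era or newer client, which matches the follow-up below; untested here, so verify the keys against your client version):

    # Boot from an image-backed volume that is deleted with the server.
    openstack server create \
      --flavor "$MYFLAVOR" --network "$MYNET" --key-name "$MYKEY" \
      --block-device uuid="$MYIMGID",source_type=image,destination_type=volume,volume_size="$MYSIZE",boot_index=0,delete_on_termination=true \
      "$MYNAME"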
On Wed, 6 Jul 2022 at 13:21, Kurt Garloff wrote:
> [...]

From rosmaita.fossdev at gmail.com  Wed Jul  6 12:51:33 2022
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 6 Jul 2022 08:51:33 -0400
Subject: Propose to add Takashi Kajinami as Oslo core reviewer
In-Reply-To:
References:
Message-ID: <494ef22b-22e4-be64-4827-e7c294b9e34c@gmail.com>

On 6/30/22 9:39 AM, Herve Beraud wrote:
> Hello everybody,
>
> It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member
> of the oslo core team.
>
> During the last months Takashi has been a significant contributor to the
> oslo projects.
>
> Obviously we think he'd make a good addition to the core team. If there
> are no objections, I'll make that happen in a week.

I'm not an oslo core, but tkajinam is an enthusiastic contributor who pays close attention to what's going on in openstack as a whole, so +1 from me.

> Thanks.
> --
> Hervé Beraud
> Senior Software Engineer at Red Hat
> irc: hberaud
> https://github.com/4383/
> https://twitter.com/4383hberaud

From geguileo at redhat.com  Wed Jul  6 12:52:43 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 6 Jul 2022 14:52:43 +0200
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To:
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost>
Message-ID: <20220706125243.vtzv5ynkn7syd7xt@localhost>

On 06/07, Arnaud Morin wrote:
> Yes, I was talking about rbd_exclusive_cinder_pool.
> It was false by default on our side because we are still running the
> cinder stein release :(
>
> Thank you for the answer about the calculation methods!
> Do you mind if I copy-paste your answer into the large-scale
> documentation ([1])?

Feel free to use it any way you want. :-)

> [...]
From gthiemonge at redhat.com  Wed Jul  6 13:36:08 2022
From: gthiemonge at redhat.com (Gregory Thiemonge)
Date: Wed, 6 Jul 2022 15:36:08 +0200
Subject: [Octavia] Proposing Tom Weininger as core reviewer
In-Reply-To:
References:
Message-ID:

Hi,

Thanks for the feedback, I have added Tom to the Octavia core reviewer group!

Greg

On Tue, Jun 28, 2022 at 7:50 PM Michael Johnson wrote:

> +1, Tom has been doing great work.
>
> Michael
>
> On Tue, Jun 28, 2022 at 7:29 AM Adam Harwell wrote:
>
>> +1 from me as well!
>>
>> On Tue, Jun 28, 2022 at 6:42 AM Anna Taraday wrote:
>>
>>> +1 for Tom
>>>
>>> Thank you for your hard work!
>>>
>>> On Tue, Jun 28, 2022 at 5:35 PM Gregory Thiemonge wrote:
>>>
>>>> Hi Folks,
>>>>
>>>> I would like to propose Tom Weininger as a core reviewer for the
>>>> Octavia project.
>>>> Since he joined the project, Tom has been a major contributor to
>>>> Octavia, and he is an excellent reviewer.
>>>>
>>>> Please send your feedback in this thread; if there is no objection
>>>> until next week, we will add him to the list of core reviewers.
>>>>
>>>> Thanks
>>>> Gregory
>>>
>>> --
>>> Ann Taraday
>>> Senior Software Engineer
>>> ataraday at mirantis.com
>>
>> --
>> Thanks,
>> --Adam

From mmilan2006 at gmail.com  Wed Jul  6 16:30:49 2022
From: mmilan2006 at gmail.com (Vaibhav)
Date: Wed, 6 Jul 2022 22:00:49 +0530
Subject: Zun connector for persistent shared files system Manila
In-Reply-To:
References:
Message-ID:

Hi Hongbin,

Thanks a lot. I saw that the fuxi driver was there earlier, but it is discontinued now. It seems it would be good to revive it for Manila.

Also, there is docker support for NFS volumes:
https://docs.docker.com/storage/volumes/

Can something be done to have that? I am ready to test if somebody is ready for development, and to help in development if your team guides me on some hook points.

Regards,
Vaibhav

On Tue, Jul 5, 2022 at 12:54 PM Hongbin Lu wrote:

> Hi Vaibhav,
>
> In the current state, only Cinder is supported. In theory, Manila can be added
> as another storage backend. I will check if anyone is interested in contributing
> this feature.
>
> Best regards,
> Hongbin
>
> On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote:
>
>> Hi,
>>
>> I am using zun for running containers and managing them.
>> I deployed cinder as persistent storage as well, and it is working fine.
>>
>> I want my Manila shares to be mounted on containers managed by Zun.
>>
>> I can see a Fuxi project and driver for this, but it is discontinued now.
>>
>> With Cinder, only one container can use the storage volume at a time. If I
>> want to have a shared file system mounted on multiple containers
>> simultaneously, it is not possible with cinder.
>>
>> Is there any alternative to Fuxi? Is there any other mechanism to use
>> docker volume support for NFS as shown in the link below?
>> https://docs.docker.com/storage/volumes/
>>
>> Please advise and give a suggestion.
>>
>> Regards,
>> Vaibhav
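For the docker-native NFS route, the volumes doc linked above shows roughly this pattern (a sketch with placeholder server/export values; whether Zun passes such driver options through is exactly the open question in this thread):

    # Create a docker volume backed by an NFS export.
    docker volume create --driver local \
      --opt type=nfs \
      --opt o=addr=192.0.2.10,rw \
      --opt device=:/export/share \
      nfs-share

    # Several containers can mount the same volume simultaneously.
    docker run -d --mount source=nfs-share,target=/data alpine sleep infinity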
From thierry at openstack.org  Wed Jul  6 17:33:26 2022
From: thierry at openstack.org (Thierry Carrez)
Date: Wed, 6 Jul 2022 19:33:26 +0200
Subject: [largescale-sig] Next meeting: July 6th, 15utc
In-Reply-To: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org>
References: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org>
Message-ID:

Hi everyone,

Here is the summary of our SIG meeting today. We discussed our next OpenInfra Live episode as well as the completion of the transition of our documentation to docs.openstack.org.

You can read the meeting logs at:
https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-07-06-15.00.html

We'll now pause meetings for the European summer. Our next IRC meeting will be August 31, at 1500 UTC on #openstack-operators on OFTC.

Regards,

--
Thierry

From rosmaita.fossdev at gmail.com  Wed Jul  6 19:33:19 2022
From: rosmaita.fossdev at gmail.com (Brian Rosmaita)
Date: Wed, 6 Jul 2022 15:33:19 -0400
Subject: Regarding Policy.json entries for glance image update not working for a user
In-Reply-To:
References:
Message-ID:

On 7/5/22 8:52 AM, Adivya Singh wrote:
> hi Brian,
>
> Regarding the Policy.json, it is working fine for 3 controllers that
> have an individual Glance container.
>
> But I have another scenario, where only one controller holds the Glance
> image; when I do the same steps there, it fails with error code 403.

Since the 403 is the default behavior, it sounds to me like your custom policy.yaml file isn't being found in your single-controller setup. Check the [oslo_policy]/policy_file config option in your glance-api.conf to make sure it's got the correct value. That's all I can think of at the moment; maybe someone else will have a better idea.

cheers,
brian
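A quick sanity check along those lines, run inside the glance container (default file locations assumed):

    # Which policy file is glance-api configured to load?
    grep -A2 '^\[oslo_policy\]' /etc/glance/glance-api.conf
    # Expected something like:
    #   [oslo_policy]
    #   policy_file = policy.yaml

    # ... and does that file actually exist alongside glance-api.conf?
    ls -l /etc/glance/policy.yaml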
> Regards
> Adivya Singh
>
> On Wed, Jun 15, 2022 at 2:04 AM Brian Rosmaita wrote:
>
> On 6/14/22 2:18 PM, Adivya Singh wrote:
> > Hi Takashi,
> >
> > when a user uploads an image as a member, the image will be set to
> > private.
> >
> > This is what he is asking for: access to make it public. The above
> > rule applies only for public images.
>
> Alan and Takashi have both given you good advice:
>
> - By default, Glance assumes that your custom policy file is named
> "policy.yaml". If it doesn't have that name, Glance will assume it does
> not exist and will use the defaults defined in code. You can change the
> filename glance will look for in your glance-api.conf -- look for
> [oslo_policy]/policy_file
>
> - We recommend that you use YAML instead of JSON to write your policy
> file because YAML allows comments, which you will find useful in
> documenting any changes you make to the file
>
> - You want to keep the permissions on modify_image at their default
> value, because otherwise users won't be able to do simple things like
> add image properties to their own images
>
> - Some image properties can affect the system or other users. Glance
> will not allow *any* user to modify some system properties (for example,
> 'id', 'status'), and it requires additional permission along with
> modify_image to set 'public' or 'community' for image visibility.
>
> - It's also possible to configure property protections to require
> additional permission to CRUD specific properties (the default setting
> is *not* to do this).
>
> For your particular use case, where you want a specific user to be able
> to publicize_image, I would encourage you to think more carefully about
> what exactly you want to accomplish. Traditionally, images with
> 'public' visibility are provided by the cloud operator, and this gives
> image consumers some confidence that there's nothing malicious on the
> image. Public images are accessible to all users, and they will show up
> in the default image-list call for all users, so if a public image
> contains something nasty, it can spread very quickly.
>
> Glance provides four levels of image visibility:
>
> - private: only visible to users in the project that owns the image
>
> - shared: visible to users in the project that owns the image *plus*
> any projects that are added to the image as "members". (A shared image
> with no members is effectively a private image.) See [0] for info about
> how image sharing is designed and what API calls are associated with it.
> There are a bunch of policies around this; the defaults are basically
> what you'd expect, with the image owner being able to add and delete
> members, and image members being able to 'accept' or 'reject' shared
> images.
>
> - community: accessible to everyone, but only visible if you look for
> them. See [1] for an explanation of what that means. The ability to
> set 'community' visibility on an image is controlled by the
> "communitize_image" policy (default is admin-or-owner).
>
> - public: accessible to everyone, and easily visible to all users.
> Controlled by the "publicize_image" policy (default is admin-only).
>
> You're running your own cloud, so you can configure things however you
> like, but I encourage you to think carefully before handing out
> publicize_image permission, and consider whether one of the other
> visibilities can accomplish what you want.
>
> For more info, the introductory section on "Images" in the api-ref [2]
> has a useful discussion of image properties and image visibility.
>
> The final thing I want to stress is that you should be sure to test
> carefully any policies you define in a custom policy file. You are
> actually having a good problem, that is, someone can't do something you
> would like them to. The way worse problem happens when, in addition to
> that someone being able to do what you want them to, a whole bunch of
> other users can also do that same thing.
>
> OK, so to get to your particular issue:
>
> - you don't want to change the "modify_image" policy in the way you
> proposed in your email, because no one (other than the person having
> the 'user' role) will be able to do any kind of image updates.
>
> - if you decide to give that user publicize_image permissions, be
> careful how you do it. For example,
>     "publicize_image": "role:user"
> won't allow an admin to make images public (unless you also give each
> admin the 'user' role). If you look at most of the policies in the
> Xena policy.yaml.sample, they begin "role:admin or ...".
>
> - the reason you were seeing the 403 when you tried to do
>     openstack image set --public
> as the user with the 'user' property is that you were allowed to
> modify_image, but when you tried to change the visibility, you did not
> have permission (because the default for that is role:admin)
>
> Hope this helps! Once you get this figured out, you may want to put up
> a patch to update the Glance documentation around policies. I think
> everything said above is in there somewhere, but it may not be in the
> most obvious places.
>
> Actually, there is one more thing. The above all applies to Xena, but
> there's been some work around policies in Yoga and more happening in
> Zed, so be sure to read the Glance release notes when you eventually
> upgrade.
>
> [0] https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html
> [1] https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html#sharing-images-with-all-users
> [2] https://docs.openstack.org/api-ref/image/v2/index.html#images
>
> > regards
> > Adivya Singh
> >
> > On Tue, Jun 14, 2022 at 10:54 AM Takashi Kajinami wrote:
> >
> >     Glance has a separate policy rule (publicize_image) for
> >     creating/updating public images,
> >     and you should define that policy rule instead of modify_image.
> >
> >     https://docs.openstack.org/glance/xena/admin/policies.html
> >
> >     ~~~
> >     |publicize_image| - Create or update public images
> >     ~~~
> >
> >     AFAIK the modify_image policy defaults to rule:default and is
> >     allowed for any user
> >     as long as the target image is owned by that user.
> >
> >     On Tue, Jun 14, 2022 at 2:01 PM Adivya Singh wrote:
> >
> >         Hi Brian,
> >
> >         Please find the response:
> >
> >         1> I am using the Xena release, version 24.0.1
> >
> >         Now the scenario is as below: my customer wants their login
> >         to have access to setting the properties of an image
> >         to public. Now what I did is:
> >
> >         1> I created a role in openstack using the admin credential,
> >         named "user"
> >         2> I assigned that user to the role "user".
> >         3> I assigned those users to the project IDs which they
> >         want to access with the user role
> >
> >         Then I went to the Glance container, which is controlled by
> >         lxc, and made a policy.yaml file as below:
> >
> >         root at aio1-glance-container-724aa778:/etc/glance# cat policy.yaml
> >
> >           "modify_image": "role:user"
> >
> >         Then I went to the utility container and tried to set the
> >         properties of an image using the openstack command:
> >
> >         openstack image set --public
> >
> >         and then I got this error:
> >
> >         HTTP 403 Forbidden: You are not authorized to complete
> >         publicize_image action.
> >
> >         Even when I am trying to upload an image with this user, I
> >         get the above error only.
> >
> >         export OS_ENDPOINT_TYPE=internalURL
> >         export OS_INTERFACE=internalURL
> >         export OS_USERNAME=adsingh
> >         export OS_PASSWORD='adsingh'
> >         export OS_PROJECT_NAME=adsingh
> >         export OS_TENANT_NAME=adsingh
> >         export OS_AUTH_TYPE=password
> >         export OS_AUTH_URL=https://<horizon>:5000/v3
> >         export OS_NO_CACHE=1
> >         export OS_USER_DOMAIN_NAME=Default
> >         export OS_PROJECT_DOMAIN_NAME=Default
> >         export OS_REGION_NAME=RegionOne
> >
> >         Regards
> >         Adivya Singh
> >
> >         On Mon, Jun 13, 2022 at 6:41 PM Alan Bishop wrote:
> >
> >             On Mon, Jun 13, 2022 at 6:00 AM Brian Rosmaita wrote:
> >
> >                 On 6/13/22 8:29 AM, Adivya Singh wrote:
> >                 > hi Team,
> >                 >
> >                 > Any thoughts on this
> >
> >                 Hi Adivya,
> >
> >                 Please supply some more information, for example:
> >
> >                 - which openstack release you are using
> >                 - the full API request you are making to modify the
> >                 image
> >                 - the full API response you receive
> >                 - whether the user with "role:user" is in the same
> >                 project that owns the image
> >                 - debug level log extract for this call if you have it
> >                 - anything else that could be relevant, for example,
> >                 have you modified any other policies, and if so, what
> >                 values are you using now?
> >             Also bear in mind that the default policy_file name is
> >             "policy.yaml" (not .json). You either need to provide a
> >             policy.yaml file, or override the policy_file setting if
> >             you really want to use policy.json.
> >
> >             Alan
> >
> >                 cheers,
> >                 brian
> >
> >                 > Regards
> >                 > Adivya Singh
> >                 >
> >                 > On Sat, Jun 11, 2022 at 12:40 AM Adivya Singh wrote:
> >                 >
> >                 >     Hi Team,
> >                 >
> >                 >     I have a use case where I have to give a user
> >                 >     a restriction on updating the image properties
> >                 >     as a member.
> >                 >
> >                 >     I have created a policy JSON file and gave the
> >                 >     modify_image rule to the particular role, but
> >                 >     still it is not working:
> >                 >
> >                 >     "modify_image": "role:user" (this role is
> >                 >     created in OpenStack)
> >                 >
> >                 >     but still it is failing while updating
> >                 >     properties with a particular user assigned to
> >                 >     a role, as "access denied" and unauthorized
> >                 >     access
> >                 >
> >                 >     Regards
> >                 >     Adivya Singh

From openstack at garloff.de  Wed Jul  6 20:51:13 2022
From: openstack at garloff.de (Kurt Garloff)
Date: Wed, 6 Jul 2022 22:51:13 +0200
Subject: openstackclient: booting from image-created volume w/ delete_on_termination
In-Reply-To:
References:
Message-ID:

Hi Dmitriy,

thanks for your response!

It looks like this option is new in Wallaby, and it looks like it would address my use-case. I'll grab newer client utils and see whether it works.

--
Kurt

On 06.07.22 13:56, Dmitriy Rabotyagov wrote:
> [...]
From gmann at ghanshyammann.com  Wed Jul  6 21:12:31 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 06 Jul 2022 16:12:31 -0500
Subject: [all][tc] Technical Committee next weekly meeting on 7 July 2022 at 1500 UTC
In-Reply-To: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com>
References: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com>
Message-ID: <181d55b7d9f.d9489bd0147866.9174330688028119568@ghanshyammann.com>

Hello Everyone,

Below is the agenda for today's TC IRC meeting, scheduled at 1500 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for tomorrow's TC meeting ==

* Roll call
* Follow up on past action items
* Gate health check
** Bare 'recheck' state
*** https://etherpad.opendev.org/p/recheck-weekly-summary
* Checks on Zed cycle tracker
** https://etherpad.opendev.org/p/tc-zed-tracker
* CentOS-stream-9 testing stability and collaboration with centos-stream maintainers
* RBAC feedback in ops meetup
** https://etherpad.opendev.org/p/rbac-zed-ptg#L171
** https://review.opendev.org/c/openstack/governance/+/847418
* Create the Environmental Sustainability SIG
** https://review.opendev.org/c/openstack/governance-sigs/+/845336
* Open Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

-gmann

---- On Mon, 04 Jul 2022 11:27:21 -0500 Ghanshyam Mann wrote ---
> Hello Everyone,
>
> The technical Committee's next weekly meeting is scheduled for 7 July 2022, at 1500 UTC.
>
> If you would like to add topics for discussion, please add them to the below wiki page by
> Wednesday, 6 July at 2100 UTC.
>
> https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
>
> -gmann

From allison at openinfra.dev  Wed Jul  6 21:12:53 2022
From: allison at openinfra.dev (Allison Price)
Date: Wed, 6 Jul 2022 16:12:53 -0500
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To:
References:
Message-ID: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>

I wanted to follow up on this thread as well, as I know highlighting some of this work, and perhaps even doing a live demo on OpenInfra Live, was something that was discussed. Andy and Radoslaw - would this be something you would be interested in helping to move forward? If there are others that would like to help drive, please let me know.

Cheers,
Allison

> On Jul 4, 2022, at 3:33 AM, Radosław Piliszek wrote:
>
> Just a quick follow up - I was permitted to share a pre-published
> version of the article I was citing in my email from June 4th. [1]
> Please enjoy responsibly. :-)
>
> [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf
>
> Cheers,
> Radek
> -yoctozepto
>
> On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek wrote:
As I mentioned in the original message, the >>>> things this solution lacks are not something blocking for me. >>>> Regarding the approach to Guacamole, I know that it's preferable to >>>> have guacamole extension (that provides the dynamic inventory) >>>> developed rather than meddle with the internal database but I guess it >>>> is a good start. >>> >>> An even better approach would be something like the Guacozy project >>> (https://guacozy.readthedocs.io) >> >> I am not convinced. The project looks dead by now. [1] >> It offers a different UI which may appeal to certain users but I think >> sticking to vanilla Guacamole should do us right... For the time being >> at least. ;-) >> >>> They were able to use the Guacmole JavaScript libraries directly to >>> embed the HTML5 desktop within a React? app. I think this is a much >>> better approach, and I'd love to be able to do something similar in >>> the future. Would make the integration that much nicer. >> >> Well, as an example of embedding in the UI - sure. But it does not >> invalidate the need to modify Guacamole's database or write an >> extension to it so that it has the necessary creds. >> >>>> >>>> Any "quickstart setting up" would be awesome to have at this stage. As >>>> this is a Django app, I think I should be able to figure out the bits >>>> and bolts to get it up and running in some shape but obviously it will >>>> impede wider adoption. >>> >>> Yeah I agree. I'm in the process of documenting it, so I'll aim to get >>> a quickstart guide together. >>> >>> I have a private repo with code to set up a development environment >>> which uses Heat and Ansible - this might be the quickest way to get >>> started. I'm happy to share this with you privately if you like. >> >> I'm interested. Please share it. >> >>>> On the note of adoption, if I find it usable, I can provide support >>>> for it in Kolla [1] and help grow the project's adoption this way. >>> >>> Kolla could be useful. We're already using containers for this project >>> now, and I have a helm chart for deploying to k8s. >>> https://github.com/NeCTAR-RC/bumblebee-helm >> >> Nice! The catch is obviously that some orgs frown upon K8s because >> they lack the necessary know-how. >> Kolla by design avoids the use of K8s. OpenStack components are not >> cloud-native anyway so benefits of using K8s are diminished (yet it >> makes sense to use K8s if there is enough experience with it as it >> makes certain ops more streamlined and simpler this way). >> >>> Also, an important part is making sure the images are set up correctly >>> with XRDP, etc. Our images are built using Packer, and the config for >>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images >> >> Ack, thanks for sharing. >> >>>> Also, since this is OpenStack-centric, maybe you could consider >>>> migrating to OpenDev at some point to collaborate with interested >>>> parties using a common system? >>>> Just food for thought at the moment. >>> >>> I think it would be more appropriate to start a new project. I think >>> our codebase has too many assumptions about the underlying cloud. >>> >>> We inherited the code from another project too, so it's got twice the cruft. >> >> I see. Well, that's good to know at least. >> >>>> Writing to let you know I have also found the following related paper: [1] >>>> and reached out to its authors in the hope to enable further >>>> collaboration to happen. 
>>>> The paper is not open access so I have only obtained it for myself and
>>>> am unsure if licensing permits me to share, thus I also asked the
>>>> authors to share their copy (that they have copyrights to).
>>>> I have obviously let them know of the existence of this thread. ;-)
>>>> Let's stay tuned.
>>>>
>>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12
>>>
>>> This looks interesting. A collaboration would be good if there is
>>> enough interest in the community.
>>
>> I am looking forward to the collaboration happening. This could really
>> liven up the OpenStack VDI.
>>
>> [1] https://github.com/paidem/guacozy/
>>
>> -yoctozepto

From vishwanath.ne at gmail.com Wed Jul 6 21:24:15 2022
From: vishwanath.ne at gmail.com (Vishwanath)
Date: Wed, 6 Jul 2022 14:24:15 -0700
Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely
Message-ID:

Hello all,

I have upgraded OpenStack; we are currently running 14.1.0. I noticed the
fluentd container restarting indefinitely. The message I see under
/var/log/kolla/fluentd/fluentd.log is below. Any thoughts on how to fix
this? I noticed a similar issue back in 2017 in this post:
https://bugs.launchpad.net/kolla-ansible/+bug/1663126

*error log:*
*2022-07-06 21:20:42 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="'format' parameter is required"*

*full logs:*
2022-07-06 21:20:42 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.3'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.1.1'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grep' version '0.3.4'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-kafka' version '0.14.1'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-parser' version '0.6.1'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.3'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.2'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-s3' version '1.4.0'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5'
2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '1.11.2'
2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '0.12.43'
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cinder-api-access|cloudkitty-api-access|gnocchi-api-access|horizon-access|keystone-apache-admin-access|keystone-apache-public-access|monasca-api-access|octavia-api-access|placement-api-access)$/, "", "apache_access", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(aodh_wsgi_access|barbican_api_uwsgi_access|zun_api_wsgi_access|vitrage_wsgi_access)$/, "", "wsgi_access", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#,
/^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(sahara-api|sahara-engine)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(neutron-server|neutron-openvswitch-agent|neutron-ns-metadata-proxy|neutron-metadata-agent|neutron-l3-agent|neutron-dhcp-agent)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(magnum-conductor|magnum-api)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(keystone)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(heat-engine|heat-api|heat-api-cfn)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(glance-api)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cloudkitty-storage-init|cloudkitty-processor|cloudkitty-dbsync|cloudkitty-api)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(ceilometer-polling|ceilometer-agent-notification)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(barbican-api|barbican-worker|barbican-keystone-listener|barbican-db-manage|app)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(aodh-notifier|aodh-listener|aodh-evaluator|aodh-dbsync)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cyborg-api|cyborg-conductor|cyborg-agent)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cinder-api|cinder-scheduler|cinder-manage|cinder-volume|cinder-backup|privsep-helper)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(mistral-server|mistral-engine|mistral-executor)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(designate-api|designate-central|designate-manage|designate-mdns|designate-sink|designate-worker)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(manila-api|manila-data|manila-manage|manila-share|manila-scheduler)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(trove-api|trove-conductor|trove-manage|trove-taskmanager)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(murano-api|murano-engine)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(senlin-api|senlin-conductor|senlin-engine|senlin-health-manager)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(watcher-api|watcher-applier|watcher-db-manage|watcher-decision-engine)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#,
/^(freezer-api|freezer-api_access|freezer-manage)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(octavia-api|octavia-health-manager|octavia-housekeeping|octavia-worker)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(zun-api|zun-compute|zun-cni-daemon)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(kuryr-server)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(gnocchi-api|gnocchi-statsd|gnocchi-metricd|gnocchi-upgrade)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(ironic-api|ironic-conductor|ironic-inspector)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(tacker-server|tacker-conductor)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(vitrage-ml|vitrage-notifier|vitrage-graph|vitrage-persistor)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(blazar-api|blazar-manager)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(monasca-api|monasca-notification|monasca-persister|agent-collector|agent-forwarder|agent-statsd)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(masakari-engine|masakari-api)$/, "", "openstack_python", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /.+/, "", "unmatched", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload [#, /^\d{6}/, "", "infra.mariadb.mysqld_safe", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload [#, /^\d{4}-\d{2}-\d{2}/, "", "infra.mariadb.mysqld", nil]
*2022-07-06 21:20:42 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="'format' parameter is required"*

Thanks
Vish

From ces.eduardo98 at gmail.com Wed Jul 6 21:30:24 2022
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Wed, 6 Jul 2022 18:30:24 -0300
Subject: [manila] Stepping down from manila core
In-Reply-To:
References:
Message-ID:

On Tue, Jul 5, 2022 at 15:50, Rodrigo Barbieri <rodrigo.barbieri2010 at gmail.com> wrote:

> Hello fellow zorillas,
>
> It has been a long time since I started to hope every day I'd be able to
> dedicate more time to manila core activities, and so far that hasn't
> happened and I don't see it happening in the near foreseeable future. I had
> been following the meeting notes weekly until ~2 months ago, but I recently
> ended up dropping those as well.
>
> Therefore I am stepping down from the manila core role. I would like to
> thank everyone that I worked closely with from 2014 to 2019 on this
> project. I hold this project and all of you dear to my heart and I am
> extremely glad and grateful to have worked with you and met you on
> summits/PTGs, as the life memories around Manila are among the best I have
> from that period.
>
Rodrigo, thank you for your contributions in various ways to Manila during
all these years.
You helped us to shape many features and served as core for a long time. I
have worked with you closely for some time and I learned a lot from you. I
wish you all the best.

> If someday circumstances change, the manila project and its community will
> be ones I will be very happy to go back to working closely with again.
>
And we would be lucky to have you back!

> Kind regards,
> --
> Rodrigo Barbieri
> MSc Computer Scientist

Regards,
carloss

From vinhducnguyen1708 at gmail.com Thu Jul 7 04:27:21 2022
From: vinhducnguyen1708 at gmail.com (Vinh Nguyen Duc)
Date: Thu, 7 Jul 2022 11:27:21 +0700
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
Message-ID:

I have a problem with I/O performance on OpenStack block device HDD.

*Environment:*
OpenStack version: Ussuri
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64
- KVM: qemu-kvm-5.1.0-20.el8

*Ceph version:* Octopus 15.2.8-0.el8.x86_64
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64

In the Ceph cluster we have 2 classes:
- Bluestore
- HDD (only for cinder volumes)
- SSD (images, cinder volumes)

*Hardware:*
- Ceph-client: 2x10Gbps (bond) MTU 9000
- Ceph-replicate: 2x10Gbps (bond) MTU 9000

*VM:*
- swap off
- non-LVM

*Issue:*
When creating a VM on OpenStack using a Cinder volume on HDD, we get really
poor performance: 60-85 MB/s writes. Tests with ioping also show high
latency.

*Diagnostic:*
1. I have checked the performance between the compute host (OpenStack) and
Ceph: I created an RBD image (HDD class) and mounted it on the compute
host, and the performance is 300-400 MB/s.
=> So I think the problem is in the hypervisor.
But when I check performance on a VM using a Cinder volume on SSD, the
result equals the performance of an RBD (SSD) image mounted on a compute
host.
2. I have already configured disk_cachemodes="network=writeback" (and
enabled the rbd cache on the client) and also tested with
disk_cachemodes="none", but nothing is different.
3. iperf3 from the compute host to a random Ceph host still shows 20Gb of
traffic.
4. The compute host and Ceph hosts are connected to the same switch (layer 2).

Where else can I look for issues?
Please help me in this case.
Thank you.

From tobias.rydberg at cleura.com Thu Jul 7 07:54:06 2022
From: tobias.rydberg at cleura.com (Tobias Rydberg)
Date: Thu, 7 Jul 2022 09:54:06 +0200
Subject: [publiccloud-sig] A new start for Public Cloud SIG
Message-ID:

Hi everyone,

In Berlin it became clear that there is a big interest in restarting the
Public Cloud SIG. Thank you all for your contributions in that forum
session [0] and the interest in participating in the work of this SIG.

A lot of good ideas about what we should focus on were identified, with a
clear focus on interoperability and standardization, to make the experience
of using OpenStack as an end user even better. Standardization of images
and flavors (naming, metadata, etc.) is one of them; working closely with
the InterOp WG regarding the checks and governance of the OpenStack Powered
Program is another. The ultimate goal could be to reach a state where it is
possible to start to federate between the public clouds, but for that to be
possible on a more global scale we need to start with aligning the simple
things.

To kick this off, we will start with bi-weekly IRC meetings again, shape
the goals, and kick off some work towards them. Since we have an IRC
channel (#openstack-publiccloud) my suggestion is that we will start there.
Let's decide on suggestions for day and time for our bi-weekly meetings
during the kick-off meeting.

Kick-off meeting
===========
When: Wednesday 10th of August at 1400 UTC
Where: IRC in channel #openstack-publiccloud

I created an etherpad for our first meeting [1], feel free to add items to
the agenda or other suggestions on goals etc. that you might have prior to
the meeting.

Hope to see you on IRC in August!

[0] https://etherpad.opendev.org/p/berlin-summit-future-public-cloud-sig
[1] https://etherpad.opendev.org/p/publiccloud-sig-kickoff

BR,
Tobias Rydberg

From skaplons at redhat.com Thu Jul 7 07:59:26 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 07 Jul 2022 09:59:26 +0200
Subject: [all][TC] Bare rechecks stats update
Message-ID: <4937933.kRPvX5JM0G@p1>

Hi,

New stats from the last 7 days about bare rechecks in each team are available in [1]:

+--------------------+---------------+--------------+-------------------+
| Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
+--------------------+---------------+--------------+-------------------+
| skyline            | 1             | 1            | 100.0             |
| OpenStack Charms   | 10            | 10           | 100.0             |
| kolla              | 92            | 92           | 100.0 (was 76.19%)|
| sahara             | 21            | 21           | 100.0             |
| tacker             | 5             | 5            | 100.0             |
| keystone           | 17            | 17           | 100.0             |
| OpenStack-Helm     | 15            | 15           | 100.0 (was 70%)   |
| rally              | 1             | 1            | 100.0             |
| barbican           | 2             | 2            | 100.0             |
| Telemetry          | 3             | 3            | 100.0             |
| ec2-api            | 5             | 5            | 100.0             |
| zun                | 2             | 2            | 100.0             |
| kuryr              | 30            | 32           | 93.75 (was 92.5%) |
| cinder             | 89            | 102          | 87.25 (was 81.33%)|
| Puppet OpenStack   | 13            | 15           | 86.67 (was 85.29%)|
| horizon            | 17            | 20           | 85.0 (was 100%)   |
| Quality Assurance  | 26            | 32           | 81.25 (was 64.44%)|
| manila             | 70            | 89           | 78.65 (was 74%)   |
| ironic             | 77            | 101          | 76.24 (was 79.66%)|
| tripleo            | 230           | 318          | 72.33 (was 87.19%)|
| swift              | 5             | 7            | 71.43 (was 75%)   |
| glance             | 8             | 12           | 66.67 (was 50%)   |
| designate          | 9             | 14           | 64.29 (was 64.71%)|
| nova               | 10            | 18           | 55.56 (was 84.21%)|
| octavia            | 6             | 11           | 54.55 (was 16.67%)|
| OpenStackSDK       | 13            | 25           | 52.0 (was 74.19%) |
| neutron            | 26            | 54           | 48.15 (was 73.68%)|
| heat               | 3             | 8            | 37.5 (was 25%)    |
| requirements       | 1             | 3            | 33.33 (was 87.5%) |
| oslo               | 1             | 3            | 33.33 (was 0%)    |
| Release Management | 0             | 2            | 0.0               |
| OpenStackAnsible   | 0             | 34           | 0.0 (was 13.64%)  |
+--------------------+---------------+--------------+-------------------+

There are some teams which made progress since the last check (especially
*OpenStackAnsible*, where all rechecks were done with at least some comment
given - thx a lot for that :)).

[1] https://etherpad.opendev.org/p/recheck-weekly-summary

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From pierre at stackhpc.com Thu Jul 7 08:48:48 2022
From: pierre at stackhpc.com (Pierre Riteau)
Date: Thu, 7 Jul 2022 10:48:48 +0200
Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely
In-Reply-To:
References:
Message-ID:

Hello Vish,

Are you using a custom configuration for fluentd? Could you please share
your generated td-agent.conf?
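In the meantime, one pattern worth checking: Fluent::ConfigError with
"'format' parameter is required" is the kind of error raised by a
parser-style plugin whose mandatory format line is missing, and since the
last entries logged before the failure are the mariadb rewrite rules, a
filter loaded right after them would be my first suspect. As a sketch only
(the tag, key_name and regex below are assumptions, not your actual
config), a filter using the old fluent-plugin-parser syntax needs something
like:

  <filter infra.mariadb.*>
    @type parser
    key_name Payload
    format /^(?<Payload>.*)$/
  </filter>

If one of your custom config files carries such a block without the format
line, that would explain the restart loop.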
Best wishes,
Pierre Riteau (priteau)

On Wed, 6 Jul 2022 at 23:34, Vishwanath wrote:

> Hello all,
>
> I have upgraded OpenStack; we are currently running 14.1.0. I noticed the
> fluentd container restarting indefinitely. The message I see under
> /var/log/kolla/fluentd/fluentd.log is below. Any thoughts on how to fix
> this? I noticed a similar issue back in 2017 in this post:
> https://bugs.launchpad.net/kolla-ansible/+bug/1663126
>
> *error log:*
> *2022-07-06 21:20:42 +0000 [error]: config error
> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError
> error="'format' parameter is required"*
>
> [...]
From radoslaw.piliszek at gmail.com Thu Jul 7 09:26:48 2022
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Thu, 7 Jul 2022 11:26:48 +0200
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>
References:
Message-ID:

Hi Allison,

I am also in touch with folks at rz.uni-freiburg.de who are interested in
this topic. We might be able to gather a panel for discussion.

I think we need to introduce the topic properly with some presentations and
then move on to a discussion if time allows (I believe it will, as the time
slot is 1h and the presentations should not be overly detailed for an
introductory session).
Cheers,
Radek
-yoctozepto

On Wed, 6 Jul 2022 at 23:12, Allison Price wrote:
>
> I wanted to follow up on this thread as well, since I know that
> highlighting some of this work, and perhaps even doing a live demo on
> OpenInfra Live, was something that was discussed.
>
> Andy and Radoslaw - would this be something you would be interested in
> helping to move forward? If there are others that would like to help
> drive, please let me know.
>
> Cheers,
> Allison
>
> [...]
From eyalb1 at gmail.com Thu Jul 7 10:04:47 2022
From: eyalb1 at gmail.com (Eyal B)
Date: Thu, 7 Jul 2022 13:04:47 +0300
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To:
References:
Message-ID:

Hello,

Will the licenses be renewed? They ended on July 5.

Eyal

On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote:

> Sorry for the typo, it'd be July 5, 2022
>
> On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote:
>
>> Hi Swapnil,
>>
>> We're at July 2021 already - so they expire at the end of this month?
>>
>> From: Swapnil Kulkarni
>> Date: Tuesday, 6 July 2021 at 17:50
>> To: openstack-discuss at lists.openstack.org
>> Subject: [all] PyCharm Licenses Renewed till July 2021
>>
>> Hello,
>>
>> Happy to inform you the open source developer license for PyCharm has
>> been renewed for 1 additional year till July 2021.
>>
>> Best Regards,
>> Swapnil Kulkarni
>> coolsvap at gmail dot com

From geguileo at redhat.com Thu Jul 7 10:06:28 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 7 Jul 2022 12:06:28 +0200
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
In-Reply-To:
References:
Message-ID: <20220707100628.zdaikq5knnyzktxo@localhost>

On 07/07, Vinh Nguyen Duc wrote:
> I have a problem with I/O performance on OpenStack block device HDD.
> [...]
>
> 1. I have checked the performance between the compute host (OpenStack) and
> Ceph: I created an RBD image (HDD class) and mounted it on the compute
> host, and the performance is 300-400 MB/s.

Hi,

I probably won't be able to help you on the hypervisor side, but I have
a couple of questions that may help narrow down the issue:

- Are Cinder volumes using encryption?

- How did you connect the volume to the Compute Host, using krbd or
  rbd-nbd?

- Do both RBD images (Cinder and yours) have the same Ceph flags?

- Did you try connecting to the Compute Host the same RBD image created
  by Cinder instead of creating a new one?

Cheers,
Gorka.

> [...]

From geguileo at redhat.com Thu Jul 7 10:15:08 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 7 Jul 2022 12:15:08 +0200
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To: <20220706082014.qu34mzuhmb3uma2j@localhost>
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost>
Message-ID: <20220707101508.omgqkhyvlxmuvgry@localhost>

On 06/07, Gorka Eguileor wrote:
> On 05/07, Arnaud wrote:
> > Hey,
> >
> > Thanks for your answer!
> > OK, I understand the why ;) also because we hit some issues on our deployment.
> > So we increased the number of threads to 100, but we also enabled
> > deferred deletion (keeping in mind the quota usage downsides that it brings).
>
> Hi Arnaud,
>
> Deferred deletion should reduce the number of required native threads
> since delete calls will complete faster.
>
> > We also disabled the periodic task to compute usage and use the less
> > precise way from the db.
>
> Are you referring to the 'rbd_exclusive_cinder_pool' configuration
> option? Because that should already have the optimum default (True
> value).
>
> > First question here: do you think we are going down the right path?
> >
> > One thing we are not yet sure about is how to correctly calculate the
> > number of threads to use.
> > Should we do basic math with the number of deletions per minute? Or
> > should we take the number of volumes in the backend into account?
> > Something in the middle?
>
> Native threads on the RBD driver are not only used for deletion, they
> are used for *all* RBD calls.
>
> We haven't defined any particular method to calculate the optimum number
> of threads on a system, but I can think of 2 possible avenues to explore:
>
> - Performance testing: Run a set of tests with a high number of
>   concurrent requests and different operations and see how Cinder
>   performs. I wouldn't bother with individual attach and detach to VM
>   operations because those are noops on the Cinder side; creating a
>   volume from an image, with either different images or the cache
>   disabled, would be better.
>
> - Reporting native thread usage: To really know if the number of native
>   threads is sufficient or not, you could modify the Cinder volume
>   manager (and possibly also eventlet.tpool) to gather statistics on the
>   number of used/free native threads and the number of queued requests
>   that are waiting for a native thread to pick them up.

Hi Arnaud,

I just realized you should also be able to use Guru Meditation Reports [1]
to get the native threads that are executing at a given time.

Cinder uses multiple processes, one for the parent and one for each
individual backend, so the PID that should be used to send the signal is
not the parent's.

We can get GMR in the logs for all backends with:

  $ ps -C cinder-volume -o pid --no-headers | tail -n +2 | xargs sudo kill -SIGUSR2

Then go into the "Threads" section and see how many native threads there are.

Cheers,
Gorka.

[1]: https://docs.openstack.org/nova/queens/reference/gmr.html

> Cheers,
> Gorka.
>
> > Thanks!
> >
> > Arnaud
> >
> > On 5 July 2022 at 18:06:14 GMT+02:00, Rajat Dhasmana wrote:
> > >Hi Arnaud,
> > >
> > >We discussed this in last week's cinder meeting and unfortunately we
> > >haven't tested it thoroughly, so we don't have any performance numbers to
> > >share.
> > >What we can tell is the reason why RBD requires a higher number of native
> > >threads. RBD calls C code which could potentially block green threads,
> > >hence blocking the main operation; therefore all of the calls in RBD to
> > >execute operations are wrapped to use native threads. So depending on the
> > >operations we want to perform concurrently, we can set the value of
> > >backend_native_threads_pool_size for RBD.
> > >
> > >Thanks and regards
> > >Rajat Dhasmana
> > >
> > >On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:
> > >
> > >> Hey all,
> > >>
> > >> Is there any recommendation on the number of threads to use when using
> > >> RBD backend (option backend_native_threads_pool_size)?
> > >> The doc is saying that 20 is the default but it should be increased,
> > >> especially for the RBD driver, but up to which value?
> > >>
> > >> Is there anyone tuning this parameter in their openstack deployments?
> > >>
> > >> If yes, maybe we can add some recommendations on the openstack
> > >> large-scale doc about it?
> > >>
> > >> Cheers,
> > >>
> > >> Arnaud.

From dmellado at redhat.com Thu Jul 7 11:08:54 2022
From: dmellado at redhat.com (Daniel Mellado)
Date: Thu, 7 Jul 2022 13:08:54 +0200
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To:
References:
Message-ID: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com>

Just noticed that as well, thanks for bringing this up Eyal!

On 7/7/22 12:04, Eyal B wrote:
> Hello,
>
> Will the licenses be renewed? They ended on July 5.
>
> Eyal
> [...]
From smooney at redhat.com Thu Jul 7 11:32:43 2022
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 07 Jul 2022 12:32:43 +0100
Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures.
Message-ID:

Hi o/

It looks like neutron recently moved linuxbridge to be experimental:

Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is experimental and has to be explicitly enabled in 'cfg.CONF.experimental'

We do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it
is enabled in our job config as one of the configured mech drivers, even if
we don't install the agent.

I have not looked up which change in neutron caused this yet, but I'm going
to propose a patch to nova to disable it, and I likely need to fix os-vif
too. So if you see a POST_FAILURE in either the nova-next or
nova-ovs-hybrid-plug jobs (and/or look into it and see "die 2385 'Neutron
did not start'" in the Run devstack task summary), that is why it is
failing.

I'll update this thread once we have fixed the issue.

From smooney at redhat.com Thu Jul 7 11:58:43 2022
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 07 Jul 2022 12:58:43 +0100
Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures.
In-Reply-To:
References:
Message-ID:

I have filed a bug for this
https://bugs.launchpad.net/os-vif/+bug/1980948
and submitted two patches for os-vif and nova
https://review.opendev.org/q/topic:bug%252F1980948

Other projects might also be affected by the change introduced in
https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110

Until those are merged, please continue to hold off on rechecking nova or
os-vif CI failures.

On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote:
> Hi o/
>
> It looks like neutron recently moved linuxbridge to be experimental.
> [...]
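PS: if anyone actually needs the linuxbridge mech driver in their own jobs
or deployments in the meantime, the error above points at the new
'experimental' config group, so it should be possible to opt in explicitly
via neutron.conf. A minimal sketch (assuming the option is simply named
after the feature flagged in the error message):

  [experimental]
  linuxbridge = true

That only acknowledges the experimental status of the driver; it does not
make it any more supported.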
From smooney at redhat.com Thu Jul 7 12:26:20 2022
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 07 Jul 2022 13:26:20 +0100
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
In-Reply-To: <20220707100628.zdaikq5knnyzktxo@localhost>
References: <20220707100628.zdaikq5knnyzktxo@localhost>
Message-ID: <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com>

On Thu, 2022-07-07 at 12:06 +0200, Gorka Eguileor wrote:
> On 07/07, Vinh Nguyen Duc wrote:
> > I have a problem with I/O performance on OpenStack block device HDD.
> > [...]
>
> Hi,
>
> I probably won't be able to help you on the hypervisor side, but I have
> a couple of questions that may help narrow down the issue:
>
> - Are Cinder volumes using encryption?

If you are not using encryption, you might be encountering a librados issue
tracked downstream by https://bugzilla.redhat.com/show_bug.cgi?id=1897572
This is unfixable without updating to a new version of the Ceph client
libs.

> - How did you connect the volume to the Compute Host, using krbd or
>   rbd-nbd?

In Ussuri we still technically have the workaround options to use krbd, but
they are deprecated and were removed in Xena.
https://github.com/openstack/nova/blob/stable/ussuri/nova/conf/workarounds.py#L277-L329
In general, using these options might invalidate any support agreement you
may have with a vendor.

We are aware of at least one edge case currently where enabling this with
encrypted volumes breaks live migration, potentially causing data loss.
https://bugs.launchpad.net/nova/+bug/1939545
There is a backport in flight for the fix to Train
https://review.opendev.org/q/topic:bug%252F1939545
but it has only been backported to Wallaby so far, so it is not safe to
enable those options and use live migration today.

You should also be aware that to enable this option on a host you need to
drain the host first, then enable the option and cold migrate instances to
the host. Live migration between hosts with local attach enabled and
disabled is not supported.

If you want to disable it again in the future, which you will have to do to
upgrade to Xena, you need to cold migrate all instances again.

So if you are deploying your own version of Ceph and can move to a newer
version which has the librados performance enhancement feature, that is
operationally less painful than using these workarounds.

The only reason we developed this workaround to use krbd in nova was
because our hands were tied downstream, since we could not ship a new
version of Ceph but needed to support a release with this performance
limitation for multiple years.
So unless you are in a similar situation, upgrading Ceph and ensuring you
use the new versions of the Ceph libs, with a new enough QEMU to leverage
the performance enhancements, is the best option.

So with those disclaimers, you may want to consider evaluating those
workaround options, but keep in mind the limitations, and the fact that you
cannot live migrate until that bug is fixed, before considering using them
in production.

> - Do both RBD images (Cinder and yours) have the same Ceph flags?
>
> - Did you try connecting to the Compute Host the same RBD image created
>   by Cinder instead of creating a new one?
>
> [...]

From vinhducnguyen1708 at gmail.com Thu Jul 7 12:40:19 2022
From: vinhducnguyen1708 at gmail.com (Vinh Nguyen Duc)
Date: Thu, 7 Jul 2022 19:40:19 +0700
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
In-Reply-To: <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com>
References: <20220707100628.zdaikq5knnyzktxo@localhost> <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com>
Message-ID:

Thanks for your email.

We are not using encrypted volumes.

If this were a librados bug, I would expect throughput on SSD-backed
volumes to be affected as well, but I do not see any effect there. And the
performance of a Ceph HDD image mounted directly on the compute host is
still good.

We have already disabled debug in ceph.conf.

On Thu, 7 Jul 2022 at 19:26 Sean Mooney wrote:

> On Thu, 2022-07-07 at 12:06 +0200, Gorka Eguileor wrote:
> [...]
From geguileo at redhat.com Thu Jul 7 12:47:13 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Thu, 7 Jul 2022 14:47:13 +0200
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
In-Reply-To:
References: <20220707100628.zdaikq5knnyzktxo@localhost> <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com>
Message-ID: <20220707124713.gyy72zmzxelf2nt6@localhost>

On 07/07, Vinh Nguyen Duc wrote:
> Thanks for your email.
>
> We are not using encrypted volumes.
> [...]

Hi,

Did the Cinder volume that was performing poorly in the VM perform well
when manually connected directly to the Compute Host?

Cheers,
Gorka.
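PS: to make that an apples-to-apples comparison, the idea would be to run
the same write test against the exact RBD image that backs a throwaway
Cinder volume on the HDD backend, rather than against a freshly created
image. A rough sketch - the 'volumes' pool name and the fio parameters are
assumptions, adjust them to your setup, and only do this with the volume
detached from any VM, since the test overwrites its data:

  # The Cinder RBD driver names images volume-<cinder volume UUID>
  rbd -p volumes info volume-<UUID>

  # Map it on the compute host and run the same write test against it
  sudo rbd map volumes/volume-<UUID>
  sudo fio --name=seqwrite --filename=/dev/rbd0 --rw=write --bs=4M \
      --ioengine=libaio --iodepth=16 --direct=1 --size=4G

If that direct test also gives 300-400 MB/s, the slowdown is somewhere
between QEMU and librbd rather than in the cluster.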
> > you should also be aware that to enable this option on a host you need
> > to drain the host first, then enable the option and cold migrate
> > instances to the host. live migration between hosts with local attach
> > enabled and disabled is not supported.
> >
> > if you want to disable it again in the future, which you will have to do
> > to upgrade to xena, you need to cold migrate all instances again.
> >
> > so if you are deploying your own version of ceph and can move to a newer
> > version which has the librados performance enhancement, that is
> > operationally less painful than using these workarounds.
> >
> > the only reason we developed this workaround to use krbd in nova was
> > because our hands were tied downstream: we could not ship a new version
> > of ceph but needed to support a release with this performance limitation
> > for multiple years. so unless you are in a similar situation, upgrading
> > ceph, and ensuring you use the new versions of the ceph libs with qemu
> > and a new enough qemu to leverage the performance enhancements, is the
> > best option.
> >
> > so with those disclaimers you may want to consider evaluating those
> > workaround options, but keep in mind the limitations and the fact that
> > you cannot live migrate until that bug is fixed, before using them in
> > production.
> >
> > > - Do both RBD images (Cinder and yours) have the same Ceph flags?
> > >
> > > - Did you try connecting to the Compute Host the same RBD image created
> > > by Cinder instead of creating a new one?
> > >
> > > Cheers,
> > > Gorka.
> > >
> > > > => So I think the problem is in the hypervisor.
> > > > But when I check performance on a VM using a cinder SSD volume, the
> > > > result equals the performance of an RBD (SSD) image mounted on a
> > > > Compute host.
> > > > 2. I have already configured disk_cachemodes="network=writeback" (and
> > > > enabled the rbd client cache) and also tested with
> > > > disk_cachemodes="none", but nothing is different.
> > > > 3. iperf3 from the compute host to a random ceph host still shows
> > > > 20Gb of traffic.
> > > > 4. Compute Host and CEPH host are connected to the same switch (layer 2).
> > > > Where else can I look for issues?
> > > > Please help me in this case.
> > > > Thank you.
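As a practical follow-up to Gorka's question above about connecting the
Cinder-created RBD image directly to the compute host: one way to do that and
benchmark it is sketched below. The "volumes" pool name, the "cinder" Ceph
user, and the keyring path are assumptions that depend on the deployment; the
read-only mapping avoids touching the volume's data:

    # Map the Cinder volume's RBD image on the compute host (read-only so
    # the data is left untouched); pool, user and keyring are
    # deployment-specific.
    rbd map volumes/volume-<uuid> --id cinder \
        --keyring /etc/ceph/ceph.client.cinder.keyring --read-only

    # Sequential read benchmark against the mapped device.
    fio --name=seqread --filename=/dev/rbd0 --rw=read --bs=1M \
        --ioengine=libaio --direct=1 --runtime=60 --time_based

If the Cinder-created image also reads at 300-400 MB/s when mapped this way,
the slowdown is in the QEMU/librbd path rather than in the Ceph cluster.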
From papathanail at uom.edu.gr Thu Jul 7 13:02:42 2022
From: papathanail at uom.edu.gr (GEORGIOS PAPATHANAIL)
Date: Thu, 7 Jul 2022 16:02:42 +0300
Subject: Openstack instance is unreachable
In-Reply-To: References: <20220628105723.Horde.VEeg85NrLzpcrDTFq6t70GG@webmail.nde.ag>
Message-ID:

Any thoughts?

On Wed, 29 Jun 2022 at 20:06, GEORGIOS PAPATHANAIL <papathanail at uom.edu.gr> wrote:

> I did the installation based on this
> https://docs.openstack.org/install-guide/openstack-services.html (queens
> version)
>
> In my previous installation (using VMware instead of XenServer) the only
> thing I did was enable promiscuous mode in vSphere, and the VMs were
> reachable.
>
> I am using the ml2 plugin and linuxbridge (default installation).
>
> Does it need extra configuration?
>
> Thanks
>

--
*George Papathanail*
*Associate Researcher*
*Department of Applied Informatics*
*University of Macedonia*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tolga at etom.com.tr Thu Jul 7 13:29:27 2022
From: tolga at etom.com.tr (tolga at etom.com.tr)
Date: Thu, 07 Jul 2022 16:29:27 +0300
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To: References:
Message-ID: <662231657197586@mail.yandex.com.tr>

An HTML attachment was scrubbed...
URL:

From gouthampravi at gmail.com Thu Jul 7 15:10:00 2022
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Thu, 7 Jul 2022 20:40:00 +0530
Subject: [manila] Stepping down from manila core
In-Reply-To: References:
Message-ID:

On Thu, Jul 7, 2022 at 3:17 AM Carlos Silva wrote:
>
> On Tue, 5 Jul 2022 at 15:50, Rodrigo Barbieri wrote:
>>
>> Hello fellow zorillas,
>>
>> It has been a long time since I started hoping, every day, that I'd be
>> able to dedicate more time to manila core activities; so far that hasn't
>> happened, and I don't see it happening in the foreseeable future. I had
>> been following the meeting notes weekly until ~2 months ago, but I
>> recently ended up dropping those as well.
>>
>> Therefore I am stepping down from the manila core role. I would like to
>> thank everyone that I worked closely with from 2014 to 2019 on this
>> project. I hold this project and all of you dear to my heart, and I am
>> extremely glad and grateful to have worked with you and met you at
>> summits/PTGs; the memories around Manila are among the best I have from
>> that period.
>>
> Rodrigo, thank you for your contributions in various ways to Manila
> during all these years. You helped us shape many features and served as
> core for a long time. I have worked with you closely for some time and I
> learned a lot from you. I wish you all the best.
>>
>> If someday circumstances change, the manila project and its community
>> will be ones I will be very happy to go back to working closely with
>> again.
>>
> And we would be lucky to have you back!

++ What he said; Thank you Rodrigo!

>>
>> Kind regards,
>> --
>> Rodrigo Barbieri
>> MSc Computer Scientist
>
> Regards,
> carloss

From fungi at yuggoth.org Thu Jul 7 18:16:21 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 7 Jul 2022 18:16:21 +0000
Subject: [tc] Reminder: August 2022 OpenInfra Board Sync
Message-ID: <20220707181621.4hhm555beap4veie@yuggoth.org>

If you're interested in participating in an informal discussion between the
OpenStack TC, OpenInfra board members, and other interested community
collaborators, don't forget to mark your preferred dates and times by
Friday, 2022-07-15, so that I can let the board members know what our
collective availability looks like:

https://framadate.org/atdFRM8YeUtauSgC

If you don't know what this is about, see my earlier post for details (I
intentionally avoided replying to it in order to increase visibility for
the reminder):

https://lists.openstack.org/pipermail/openstack-discuss/2022-June/029352.html

--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From katonalala at gmail.com Thu Jul 7 20:01:46 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 7 Jul 2022 22:01:46 +0200
Subject: [neutron] Drivers meeting agenda - 08.07.2022.
Message-ID:

Hi Neutron Drivers,

The agenda for tomorrow's drivers meeting is at [1].

* [RFE] Adding the "rekey" parameter to the API for strongswan, like
  "dpd_action" (#link https://bugs.launchpad.net/neutron/+bug/1979044)
* [RFE] Firewall Group Ordering on Port Association
  (#link https://bugs.launchpad.net/neutron/+bug/1979816)

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda

See you at the meeting tomorrow.
Lajos Katona (lajoskatona)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com Thu Jul 7 22:06:42 2022
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 07 Jul 2022 23:06:42 +0100
Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures.
In-Reply-To: References:
Message-ID: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>

https://review.opendev.org/c/openstack/nova/+/848948 has now merged in nova,
and the os-vif changes have also merged, so the nova/os-vif check and gate
pipelines should now be unblocked and it is ok to recheck (with a reason) if
required.

On Thu, 2022-07-07 at 12:58 +0100, Sean Mooney wrote:
> i have filed a bug for this https://bugs.launchpad.net/os-vif/+bug/1980948
> and submitted two patches for os-vif and nova
> https://review.opendev.org/q/topic:bug%252F1980948
> other projects might also be affected by the change introduced in
> https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110
>
> until those are merged please continue to hold off rechecking nova or
> os-vif ci failures.
> On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote:
> > hi o/
> >
> > it looks like neutron recently moved linuxbridge to be experimental:
> > Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is
> > experimental and has to be explicitly enabled in 'cfg.CONF.experimental'
> >
> > we do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it
> > is enabled in our job config as one of the configured mech drivers, even
> > if we don't install the agent.
> >
> > i have not looked up what change in neutron caused this yet, but i'm
> > going to propose a patch to nova to disable it, and i likely need to fix
> > os-vif too. so if you see a POST_FAILURE in either the nova-next or
> > nova-ovs-hybrid-plug jobs (and/or look into it and see
> > "die 2385 'Neutron did not start'" in the Run devstack task summary),
> > that is why it is failing.
> >
> > i'll update this thread once we have fixed the issue.
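For anyone hitting the same "Neutron did not start" failure in a local
devstack: judging by the error message and the neutron commit linked above,
the opt-in lives in a new [experimental] section of neutron.conf. A minimal
sketch of what explicitly re-enabling the driver would look like:

    [experimental]
    # Explicit opt-in required since linuxbridge was marked experimental;
    # without it, neutron-server exits at startup when linuxbridge is one
    # of the configured mech drivers.
    linuxbridge = true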
From vegkeppnemairamek at gmail.com Fri Jul 8 01:28:15 2022
From: vegkeppnemairamek at gmail.com (Airamek)
Date: Fri, 8 Jul 2022 03:28:15 +0200
Subject: Nova-conductor is having trouble with AMQP authentication
Message-ID: <5e2ae288-9aa0-7c77-bbf9-47d44737a857@gmail.com>

Hello everyone!

I've installed OpenStack Ussuri on my home servers (one controller and one
compute node, both running OpenSuse Leap 15.3) based on the instructions in
the docs
(https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-ussuri).
Everything works until I launch an instance in Horizon. The instance gets
stuck on "Scheduling". Looking through the logs I came across
nova-conductor.log (on the controller), which contained the following error
(full log in the attachments):

2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.

I'm 100% sure my RabbitMQ username and password are set correctly in
nova.conf. I've included my nova.conf too, with the passwords removed.

I would be really thankful if someone could point me in the right direction!

P.S: Please excuse me for any grammatical or spelling mistakes, English is
not my first language.
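For context, the credentials nova-conductor presents to RabbitMQ come from
the transport_url in nova.conf, which in the install guide has this shape
(the "openstack" user, "controller" host, and RABBIT_PASS placeholder are the
guide's example values):

    [DEFAULT]
    # RabbitMQ connection string used by all nova services on this node.
    transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/

A 403 ACCESS_REFUSED like the one above can be cross-checked on the broker
side by testing the same credentials and vhost permissions with rabbitmqctl
on the controller:

    # Verify the user/password pair from the transport_url.
    rabbitmqctl authenticate_user openstack RABBIT_PASS
    # Confirm the user has permissions on the default vhost "/".
    rabbitmqctl list_permissions -p /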
-------------- next part --------------
2022-07-08 00:44:03.933 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Caught SIGTERM, stopping children
2022-07-08 00:44:03.945 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Waiting on 2 children to exit
2022-07-08 00:44:03.949 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Child 24716 killed by signal 15
2022-07-08 00:44:03.950 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Child 24717 killed by signal 15
2022-07-08 00:44:09.847 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Starting 2 workers
2022-07-08 00:44:09.865 25885 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 00:44:09.873 25830 WARNING oslo_config.cfg [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future.
2022-07-08 00:44:09.877 25886 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 00:44:09.886 25830 WARNING oslo_config.cfg [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future.
2022-07-08 00:48:08.510 25886 ERROR stevedore.extension [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw'
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=self.transport_options)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=transport_options)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=transport_options)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn:
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server purpose=purpose)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get()
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.create()
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose)
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server
File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server 2022-07-08 00:59:06.907 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Caught SIGTERM, stopping children 2022-07-08 00:59:06.912 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Waiting on 2 children to exit 2022-07-08 00:59:06.924 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Child 25886 killed by signal 15 2022-07-08 00:59:06.926 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Child 25885 killed by signal 15 2022-07-08 00:59:13.233 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Starting 2 workers 2022-07-08 00:59:13.250 32521 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:59:13.300 32522 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:59:13.296 32454 WARNING oslo_config.cfg [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 00:59:13.337 32454 WARNING oslo_config.cfg [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 2022-07-08 01:00:04.052 32521 ERROR stevedore.extension [req-57288225-27ca-413d-8b4e-1e4227b32748 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server [req-57288225-27ca-413d-8b4e-1e4227b32748 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 
2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server 
File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server 2022-07-08 01:04:53.610 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Caught SIGTERM, stopping children 2022-07-08 01:04:53.616 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Waiting on 2 children to exit 2022-07-08 01:04:53.627 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Child 32521 killed by signal 15 2022-07-08 01:04:53.630 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Child 32522 killed by signal 15 2022-07-08 01:04:59.323 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Starting 2 workers 2022-07-08 01:04:59.338 2766 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:04:59.350 2700 WARNING oslo_config.cfg [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 01:04:59.356 2767 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:04:59.365 2700 WARNING oslo_config.cfg [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 2022-07-08 01:05:33.816 2767 ERROR stevedore.extension [req-a6193109-e4b0-44b5-a02a-bfa50509dca2 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server [req-a6193109-e4b0-44b5-a02a-bfa50509dca2 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server 2022-07-08 01:11:17.263 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Caught SIGTERM, stopping children 2022-07-08 01:11:17.273 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Waiting on 2 children to exit 2022-07-08 01:11:17.274 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Child 2766 killed by signal 15 2022-07-08 01:11:17.278 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Child 2767 killed by signal 15 2022-07-08 01:11:24.003 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Starting 2 workers 2022-07-08 01:11:24.019 5582 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:11:24.028 5546 WARNING oslo_config.cfg [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 01:11:24.036 5583 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:11:24.038 5546 WARNING oslo_config.cfg [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 2022-07-08 01:11:49.934 5583 ERROR stevedore.extension [req-9559f892-d837-4c51-98a0-cb378efc1372 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server [req-9559f892-d837-4c51-98a0-cb378efc1372 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 
2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server 2022-07-08 01:17:11.819 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Caught SIGTERM, stopping children 2022-07-08 01:17:11.836 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Waiting on 2 children to exit 2022-07-08 01:17:11.869 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Child 5582 killed by signal 15 2022-07-08 01:17:11.870 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Child 5583 killed by signal 15 2022-07-08 01:18:47.201 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Starting 2 workers 2022-07-08 01:18:47.243 9104 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:18:47.275 9103 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:18:47.325 8868 WARNING oslo_config.cfg [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 01:18:47.390 8868 WARNING oslo_config.cfg [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 
2022-07-08 01:18:47.804 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:47.877 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:49.837 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:49.906 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:53.861 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:53.939 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:59.879 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:18:59.957 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:07.902 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:07.977 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:17.925 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:18.000 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:29.952 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:30.025 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:43.980 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] 
Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:19:44.053 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:00.010 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:00.082 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:18.044 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:18.114 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:38.079 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:20:38.148 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:00.115 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:00.185 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:24.156 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:24.222 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:50.196 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:21:50.261 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:22:18.239 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 
2022-07-08 01:22:18.304 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:22:26.391 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Caught SIGTERM, stopping children 2022-07-08 01:22:26.518 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Waiting on 2 children to exit 2022-07-08 01:22:26.523 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Child 9104 killed by signal 15 2022-07-08 01:22:26.589 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Child 9103 killed by signal 15 2022-07-08 01:25:10.902 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Starting 2 workers 2022-07-08 01:25:10.937 3134 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:25:10.939 3135 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:25:10.934 2697 WARNING oslo_config.cfg [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 01:25:10.955 2697 WARNING oslo_config.cfg [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 
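(The repeated '[glance] api_servers' deprecation warning above refers to nova's [glance] section; the replacement is the standard keystoneauth1 Adapter configuration, which resolves the image endpoint from the service catalog. A minimal sketch follows; the region and interface values are illustrative placeholders, not taken from this deployment.)

    [glance]
    # replaces: api_servers = http://<glance-host>:9292
    region_name = RegionOne
    valid_interfaces = internal
    # or pin a single endpoint explicitly:
    # endpoint_override = http://<glance-host>:9292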
2022-07-08 01:25:11.160 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:11.179 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:13.175 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:13.195 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:17.193 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:17.210 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:23.211 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:23.227 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:31.232 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:31.248 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:41.254 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:41.270 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:53.280 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:25:53.295 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:07.309 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] 
Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:07.322 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:23.339 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:23.351 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:41.370 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:26:41.382 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:01.402 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:01.414 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:23.437 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:23.449 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:47.473 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:27:47.485 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:28:13.511 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:28:13.521 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:28:41.556 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 
2022-07-08 01:28:41.563 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:29:11.601 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:29:11.606 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:29:43.654 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:29:43.654 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:30:15.700 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:30:15.700 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:30:47.755 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:30:47.756 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:31:19.804 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:31:19.804 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:31:51.855 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:31:51.855 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:23.901 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:23.902 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] 
Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:55.958 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:55.958 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:39:42.378 3135 ERROR stevedore.extension [req-d1dd37be-fbd5-40a1-82d6-cc794be38f3d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server [req-d1dd37be-fbd5-40a1-82d6-cc794be38f3d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 01:39:42.618 3135 
ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 
01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server 2022-07-08 02:03:03.123 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'. Trying again in 1 seconds.: amqp.exceptions.ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown' 2022-07-08 02:03:03.137 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer 2022-07-08 02:03:03.140 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer 2022-07-08 02:03:04.190 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:04.199 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . 
Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:04.229 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:05.213 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:05.245 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:06.210 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:07.229 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:07.263 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:07.598 3134 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer 2022-07-08 02:03:07.624 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:08.242 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:08.279 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:09.641 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:10.226 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 6 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:10.257 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. 
Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:10.301 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:13.486 3135 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer 2022-07-08 02:03:13.501 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:13.657 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:14.273 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:14.317 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError: 2022-07-08 02:03:15.287 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:15.331 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:15.518 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:16.247 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 8 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 02:03:17.354 3134 INFO oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46022. 2022-07-08 02:03:17.418 3135 INFO oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46026. 2022-07-08 02:03:24.285 3135 INFO oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46054. 
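(The retry cadence visible above, 2, 4, 6, ... seconds for fresh connections and a capped interval for re-established ones, is oslo.messaging's reconnect backoff rather than anything nova-specific. If it ever needs tuning, the relevant knobs live in nova.conf; the values below are, to my understanding, the library defaults and are shown only as an illustration.)

    [oslo_messaging_rabbit]
    # initial delay before the first reconnect attempt
    rabbit_retry_interval = 1
    # additional delay added on each failed attempt
    rabbit_retry_backoff = 2
    # ceiling on the delay between attempts
    rabbit_interval_max = 30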
2022-07-08 02:03:57.591 3134 ERROR stevedore.extension [req-da7fd51a-06a4-420b-9987-17c5ceb0047f 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server [req-da7fd51a-06a4-420b-9987-17c5ceb0047f 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 
02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 
02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server 2022-07-08 02:05:17.158 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Caught SIGTERM, stopping children 2022-07-08 02:05:17.264 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Waiting on 2 children to exit 2022-07-08 02:05:17.267 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Child 3134 killed by signal 15 2022-07-08 02:05:17.268 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Child 3135 killed by signal 15 2022-07-08 02:05:30.091 19820 INFO oslo_service.service [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Starting 2 workers 2022-07-08 02:05:30.113 19911 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 02:05:30.137 19820 WARNING oslo_config.cfg [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 02:05:30.165 19912 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 02:05:30.173 19820 WARNING oslo_config.cfg [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 
2022-07-08 02:06:19.381 19911 ERROR stevedore.extension [req-41a6d1e9-1ffd-496f-bf84-ddad3226456d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server [req-41a6d1e9-1ffd-496f-bf84-ddad3226456d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server 
purpose=purpose) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 02:06:19.617 19911 ERROR 
oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server -------------- next part -------------- [DEFAULT] enabled_apis = osapi_compute,metadata transport_url = rabbit://openstack:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu// my_ip = 10.66.33.14 rootwrap_config=/etc/nova/rootwrap.conf [api_database] connection = mysql+pymysql://nova:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu/nova_api [database] connection = mysql+pymysql://nova:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu/nova [api] auth_strategy = keystone [keystone_authtoken] www_authenticate_uri = http://laena.internal.teamorange.hu:5000/ auth_url = http://laena.internal.teamorange.hu:5000/ memcached_servers = controller:11211 auth_type = password project_domain_name = Default user_domain_name = Default project_name = service username = nova password = [ActualPasswordWouldBeHere] [vnc] enabled = true server_listen = $my_ip server_proxyclient_address = $my_ip [glance] api_servers = http://laena.internal.teamorange.hu:9292 [oslo_concurrency] lock_path = /var/run/nova [placement] region_name = RegionOne project_domain_name = Default project_name = service auth_type = password user_domain_name = Default auth_url = http://laena.internal.teamorange.hu:5000/v3 username = placement password = [ActualPasswordWouldBeHere] [neutron] auth_url = http://laena.internal.teamorange.hu:5000 auth_type = password project_domain_name = default user_domain_name = default region_name = RegionOne project_name = service username = neutron password = [ActualPasswordWouldBeHere] service_metadata_proxy = true metadata_proxy_shared_secret = [ActualPasswordWouldBeHere] [cinder] os_region_name = RegionOne [scheduler] discover_hosts_in_cells_interval = 300 From kkchn.in at gmail.com Fri Jul 8 05:47:23 2022 From: kkchn.in at gmail.com (KK CHN) Date: Fri, 8 Jul 2022 11:17:23 +0530 Subject: Cluster management tool for Openstack Message-ID: List, 1. Is there any specific tool or project component available to maintain multiple OpenStack Clusters 2. Is there any concept of Supervisory cluster ? Regards, Krish -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From katonalala at gmail.com Fri Jul 8 06:38:27 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Fri, 8 Jul 2022 08:38:27 +0200
Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures.
In-Reply-To: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>
References: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>
Message-ID:

Hi,
Thanks for the quick fix, and sorry for the inconvenience.
I pushed a similar patch for rally-openstack to remove linuxbridge from the mechanism_drivers list, as I see the neutron task has no need for linuxbridge:
https://review.opendev.org/c/openstack/rally-openstack/+/849069

Lajos

Sean Mooney wrote (on Fri, 8 Jul 2022 at 0:15):

> https://review.opendev.org/c/openstack/nova/+/848948 has now merged in
> nova and the os-vif changes have also merged,
> so the nova/os-vif check and gate pipelines should now be unblocked and
> it is OK to recheck (with a reason) if required.
>
> On Thu, 2022-07-07 at 12:58 +0100, Sean Mooney wrote:
> > I have filed a bug for this, https://bugs.launchpad.net/os-vif/+bug/1980948,
> > and submitted two patches for os-vif and nova:
> > https://review.opendev.org/q/topic:bug%252F1980948
> > Other projects might also be affected by the change introduced in
> > https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110
> >
> > Until those are merged, please continue to hold off rechecking nova or
> > os-vif CI failures.
> > On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote:
> > > hi o/
> > >
> > > It looks like neutron recently moved linuxbridge to be experimental:
> > > Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is experimental and has to be explicitly enabled in 'cfg.CONF.experimental'
> > >
> > > We do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it is enabled in our job config as one of the configured mech drivers, even if we don't install the agent.
> > >
> > > I have not looked up what change in neutron changed this yet, but I'm going to propose a patch to nova to disable it, and I likely need to fix os-vif too. So if you see a post failure in either the nova-next or nova-ovs-hybrid-plug jobs (and/or look into it and see " die 2385 'Neutron did not start'" in the Run devstack task summary), that is why it is failing.
> > >
> > > I'll update this thread once we have fixed the issue.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
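(For reference on the neutron error quoted above: once neutron marks a mechanism driver such as linuxbridge as experimental, neutron-server refuses to start with it configured unless the feature is explicitly switched on. A minimal sketch of the neutron.conf stanza implied by the 'cfg.CONF.experimental' message, assuming the option name matches the feature name as the error suggests:)

    [experimental]
    linuxbridge = true

(Removing linuxbridge from ml2's mechanism_drivers, as the patches above do, avoids needing this flag at all.)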
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dmellado at redhat.com  Fri Jul  8 07:56:41 2022
From: dmellado at redhat.com (Daniel Mellado)
Date: Fri, 8 Jul 2022 09:56:41 +0200
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com>
References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com>
Message-ID: <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>

So... no news about this? Should we just assume that the licenses will no longer be renewed? Bummer...

On 7/7/22 13:08, Daniel Mellado wrote:
> Just noticed that as well, thanks for bringing this up Eyal!
>
> On 7/7/22 12:04, Eyal B wrote:
>> Hello,
>>
>> Will the licenses be renewed? They ended on July 5.
>>
>> Eyal
>>
>> On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote:
>>
>>     Sorry for the typo, it'd be July 5, 2022.
>>
>>     On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote:
>>
>>         Hi Swapnil,
>>         We're at July 2021 already, so they expire at the end of this month?
>>
>>         From: Swapnil Kulkarni
>>         Date: Tuesday, 6 July 2021 at 17:50
>>         To: openstack-discuss at lists.openstack.org
>>         Subject: [all] PyCharm Licenses Renewed till July 2021
>>
>>         Hello,
>>
>>         Happy to inform you the open source developer license for PyCharm has been renewed for 1 additional year, till July 2021.
>>
>>         Best Regards,
>>         Swapnil Kulkarni
>>         coolsvap at gmail dot com

From katonalala at gmail.com  Fri Jul  8 11:43:27 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Fri, 8 Jul 2022 13:43:27 +0200
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To: <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>
References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>
Message-ID: 

Hi,

Thanks for asking, I have the same problem; my license also expired this week.

Lajos Katona

Daniel Mellado wrote (Fri, 8 Jul 2022, 10:04):
> [snip - earlier messages in this thread quoted in full]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lokendrarathour at gmail.com  Fri Jul  8 12:07:35 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Fri, 8 Jul 2022 17:37:35 +0530
Subject: [TripleO - Wallaby] Overcloud deployment failing with SSL
Message-ID: 

Hi Team,
We were trying to install the overcloud with SSL enabled; the undercloud is installed, but the overcloud install fails at step 4:

ERROR: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}

2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 600, in urlopen\n chunked=chunked)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, in _make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in connect\n _match_hostname(cert, self.assert_hostname or server_hostname)\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in _match_hostname\n match_hostname(cert, asserted_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % (hostname, dnsnames[0]))\nssl.CertificateError: hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, in urlopen\n _stacktrace=sys.exc_info()[2])\n File \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, in request\n resp = self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef',
port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 138, in _do_create_plugin\n authenticated=False)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 610, in get_discovery\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, in __init__\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 141, in run\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 517, in search_services\n services = self.list_services()\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 492, in list_services\n if self._is_client_version('identity', 2):\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 460, in _is_client_version\n client = getattr(self, client_name)\n File 
\"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 32, in _identity_client\n 'identity', min_version=2, max_version='3.latest')\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in get_endpoint\n return self.session.get_endpoint(auth or self.auth, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n allow_version_hack=allow_version_hack, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 206, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 161, in _do_create_plugin\n 'auth_url is correct. %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s PLAY RECAP ********************************************************************* localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 overcloud-controller-0 : ok=437 changed=104 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-1 : ok=436 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-2 : ok=431 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 undercloud : ok=28 changed=7 unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ in the deploy.sh: openstack overcloud deploy --templates \ -r /home/stack/templates/roles_data.yaml \ --networks-file /home/stack/templates/custom_network_data.yaml \ --vip-file /home/stack/templates/custom_vip_data.yaml \ --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml \ --network-config \ -e /home/stack/templates/environment.yaml \ -e 
/usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
-e /home/stack/templates/ironic-config.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
-e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
-e /home/stack/containers-prepare-parameter.yaml

The SSL-related environment files (enable-tls.yaml, tls-endpoints-public-ip.yaml and inject-trust-anchor.yaml) were passed with the following modifications:

tls-endpoints-public-ip.yaml:
Passed as is, with the defaults.

enable-tls.yaml:

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Enable SSL on OpenStack Public Endpoints
# description: |
#   Use this environment to pass in certificates for SSL deployments.
#   For these values to take effect, one of the tls-endpoints-*.yaml
#   environments must also be used.
parameter_defaults:
  # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
  # Type: boolean
  HorizonSecureCookies: True

  # Specifies the default CA cert to use if TLS is used for services in the public network.
  # Type: string
  PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'

  # The content of the SSL certificate (without Key) in PEM format.
  # Type: string
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----

  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----

  # The content of an SSL intermediate CA certificate in PEM format.
  # Type: string
  SSLIntermediateCertificate: ''

  # The content of the SSL Key in PEM format.
  # Type: string
  SSLKey: |
    -----BEGIN PRIVATE KEY-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END PRIVATE KEY-----

  # ******************************************************
  # Static parameters - these are values that must be
  # included in the environment but should not be changed.
  # ******************************************************
  # The filepath of the certificate as it will be stored in the controller.
  # Type: string
  DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem

  # *********************
  # End static parameters
  # *********************

inject-trust-anchor.yaml:

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Inject SSL Trust Anchor on Overcloud Nodes
# description: |
#   When using an SSL certificate signed by a CA that is not in the default
#   list of CAs, this environment allows adding a custom CA certificate to
#   the overcloud nodes.
parameter_defaults:
  # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
  # Mandatory. This parameter must be set by the user.
  # Type: string
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----

resource_registry:
  OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml

The procedure to create these files followed: Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)

The idea is to deploy the overcloud with SSL enabled, i.e. a self-signed, IP-based certificate, without DNS.

Any idea around this error would be of great help.

--
skype: lokendrarathour
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
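What the traceback shows is a hostname-verification mismatch: clients reach the public endpoints via the IPv6 VIP, while the certificate only names 'undercloud.com' / 'overcloud.example.com'. For a self-signed, IP-based, no-DNS setup, the VIP has to appear in the certificate's subjectAltName. A minimal, hedged sketch with OpenSSL (assuming OpenSSL >= 1.1.1 for -addext; the file names are illustrative, not from the thread):

    # self-signed certificate whose SAN covers the IPv6 VIP
    openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
      -keyout overcloud.key -out overcloud-cacert.pem \
      -subj "/CN=fd00:fd00:fd00:9900::2ef" \
      -addext "subjectAltName = IP:fd00:fd00:fd00:9900::2ef"

    # check what the certificate actually asserts
    openssl x509 -in overcloud-cacert.pem -noout -text | grep -A1 "Subject Alternative Name"

The resulting certificate and key would then be pasted into the SSLCertificate/SSLKey (and SSLRootCertificate) fields shown above.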
From swogatpradhan22 at gmail.com  Fri Jul  8 12:31:33 2022
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Fri, 8 Jul 2022 18:01:33 +0530
Subject: [TripleO - Wallaby] Overcloud deployment failing with SSL
In-Reply-To: 
References: 
Message-ID: 

What is the domain name you have specified in the undercloud.conf file? And what is the FQDN used for the generation of the SSL cert?

On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, wrote:
> [snip - original message with the full traceback, quoted in its entirety above]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From development at manuel-bentele.de  Fri Jul  8 12:39:49 2022
From: development at manuel-bentele.de (Manuel Bentele)
Date: Fri, 8 Jul 2022 14:39:49 +0200
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>
References: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>
Message-ID: 

Hi developers,

After an invitation from Radosław, I am joining the VDI discussion as well. I'm one of the authors of the OpenStack VDI paper [1] mentioned by Radosław. The paper is now freely accessible to everyone in the pre-published version [2]. Thank you, Radosław, for sharing.

The main repository mentioned in the OpenStack VDI paper is currently hosted on GitHub [3] and contains configurations for general testing and development purposes (for me and my colleagues). The repository also provides a little DevStack setup for some simple VDI tests based on the functionality that OpenStack already provides. If we aim for more active development in the future, we can also move the repository to another, more official location.

Our needs for a VDI solution are already outlined in our paper [1]. We are currently hosting part of the bwCloud [4] infrastructure and an HPC cluster [5]. Researchers and students can carry out computations there. However, we are receiving more and more requests from these people to offer virtual (powerful) graphical desktops for graphics(-intensive) applications, so that computation results can be visualized or other use cases can be covered (e.g. office work). That's why we have had the idea of a free VDI solution for a long time, and with our published paper we have now (hopefully) initiated further development. I'm looking forward to further exciting ideas and great collaboration with all interested people from the OpenStack community.

[1] https://doi.org/10.1007/978-3-030-99191-3_12
[2] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf
[3] https://github.com/bwLehrpool/osvdi
[4] https://www.bw-cloud.org/en/
[5] https://www.nemo.uni-freiburg.de

Regards,
Manuel

On 7/7/22 11:26, radoslaw.piliszek at gmail.com (Radosław Piliszek) wrote:
> Hi Allison,
>
> I am also in touch with folks at rz.uni-freiburg.de who are also
> interested in this topic. We might be able to gather a panel for
> discussion. I think we need to introduce the topic properly with some
> presentations and then move onto a discussion if time allows (I
> believe it will, as the time slot is 1h and the presentations should
> not be overly detailed for an introductory session).
>
> Cheers,
> Radek
> -yoctozepto
>
> On Wed, 6 Jul 2022 at 23:12, Allison Price wrote:
>> I wanted to follow up on this thread as well, as I know highlighting some of this work and perhaps even doing a live demo on OpenInfra Live was something that was discussed.
>>
>> Andy and Radoslaw - would this be something you would be interested in helping to move forward? If there are others that would like to help drive, please let me know.
>>
>> Cheers,
>> Allison
>>
>>> On Jul 4, 2022, at 3:33 AM, Radosław Piliszek wrote:
>>>
>>> Just a quick follow up - I was permitted to share a pre-published
>>> version of the article I was citing in my email from June 4th. [1]
>>> Please enjoy responsibly. :-)
>>>
>>> [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf
>>>
>>> Cheers,
>>> Radek
>>> -yoctozepto
>>>
>>> On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek wrote:
>>>> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote:
>>>>> Hi Radosław,
>>>> Hi Andy,
>>>>
>>>> Sorry for the late reply, been busy vacationing and then dealing with COVID-19.
>>>>
>>>>>> First of all, wow, that looks very interesting and in fact very much
>>>>>> what I'm looking for. As I mentioned in the original message, the
>>>>>> things this solution lacks are not blocking for me.
>>>>>> Regarding the approach to Guacamole, I know that it's preferable to
>>>>>> have a Guacamole extension (that provides the dynamic inventory)
>>>>>> developed rather than meddle with the internal database, but I guess it
>>>>>> is a good start.
>>>>> An even better approach would be something like the Guacozy project
>>>>> (https://guacozy.readthedocs.io)
>>>> I am not convinced. The project looks dead by now. [1]
>>>> It offers a different UI which may appeal to certain users, but I think
>>>> sticking to vanilla Guacamole should do us right... For the time being
>>>> at least. ;-)
>>>>
>>>>> They were able to use the Guacamole JavaScript libraries directly to
>>>>> embed the HTML5 desktop within a React app. I think this is a much
>>>>> better approach, and I'd love to be able to do something similar in
>>>>> the future. Would make the integration that much nicer.
>>>> Well, as an example of embedding in the UI - sure. But it does not
>>>> invalidate the need to modify Guacamole's database or write an
>>>> extension to it so that it has the necessary creds.
>>>>
>>>>>> Any "quickstart setting up" would be awesome to have at this stage. As
>>>>>> this is a Django app, I think I should be able to figure out the nuts
>>>>>> and bolts to get it up and running in some shape, but obviously it will
>>>>>> impede wider adoption.
>>>>> Yeah, I agree. I'm in the process of documenting it, so I'll aim to get
>>>>> a quickstart guide together.
>>>>>
>>>>> I have a private repo with code to set up a development environment
>>>>> which uses Heat and Ansible - this might be the quickest way to get
>>>>> started. I'm happy to share this with you privately if you like.
>>>> I'm interested. Please share it.
>>>>
>>>>>> On the note of adoption, if I find it usable, I can provide support
>>>>>> for it in Kolla [1] and help grow the project's adoption this way.
>>>>> Kolla could be useful. We're already using containers for this project
>>>>> now, and I have a helm chart for deploying to k8s.
>>>>> https://github.com/NeCTAR-RC/bumblebee-helm
>>>> Nice! The catch is obviously that some orgs frown upon K8s because
>>>> they lack the necessary know-how.
>>>> Kolla by design avoids the use of K8s. OpenStack components are not
>>>> cloud-native anyway, so the benefits of using K8s are diminished (yet it
>>>> makes sense to use K8s if there is enough experience with it, as it
>>>> makes certain ops more streamlined and simpler this way).
>>>>
>>>>> Also, an important part is making sure the images are set up correctly
>>>>> with XRDP, etc. Our images are built using Packer, and the config for
>>>>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images
>>>> Ack, thanks for sharing.
>>>>
>>>>>> Also, since this is OpenStack-centric, maybe you could consider
>>>>>> migrating to OpenDev at some point to collaborate with interested
>>>>>> parties using a common system?
>>>>>> Just food for thought at the moment.
>>>>> I think it would be more appropriate to start a new project. I think
>>>>> our codebase has too many assumptions about the underlying cloud.
>>>>>
>>>>> We inherited the code from another project too, so it's got twice the cruft.
>>>> I see. Well, that's good to know at least.
>>>>
>>>>>> Writing to let you know I have also found the following related paper: [1]
>>>>>> and reached out to its authors in the hope of enabling further
>>>>>> collaboration to happen.
>>>>>> The paper is not open access, so I have only obtained it for myself and
>>>>>> am unsure if licensing permits me to share, thus I also asked the
>>>>>> authors to share their copy (that they have the copyrights to).
>>>>>> I have obviously let them know of the existence of this thread. ;-)
>>>>>> Let's stay tuned.
>>>>>>
>>>>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12
>>>>> This looks interesting. A collaboration would be good if there is
>>>>> enough interest in the community.
>>>> I am looking forward to the collaboration happening. This could really
>>>> liven up the OpenStack VDI.
>>>>
>>>> [1] https://github.com/paidem/guacozy/
>>>>
>>>> -yoctozepto

From arne.wiebalck at cern.ch  Fri Jul  8 13:48:03 2022
From: arne.wiebalck at cern.ch (Arne Wiebalck)
Date: Fri, 8 Jul 2022 15:48:03 +0200
Subject: [baremetal-sig][ironic] Tue July 12, 2022, 2pm UTC: "Bare metal Kubernetes at G-Research"
Message-ID: 

Dear all,

The Bare Metal SIG will meet next week on Tue July 12, 2022, 2pm UTC, featuring a topic-of-the-day presentation by Scott Solkhon & Jamie Poole:

"Bare metal Kubernetes at G-Research"

Everyone is welcome; all details on how to join can be found on the SIG's etherpad:

https://etherpad.opendev.org/p/bare-metal-sig

Hope to see you there!

Arne

--
Arne Wiebalck
CERN IT

From skaplons at redhat.com  Fri Jul  8 14:21:44 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Fri, 08 Jul 2022 16:21:44 +0200
Subject: [neutron] CI meeting on Tuesday 12.07.2022 cancelled
Message-ID: <1740893.ZqjkciB8ac@p1>

Hi,

I will be on PTO next week. As there is nothing really critical in our CI currently, let's cancel next week's meeting. See You at the CI meeting on Tuesday, 19th of July.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL: 

From rdhasman at redhat.com  Fri Jul  8 15:23:06 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Fri, 8 Jul 2022 20:53:06 +0530
Subject: [cinder] Spec Freeze Exception Request
In-Reply-To: 
References: <20220701185518.6cid4paqrsnxnq6a@localhost>
Message-ID: 

Hi,

Today is the last day for the spec freeze exception, but there were some unexpected circumstances because of which the review process was delayed, so we will be granting extra days to this spec since it doesn't require many revisions to be in a mergeable state. We will provide an extra week, since we also have M-2 next week, which has the new volume and target driver merge deadline, so some of the review bandwidth will be shared there.
In conclusion, we are granting this spec a new deadline: to be merged by 15th July.

Thanks and regards
Rajat Dhasmana

On Sat, Jul 2, 2022 at 10:35 AM Rajat Dhasmana wrote:
> Thanks Gorka for spelling out all the changes made to the spec since the
> initial submission in Yoga; that will make the review experience much better.
> The quota issues have indeed been a pain point for OpenStack operators for
> a long time and it's really crucial to fix them.
> I'm OK with granting this an FFE (+1)
>
> On Sat, Jul 2, 2022 at 12:31 AM Gorka Eguileor wrote:
>> Hi,
>>
>> I would like to request a spec freeze exception for the new Cinder Quota
>> System spec [1].
>>
>> Analysis of the required spec changes needed to implement the second
>> quota driver, as agreed in the PTG/mid-cycle, was non-trivial.
>>
>> In the latest spec update I just pushed there are considerable changes:
>>
>> - General improvements to existing sections to increase readability.
>>
>> - Description of the additional driver and the reasons why we decided to
>> implement it.
>>
>> - Spelling out, through the entire spec, the similarities and differences of
>> both drivers.
>>
>> - Change in tracking of the reservations to accommodate the new driver.
>>
>> - Description of how switching from one driver to the other would work.
>>
>> - Updated the driver interface to accommodate the particularities of the
>> new driver.
>>
>> - Updated the performance section with a very brief summary of the
>> performance tests done with a code prototype.
>>
>> - Updated the phases of the effort as well as the work items.
>>
>> Cheers,
>> Gorka.
>>
>> [1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From elod.illes at est.tech  Fri Jul  8 15:53:07 2022
From: elod.illes at est.tech (Előd Illés)
Date: Fri, 8 Jul 2022 17:53:07 +0200
Subject: [release] Release countdown for week R-12, Jul 11 - 15
Message-ID: 

Development Focus
-----------------

The Zed-2 milestone is next week, on July 14th, 2022! Zed-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/zed/schedule.html for details.

General Information
-------------------

Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTLs and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed.

Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those which haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle.

Next week is also the deadline to freeze the contents of the final release. All new 'Zed' deliverables need to have a deliverable file in https://opendev.org/openstack/releases/src/branch/master/deliverables and need to have done a release by milestone-2.
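For illustration, such a release is requested by updating the project's deliverable file in the openstack/releases repository; a hedged sketch with placeholder values (field names follow the releases repository's usual schema, but check the repository's README for the authoritative format) might look like:

    # deliverables/zed/example-lib.yaml (illustrative placeholders only)
    launchpad: example-lib
    release-model: cycle-with-intermediary
    team: example-team
    releases:
      - version: 2.1.0
        projects:
          - repo: openstack/example-lib
            hash: 0123456789abcdef0123456789abcdef01234567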
Changes proposing those deliverables for inclusion in Zed have been posted; please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Zed, or -1 if you need one more cycle to be ready.

Upcoming Deadlines & Dates
--------------------------

Zed-2 Milestone: July 14th, 2022
Next PTG: October 17-20, 2022 (Columbus, Ohio)

Előd Illés
irc: elodilles

From james.slagle at gmail.com  Fri Jul  8 16:32:38 2022
From: james.slagle at gmail.com (James Slagle)
Date: Fri, 8 Jul 2022 12:32:38 -0400
Subject: [TripleO] PTL outage
Message-ID: 

Hello TripleO,

I'll be out on PTO for the next 2 weeks. Not that I'm typically needed :), but if anything urgent requires attention, please reach out to the kind folks directly in #tripleo.

--
-- James Slagle
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org  Fri Jul  8 16:50:34 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 8 Jul 2022 16:50:34 +0000
Subject: [all][tc][gerrit] Ownership of *-stable-maint groups
In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com>
Message-ID: <20220708165033.vbj35dgo6hjhoh2v@yuggoth.org>

On 2022-07-01 16:31:16 +0100 (+0100), Stephen Finucane wrote:
[...]
> The following projects are owned by 'stable-maint-core':
>
> * barbican-stable-maint
> * ceilometer-stable-maint
> * cinder-stable-maint
> * designate-stable-maint
> * glance-stable-maint
> * heat-stable-maint
> * horizon-stable-maint
> * ironic-stable-maint
> * keystone-stable-maint
> * manila-stable-maint
> * neutron-stable-maint
> * nova-stable-maint
> * oslo-stable-maint
> * python-openstackclient-stable-maint
> * sahara-stable-maint
> * swift-stable-maint
> * trove-stable-maint
> * zaqar-stable-maint
[...]

I've gone through and switched all 18 of these to self-owned just now, as requested in Ghanshyam's reply. Please let me know if you find any others which need similar treatment.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From gmann at ghanshyammann.com  Fri Jul  8 19:13:49 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 08 Jul 2022 14:13:49 -0500
Subject: [tc][qa][centos stream] CentOS-stream-9 testing stability and collaboration with CentOS-stream maintainers
Message-ID: <181df3b8755.10aa7593b285941.3522470925080375540@ghanshyammann.com>

Hello Everyone,

As you know, CentOS-stream-9 is in the Zed cycle testing runtime, and we are facing stability issues that keep us from making its testing voting (detach-volume and qemu issues are some of the known ones). Currently it is a non-voting job, which is not good for the long term, as non-voting job failures are mostly ignored.

We invited the CentOS Stream folks to discuss how to make this testing stable, how to improve communication between the two communities, and how to coordinate debugging. Alan, Brian, and Aleksandra from the CentOS Stream team joined the TC call on 7 July.

With the CentOS Stream model of shipping the latest package versions (libvirt, etc.), it is good to test and capture potential issues in advance, but at the same time there will be more failures than with other distros such as Ubuntu. Knowing about failures at an early stage is good, but we have a few challenges in triaging them.
Neither the OpenStack nor the CentOS team alone may be able to tell at first glance where the root cause of an issue lies, whether in an OpenStack component or in CentOS Stream, and the CentOS team has less experience in debugging devstack or tempest tests. And if such failures are frequent (as is currently the case), then having this as a voting job on every patch is another challenge for a smooth OpenStack development cycle.

We could not find one best solution to all these challenges, but we all agree that we need some coordinated debugging: an initial level of failure debugging on the OpenStack side, with the information then passed to the CentOS team for further debugging. That is one possible step forward, and we will see how it goes. Basically, we will be doing the following:

* If OpenStack members see a failure on CentOS Stream, do some initial debugging to determine which OpenStack component is failing and collect some log information.
* Report the bug, with all that information, at https://bugzilla.redhat.com/
* Contact the CentOS Stream team (you can address/ping Alan, Brian or Aleksandra) via:
** ML: https://lists.centos.org/mailman/listinfo/centos-devel
** IRC channels on Libera chat: #centos-devel and #centos-stream

Alan, Brian, Aleksandra, or any TC members, feel free to add anything I missed.

-gmann

From gmann at ghanshyammann.com  Fri Jul  8 19:22:16 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 08 Jul 2022 14:22:16 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 08 July 2022: Reading: 5 min
Message-ID: <181df4344ac.ed01c4bb286104.1709392448799473428@ghanshyammann.com>

Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* We had this week's meeting on 07 July. Most of the meeting discussions are summarized in this email. As it was a video meeting, the full recording is available @https://www.youtube.com/watch?v=zeJG6Mujalo&t=29s and brief logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-07-15.00.log.html
* The next TC weekly meeting will be on 14 July, Thursday, at 15:00 UTC; feel free to add topics to the agenda[1] by 13 July.

2. What we completed this week:
=========================
* Nothing specific this week.

3. Activities In progress:
==================
TC Tracker for Zed cycle
------------------------------
* The Zed tracker etherpad includes the TC working items[2]; two are completed and the other items are in progress.

Open Reviews
-----------------
* Three open reviews for ongoing activities[3].

CentOS-stream-9 testing stability and collaboration with centos-stream maintainers
---------------------------------------------------------------------------------------------------
We discussed this topic in the TC meeting and I summarized the discussion in a separate ML thread[4].

Create the Environmental Sustainability SIG
---------------------------------------------------
Not much update on this except what we discussed last time; feel free to add your feedback in the review[5].

Consistent and Secure Default RBAC
-------------------------------------------
We have had a good amount of discussion and review on the goal document updates[6], and I have updated the patch by resolving the review comments.

2021 User Survey TC Question Analysis
-----------------------------------------------
No update on this. The survey summary is up for review[7]. Feel free to check it and provide feedback.

Zed cycle Leaderless projects
----------------------------------
No updates on this.
Only the Adjutant project is leaderless/maintainer-less. We will check Adjutant's situation again on the ML and hope Braden will be ready with their company-side permission[8].

Fixing Zuul config error
----------------------------
Projects with Zuul config errors are requested to look into them and fix them, which should not take much time[9][10].

Project updates
-------------------
* None

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out to us in multiple ways:

1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[11].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15 UTC[12]
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/847413
[3] https://review.opendev.org/q/projects:openstack/governance+status:open
[4] https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029468.html
[5] https://review.opendev.org/c/openstack/governance-sigs/+/845336
[6] https://review.opendev.org/c/openstack/governance/+/847418
[7] https://review.opendev.org/c/openstack/governance/+/836888
[8] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html
[9] https://etherpad.opendev.org/p/zuul-config-error-openstack
[10] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html
[11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting

-gmann

From gmann at ghanshyammann.com  Fri Jul  8 23:06:07 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 08 Jul 2022 18:06:07 -0500
Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers
In-Reply-To: <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com>
References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com>
Message-ID: <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com>

---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
> Hi Braden,
>
> Please let us know about the status of your company's permission to maintain the project.
> As we are in Zed cycle development and there is no one to maintain/lead this project, we
> need to start thinking about the next steps mentioned in the leaderless project etherpad

Hi Braden,

We have not heard back from you on whether you can help maintain Adjutant. As it has had no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project list - https://review.opendev.org/c/openstack/governance/+/849153/1

-gmann

> - https://etherpad.opendev.org/p/zed-leaderless
>
> -gmann
>
> ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ----
> > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start full time. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June.
> > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:
> >
> > Hi,
> >
> > After the last PTL elections [1] the Adjutant project doesn't have any PTL. It also didn't have a PTL in the Yoga cycle already.
> > So this is a call for maintainers for Adjutant.
 > > If you are using it or are interested in it, and if you are willing to help
 > > maintain this project, please contact the TC members through this mailing list
 > > or directly on the #openstack-tc channel @OFTC. We can discuss the possibility
 > > of making someone PTL of the project, or moving the project to the Distributed
 > > Project Leadership [2] model.
 > >
 > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
 > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
 > >
 > > --
 > > Slawek Kaplonski
 > > Principal Software Engineer
 > > Red Hat
 >

From swogatpradhan22 at gmail.com  Sat Jul  9 05:54:41 2022
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Sat, 9 Jul 2022 11:24:41 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

I had faced a similar kind of issue: for an IP-based setup, you need to specify the
IP that you are going to use as the domain name. This error is showing up because
the SSL certificate is IP-based, but the FQDNs appear to be undercloud.com or
overcloud.example.com. For the undercloud, I think you can change this in
undercloud.conf.

And would it work if we set the CloudDomain parameter to the IP address for the
overcloud? It seems he has not specified the CloudDomain parameter, and
overcloud.example.com is the default domain for the overcloud.

On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, wrote:

> What is the domain name you have specified in the undercloud.conf file?
> And what is the fqdn name used for the generation of the SSL cert?
>
> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, wrote:
>
>> Hi Team,
>> We were trying to install overcloud with SSL enabled for which the UC is
>> installed, but OC install is getting failed at step 4:
>>
>> ERROR
>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
>> exact error", "rc": 1}
>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
>> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >> 600, in urlopen\n chunked=chunked)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >> in _make_request\n self._validate_conn(conn)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >> in _validate_conn\n conn.connect()\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >> connect\n _match_hostname(cert, self.assert_hostname or >> server_hostname)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >> send\n timeout=timeout\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >> increment\n raise MaxRetryError(_pool, url, error or >> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >> in request\n resp = self.send(prep, **send_kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >> send\n r = adapter.send(request, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 138, in _do_create_plugin\n authenticated=False)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 610, in get_discovery\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >> in get_discovery\n disc = Discover(session, url, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >> in __init__\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >> in get_version_data\n resp = session.get(url, headers=headers, >> authenticated=authenticated)\n File >> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >> in get\n return self.request(url, 'GET', **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >> request\n resp = send(**kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >> in _send_request\n raise >> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"\", line 102, in \n File \"\", line 94, in >> _ansiballz_main\n File \"\", line 40, in invoke_module\n File >> \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return >> _run_module_code(code, init_globals, run_name, mod_spec)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >> mod_name, mod_spec, pkg_name, script_name)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >> run_globals)\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 185, in \n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 181, in main\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >> line 407, in __call__\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 141, in run\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 517, in search_services\n services = self.list_services()\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 492, in list_services\n if self._is_client_version('identity', 2):\n >> File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 460, in _is_client_version\n client = getattr(self, client_name)\n >> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >> line 32, in _identity_client\n 'identity', min_version=2, >> max_version='3.latest')\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 271, in get_endpoint_data\n 
service_catalog = >> self.get_access(session).service_catalog\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 206, in get_auth_ref\n self._plugin = >> self._do_create_plugin(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >> versioned identity endpoints when attempting to authenticate. Please check >> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.271914 | 2.47s >> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.273659 | 2.47s >> >> PLAY RECAP >> ********************************************************************* >> localhost : ok=0 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=28 changed=7 unreachable=0 >> failed=1 skipped=3 rescued=0 ignored=0 >> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> >> in the deploy.sh: >> >> openstack overcloud deploy --templates \ >> -r /home/stack/templates/roles_data.yaml \ >> --networks-file /home/stack/templates/custom_network_data.yaml \ >> --vip-file /home/stack/templates/custom_vip_data.yaml \ >> --baremetal-deployment >> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >> --network-config \ >> -e /home/stack/templates/environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >> \ >> -e >> 
/usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml
>> \
>> -e
>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>> -e
>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>> -e /home/stack/containers-prepare-parameter.yaml
>>
>> The additional SSL-related lines above (highlighted in yellow in the
>> original mail) were passed with modifications:
>> tls-endpoints-public-ip.yaml:
>> Passed as-is, with the defaults.
>> enable-tls.yaml:
>>
>> # *******************************************************************
>> # This file was created automatically by the sample environment
>> # generator. Developers should use `tox -e genconfig` to update it.
>> # Users are recommended to make changes to a copy of the file instead
>> # of the original, if any customizations are needed.
>> # *******************************************************************
>> # title: Enable SSL on OpenStack Public Endpoints
>> # description: |
>> #   Use this environment to pass in certificates for SSL deployments.
>> #   For these values to take effect, one of the tls-endpoints-*.yaml
>> #   environments must also be used.
>> parameter_defaults:
>>   # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>   # Type: boolean
>>   HorizonSecureCookies: True
>>
>>   # Specifies the default CA cert to use if TLS is used for services in
>> the public network.
>>   # Type: string
>>   PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>
>>   # The content of the SSL certificate (without Key) in PEM format.
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED **
>>     -----END CERTIFICATE-----
>>
>>   SSLCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED **
>>     -----END CERTIFICATE-----
>>   # The content of an SSL intermediate CA certificate in PEM format.
>>   # Type: string
>>   SSLIntermediateCertificate: ''
>>
>>   # The content of the SSL Key in PEM format.
>>   # Type: string
>>   SSLKey: |
>>     -----BEGIN PRIVATE KEY-----
>>     ----*** CERTIFICATE LINES TRIMMED **
>>     -----END PRIVATE KEY-----
>>
>>   # ******************************************************
>>   # Static parameters - these are values that must be
>>   # included in the environment but should not be changed.
>>   # ******************************************************
>>   # The filepath of the certificate as it will be stored in the
>> controller.
>>   # Type: string
>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>
>>   # *********************
>>   # End static parameters
>>   # *********************
>>
>> inject-trust-anchor.yaml
>>
>> # *******************************************************************
>> # This file was created automatically by the sample environment
>> # generator. Developers should use `tox -e genconfig` to update it.
>> # Users are recommended to make changes to a copy of the file instead
>> # of the original, if any customizations are needed.
>> # *******************************************************************
>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>> # description: |
>> #   When using an SSL certificate signed by a CA that is not in the default
>> #   list of CAs, this environment allows adding a custom CA certificate to
>> #   the overcloud nodes.
>> parameter_defaults:
>>   # The content of a CA's SSL certificate file in PEM format. This is
>> evaluated on the client side.
>>   # Mandatory. This parameter must be set by the user.
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED **
>>     -----END CERTIFICATE-----
>>
>> resource_registry:
>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>
>>
>> The procedure to create such files was followed using:
>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>
>> The idea is to deploy the overcloud with SSL enabled, i.e. *a self-signed
>> IP-based certificate, without DNS*.
>>
>> Any idea around this error would be of great help.
>>
>> --
>> skype: lokendrarathour

From amotoki at gmail.com  Sat Jul  9 09:03:13 2022
From: amotoki at gmail.com (Akihiro Motoki)
Date: Sat, 9 Jul 2022 18:03:13 +0900
Subject: [heat][horizon] heat-dashboard-core maintenance
In-Reply-To: 
References: 
Message-ID: 

Hi,

> In addition, I would like to drop the following folks from heat-dashboard-core.
> They were active in heat-dashboard but they had no activities for over
> three years.
> If there is no objection, I will drop them next week.
> - Kazunori Shinohara
> - Keiichi Hikita
> - Xinni Ge

There is no objection to the above proposal. I have updated the
heat-dashboard-core members in Gerrit.

--
Akihiro Motoki (amotoki)

On Wed, Jun 22, 2022 at 6:03 PM Akihiro Motoki wrote:
>
> Hi,
>
> I added the heat-core gerrit group to heat-dashboard-core [1].
> The heat team and the horizon team agreed a while ago to maintain
> heat-dashboard together, and horizon-core was added to heat-dashboard-core.
> At that time heat-dashboard had active contributors, so heat-dashboard-core
> did not include the heat-core team. The active contributors in
> heat-dashboard-core have gone, and Rico is the only person from the heat
> team, so it is time to add heat-core to heat-dashboard-core.
>
> In addition, I would like to drop the following folks from heat-dashboard-core.
> They were active in heat-dashboard but they had no activities for over
> three years.
> If there is no objection, I will drop them next week.
> - Kazunori Shinohara
> - Keiichi Hikita
> - Xinni Ge
>
> [1] https://review.opendev.org/admin/groups/8803fcad46b4bce76ed436861474878f36e0a8e4,members
>
> Thanks,
> Akihiro Motoki (amotoki)

From bshephar at redhat.com  Fri Jul  8 21:21:11 2022
From: bshephar at redhat.com (Brendan Shephard)
Date: Sat, 9 Jul 2022 07:21:11 +1000
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Hey,

It looks like you have set the DNS name on the SSL certificate to
overcloud.example.com instead of the IP address, so the SSL cert validation
is failing:

Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef'
doesn't match 'overcloud.example.com'\",),))

Note point number 1 here:
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration

It's actually worded poorly. I don't believe IPs can be set for the common
name, and we need to use subjectAltName instead.
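As a quick check before regenerating anything (a hedged sketch: the certificate
path is the DeployedSSLCertificatePath from enable-tls.yaml above, and the live
probe assumes an openssl recent enough to accept bracketed IPv6 addresses in
-connect), you can confirm which names the certificate actually carries:

# Inspect the endpoint certificate on a controller: show the Subject (CN)
# and any subjectAltName entries
openssl x509 -in /etc/pki/tls/private/overcloud_endpoint.pem -noout -text \
    | grep -A1 -E 'Subject:|Subject Alternative Name'

# Or ask the live keystone endpoint what it is presenting
echo | openssl s_client -connect '[fd00:fd00:fd00:9900::2ef]:13000' 2>/dev/null \
    | openssl x509 -noout -subject

If neither the Subject CN nor the subjectAltName list contains
fd00:fd00:fd00:9900::2ef, keystoneauth will keep failing exactly as in the
traceback above. And once the two files shown below are recreated, the step that
actually puts the IP entry onto the certificate is the -extfile flag on the
signing command; something along these lines, with illustrative file names
(ca.crt/ca.key being whatever CA you inject via inject-trust-anchor.yaml):

# Create a CSR from the [req]/[dn] config below, then sign it with the
# v3.ext extensions so the SAN (IP.1) ends up on the certificate
openssl req -new -key overcloud.key -out overcloud.csr -config openssl.cnf
openssl x509 -req -in overcloud.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out overcloud.crt -days 365 -extfile v3.ext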
See below. So, when you create this file:

[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=AU
ST=Queensland
L=Brisbane
O=your-org
OU=admin
emailAddress=me@example.com
CN=openstack.example.com

Remove the CN= part from that file:

[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=AU
ST=Queensland
L=Brisbane
O=your-org
OU=admin
emailAddress=me@example.com

Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
IP.1=fd00:fd00:fd00:9900::2ef

On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan
wrote:

> What is the domain name you have specified in the undercloud.conf file?
> And what is the fqdn name used for the generation of the SSL cert?
>
> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour,
> wrote:
>
>> Hi Team,
>> We were trying to install overcloud with SSL enabled for which the UC is
>> installed, but OC install is getting failed at step 4:
>>
>> ERROR
>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
>> exact error", "rc": 1}
>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
>> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >> 600, in urlopen\n chunked=chunked)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >> in _make_request\n self._validate_conn(conn)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >> in _validate_conn\n conn.connect()\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >> connect\n _match_hostname(cert, self.assert_hostname or >> server_hostname)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >> send\n timeout=timeout\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >> increment\n raise MaxRetryError(_pool, url, error or >> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >> in request\n resp = self.send(prep, **send_kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >> send\n r = adapter.send(request, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 138, in _do_create_plugin\n authenticated=False)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 610, in get_discovery\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >> in get_discovery\n disc = Discover(session, url, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >> in __init__\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >> in get_version_data\n resp = session.get(url, headers=headers, >> authenticated=authenticated)\n File >> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >> in get\n return self.request(url, 'GET', **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >> request\n resp = send(**kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >> in _send_request\n raise >> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"\", line 102, in \n File \"\", line 94, in >> _ansiballz_main\n File \"\", line 40, in invoke_module\n File >> \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return >> _run_module_code(code, init_globals, run_name, mod_spec)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >> mod_name, mod_spec, pkg_name, script_name)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >> run_globals)\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 185, in \n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 181, in main\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >> line 407, in __call__\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 141, in run\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 517, in search_services\n services = self.list_services()\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 492, in list_services\n if self._is_client_version('identity', 2):\n >> File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 460, in _is_client_version\n client = getattr(self, client_name)\n >> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >> line 32, in _identity_client\n 'identity', min_version=2, >> max_version='3.latest')\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 271, in get_endpoint_data\n 
service_catalog = >> self.get_access(session).service_catalog\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 206, in get_auth_ref\n self._plugin = >> self._do_create_plugin(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >> versioned identity endpoints when attempting to authenticate. Please check >> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.271914 | 2.47s >> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.273659 | 2.47s >> >> PLAY RECAP >> ********************************************************************* >> localhost : ok=0 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=28 changed=7 unreachable=0 >> failed=1 skipped=3 rescued=0 ignored=0 >> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> >> in the deploy.sh: >> >> openstack overcloud deploy --templates \ >> -r /home/stack/templates/roles_data.yaml \ >> --networks-file /home/stack/templates/custom_network_data.yaml \ >> --vip-file /home/stack/templates/custom_vip_data.yaml \ >> --baremetal-deployment >> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >> --network-config \ >> -e /home/stack/templates/environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >> \ >> -e >> 
/usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >> -e /home/stack/containers-prepare-parameter.yaml >> >> Addition lines as highlighted in yellow were passed with modifications: >> tls-endpoints-public-ip.yaml: >> Passed as is in the defaults. >> enable-tls.yaml: >> >> # ******************************************************************* >> # This file was created automatically by the sample environment >> # generator. Developers should use `tox -e genconfig` to update it. >> # Users are recommended to make changes to a copy of the file instead >> # of the original, if any customizations are needed. >> # ******************************************************************* >> # title: Enable SSL on OpenStack Public Endpoints >> # description: | >> # Use this environment to pass in certificates for SSL deployments. >> # For these values to take effect, one of the tls-endpoints-*.yaml >> # environments must also be used. >> parameter_defaults: >> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >> # Type: boolean >> HorizonSecureCookies: True >> >> # Specifies the default CA cert to use if TLS is used for services in >> the public network. >> # Type: string >> PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >> >> # The content of the SSL certificate (without Key) in PEM format. >> # Type: string >> SSLRootCertificate: | >> -----BEGIN CERTIFICATE----- >> ----*** CERTICATELINES TRIMMED ** >> -----END CERTIFICATE----- >> >> SSLCertificate: | >> -----BEGIN CERTIFICATE----- >> ----*** CERTICATELINES TRIMMED ** >> -----END CERTIFICATE----- >> # The content of an SSL intermediate CA certificate in PEM format. >> # Type: string >> SSLIntermediateCertificate: '' >> >> # The content of the SSL Key in PEM format. >> # Type: string >> SSLKey: | >> -----BEGIN PRIVATE KEY----- >> ----*** CERTICATELINES TRIMMED ** >> -----END PRIVATE KEY----- >> >> # ****************************************************** >> # Static parameters - these are values that must be >> # included in the environment but should not be changed. >> # ****************************************************** >> # The filepath of the certificate as it will be stored in the >> controller. >> # Type: string >> DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem >> >> # ********************* >> # End static parameters >> # ********************* >> >> inject-trust-anchor.yaml >> >> # ******************************************************************* >> # This file was created automatically by the sample environment >> # generator. Developers should use `tox -e genconfig` to update it. >> # Users are recommended to make changes to a copy of the file instead >> # of the original, if any customizations are needed. >> # ******************************************************************* >> # title: Inject SSL Trust Anchor on Overcloud Nodes >> # description: | >> # When using an SSL certificate signed by a CA that is not in the >> default >> # list of CAs, this environment allows adding a custom CA certificate to >> # the overcloud nodes. >> parameter_defaults: >> # The content of a CA's SSL certificate file in PEM format. This is >> evaluated on the client side. >> # Mandatory. This parameter must be set by the user. 
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED **
>>     -----END CERTIFICATE-----
>>
>> resource_registry:
>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>
>>
>> The procedure to create such files was followed using:
>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>
>> The idea is to deploy the overcloud with SSL enabled, i.e. *a self-signed
>> IP-based certificate, without DNS*.
>>
>> Any idea around this error would be of great help.
>>
>> --
>> skype: lokendrarathour

From lokendrarathour at gmail.com  Sat Jul  9 04:29:41 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Sat, 9 Jul 2022 09:59:41 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Thanks Brendan for your input.
We do have this IP allocated, as stated.
Maybe we can pass a domain name to make this more predictable, but in that
case would we also need to do it the same way as you suggest?
Will try your and Swogat's suggestions.

Best Regards,
Lokendra

On Sat, 9 Jul 2022, 02:51 Brendan Shephard, wrote:

> Hey,
>
> It looks like you have set the dns name on the SSL certificate to
> overcloud.example.com instead of the IP address. So the SSL cert
> validation is failing.
>
> Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef'
> doesn't match 'overcloud.example.com'\",),))
>
> Note point number 1 here:
>
> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration
>
> It's actually worded poorly. I don't believe IP's can be set for the
> common name, and we need to use subjectAltName instead. See below:
>
> So, when you create this file:
>
> [req]
> default_bits = 2048
> prompt = no
> default_md = sha256
> distinguished_name = dn
>
> [dn]
> C=AU
> ST=Queensland
> L=Brisbane
> O=your-org
> OU=admin
> emailAddress=me@example.com
> CN=openstack.example.com
>
> Remove the CN= part from that file:
>
> [req]
> default_bits = 2048
> prompt = no
> default_md = sha256
> distinguished_name = dn
>
> [dn]
> C=AU
> ST=Queensland
> L=Brisbane
> O=your-org
> OU=admin
> emailAddress=me@example.com
>
> Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:
>
> authorityKeyIdentifier=keyid,issuer
> basicConstraints=CA:FALSE
> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
> subjectAltName = @alt_names
>
> [alt_names]
> IP.1=fd00:fd00:fd00:9900::2ef
>
> On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan
> wrote:
>
>> What is the domain name you have specified in the undercloud.conf file?
>> And what is the fqdn name used for the generation of the SSL cert?
>> >> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >> wrote: >> >>> Hi Team, >>> We were trying to install overcloud with SSL enabled for which the UC is >>> installed, but OC install is getting failed at step 4: >>> >>> ERROR >>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>> exact error", "rc": 1} >>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>> 600, in urlopen\n chunked=chunked)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>> in _make_request\n self._validate_conn(conn)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>> in _validate_conn\n conn.connect()\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>> connect\n _match_hostname(cert, self.assert_hostname or >>> server_hostname)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>> send\n timeout=timeout\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>> increment\n raise MaxRetryError(_pool, url, error or >>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>> in request\n resp = self.send(prep, **send_kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>> send\n r = adapter.send(request, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>> 
HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 138, in _do_create_plugin\n authenticated=False)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 610, in get_discovery\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>> in get_discovery\n disc = Discover(session, url, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>> in __init__\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>> in get_version_data\n resp = session.get(url, headers=headers, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>> in get\n return self.request(url, 'GET', **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>> request\n resp = send(**kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>> in _send_request\n raise >>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File \"\", line 102, in \n File \"\", line >>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>> mod_name, mod_spec, pkg_name, script_name)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>> run_globals)\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 185, in \n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 181, in main\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>> line 407, in __call__\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 141, in run\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 517, in search_services\n services = self.list_services()\n File >>> 
\"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>> File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>> line 32, in _identity_client\n 'identity', min_version=2, >>> max_version='3.latest')\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 271, in get_endpoint_data\n service_catalog = >>> self.get_access(session).service_catalog\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 206, in get_auth_ref\n self._plugin = >>> self._do_create_plugin(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>> versioned identity endpoints when attempting to authenticate. Please check >>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.271914 | 2.47s >>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.273659 | 2.47s >>> >>> PLAY RECAP >>> ********************************************************************* >>> localhost : ok=0 changed=0 unreachable=0 >>> failed=0 skipped=2 rescued=0 ignored=0 >>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>> failed=0 skipped=198 rescued=0 ignored=0 >>> undercloud : ok=28 changed=7 unreachable=0 >>> failed=1 skipped=3 rescued=0 ignored=0 >>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> >>> >>> in the deploy.sh: >>> >>> openstack overcloud deploy --templates \ >>> -r /home/stack/templates/roles_data.yaml \ >>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>> --baremetal-deployment >>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>> --network-config \ >>> -e /home/stack/templates/environment.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>> \ >>> -e /home/stack/templates/ironic-config.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>> -e /home/stack/containers-prepare-parameter.yaml >>> >>> Addition lines as highlighted in yellow were passed with modifications: >>> tls-endpoints-public-ip.yaml: >>> Passed as is in the defaults. >>> enable-tls.yaml: >>> >>> # ******************************************************************* >>> # This file was created automatically by the sample environment >>> # generator. 
Developers should use `tox -e genconfig` to update it. >>> # Users are recommended to make changes to a copy of the file instead >>> # of the original, if any customizations are needed. >>> # ******************************************************************* >>> # title: Enable SSL on OpenStack Public Endpoints >>> # description: | >>> # Use this environment to pass in certificates for SSL deployments. >>> # For these values to take effect, one of the tls-endpoints-*.yaml >>> # environments must also be used. >>> parameter_defaults: >>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>> # Type: boolean >>> HorizonSecureCookies: True >>> >>> # Specifies the default CA cert to use if TLS is used for services in >>> the public network. >>> # Type: string >>> PublicTLSCAFile: >>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>> >>> # The content of the SSL certificate (without Key) in PEM format. >>> # Type: string >>> SSLRootCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> >>> SSLCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> # The content of an SSL intermediate CA certificate in PEM format. >>> # Type: string >>> SSLIntermediateCertificate: '' >>> >>> # The content of the SSL Key in PEM format. >>> # Type: string >>> SSLKey: | >>> -----BEGIN PRIVATE KEY----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END PRIVATE KEY----- >>> >>> # ****************************************************** >>> # Static parameters - these are values that must be >>> # included in the environment but should not be changed. >>> # ****************************************************** >>> # The filepath of the certificate as it will be stored in the >>> controller. >>> # Type: string >>> DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem >>> >>> # ********************* >>> # End static parameters >>> # ********************* >>> >>> inject-trust-anchor.yaml >>> >>> # ******************************************************************* >>> # This file was created automatically by the sample environment >>> # generator. Developers should use `tox -e genconfig` to update it. >>> # Users are recommended to make changes to a copy of the file instead >>> # of the original, if any customizations are needed. >>> # ******************************************************************* >>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>> # description: | >>> # When using an SSL certificate signed by a CA that is not in the >>> default >>> # list of CAs, this environment allows adding a custom CA certificate >>> to >>> # the overcloud nodes. >>> parameter_defaults: >>> # The content of a CA's SSL certificate file in PEM format. This is >>> evaluated on the client side. >>> # Mandatory. This parameter must be set by the user. >>> # Type: string >>> SSLRootCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> >>> resource_registry: >>> OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml >>> >>> >>> >>> >>> The procedure to create such files was followed using: >>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>> >>> >>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed IP-based >>> certificate, without DNS. * >>> >>> Any idea around this error would be of great help. 
>>>
>>> --
>>> skype: lokendrarathour
>>>
>>>

From bshephar at redhat.com  Sat Jul  9 05:46:00 2022
From: bshephar at redhat.com (Brendan Shephard)
Date: Sat, 9 Jul 2022 15:46:00 +1000
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Hey,

I personally use DNS names. I updated that documentation, so that is
essentially exactly what I'm using in my environment. I just pasted in
exactly what I have in my files and changed the domain names to
example.com. So what we have in that documentation should work with DNS
names. I also made a video about this:
https://www.youtube.com/watch?v=FmO6n1fUiYU

I believe the only difference when using IPs instead of domain names is
that you can't use the common name (CN) field.

Brendan Shephard
Software Engineer
Red Hat APAC
193 N Quay
Brisbane City QLD 4000

On Sat, Jul 9, 2022 at 2:30 PM Lokendra Rathour wrote:

> Thanks Brandon for your input.
> We have this IP as stated getting allocated.
> Maybe we can pass domain name to get this more predictable.
> But in that case also we would need to do the same way as you suggest ?
> Will try your and Swogat's suggestions.
>
> Best Regards,
> Lokendra
>
> On Sat, 9 Jul 2022, 02:51 Brendan Shephard, wrote:
>
>> Hey,
>>
>> It looks like you have set the dns name on the SSL certificate to
>> overcloud.example.com instead of the IP address. So the SSL cert
>> validation is failing.
>>
>> Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef'
>> doesn't match 'overcloud.example.com'\",),))
>>
>> Note point number 1 here:
>>
>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration
>>
>> It's actually worded poorly. I don't believe IP's can be set for the
>> common name, and we need to use subjectAltName instead. See below:
>>
>> So, when you create this file:
>>
>> [req]
>> default_bits = 2048
>> prompt = no
>> default_md = sha256
>> distinguished_name = dn
>>
>> [dn]
>> C=AU
>> ST=Queensland
>> L=Brisbane
>> O=your-org
>> OU=admin
>> emailAddress=me@example.com
>> CN=openstack.example.com
>>
>> Remove the CN= part from that file:
>>
>> [req]
>> default_bits = 2048
>> prompt = no
>> default_md = sha256
>> distinguished_name = dn
>>
>> [dn]
>> C=AU
>> ST=Queensland
>> L=Brisbane
>> O=your-org
>> OU=admin
>> emailAddress=me@example.com
>>
>> Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:
>>
>> authorityKeyIdentifier=keyid,issuer
>> basicConstraints=CA:FALSE
>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
>> subjectAltName = @alt_names
>>
>> [alt_names]
>> IP.1=fd00:fd00:fd00:9900::2ef
>>
>> On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan
>> wrote:
>>
>>> What is the domain name you have specified in the undercloud.conf file?
>>> And what is the fqdn name used for the generation of the SSL cert?
>>> >>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >>> wrote: >>> >>>> Hi Team, >>>> We were trying to install overcloud with SSL enabled for which the UC >>>> is installed, but OC install is getting failed at step 4: >>>> >>>> ERROR >>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>>> exact error", "rc": 1} >>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>> 600, in urlopen\n chunked=chunked)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>> in _make_request\n self._validate_conn(conn)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>> in _validate_conn\n conn.connect()\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>> connect\n _match_hostname(cert, self.assert_hostname or >>>> server_hostname)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>> send\n timeout=timeout\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>> increment\n raise MaxRetryError(_pool, url, error or >>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>> send\n r = adapter.send(request, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>> send\n raise SSLError(e, 
request=request)\nrequests.exceptions.SSLError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>> in get_discovery\n disc = Discover(session, url, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>> in __init__\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>> in get_version_data\n resp = session.get(url, headers=headers, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>> request\n resp = send(**kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>> in _send_request\n raise >>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File \"\", line 102, in \n File \"\", line >>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>> run_globals)\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 185, in \n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 181, in main\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>> line 407, in __call__\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 141, in run\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 517, in 
search_services\n services = self.list_services()\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>> File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>> line 32, in _identity_client\n 'identity', min_version=2, >>>> max_version='3.latest')\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 271, in get_endpoint_data\n service_catalog = >>>> self.get_access(session).service_catalog\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 206, in get_auth_ref\n self._plugin = >>>> self._do_create_plugin(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>> versioned identity endpoints when attempting to authenticate. Please check >>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.271914 | 2.47s >>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.273659 | 2.47s >>>> >>>> PLAY RECAP >>>> ********************************************************************* >>>> localhost : ok=0 changed=0 unreachable=0 >>>> failed=0 skipped=2 rescued=0 ignored=0 >>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>> failed=0 skipped=198 rescued=0 ignored=0 >>>> undercloud : ok=28 changed=7 unreachable=0 >>>> failed=1 skipped=3 rescued=0 ignored=0 >>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> >>>> >>>> in the deploy.sh: >>>> >>>> openstack overcloud deploy --templates \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>> --baremetal-deployment >>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>> --network-config \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> >>>> Addition lines as highlighted in yellow were passed with modifications: >>>> tls-endpoints-public-ip.yaml: >>>> Passed as is in the defaults. 
>>>> enable-tls.yaml:
>>>>
>>>> # *******************************************************************
>>>> # This file was created automatically by the sample environment
>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>> # Users are recommended to make changes to a copy of the file instead
>>>> # of the original, if any customizations are needed.
>>>> # *******************************************************************
>>>> # title: Enable SSL on OpenStack Public Endpoints
>>>> # description: |
>>>> # Use this environment to pass in certificates for SSL deployments.
>>>> # For these values to take effect, one of the tls-endpoints-*.yaml
>>>> # environments must also be used.
>>>> parameter_defaults:
>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>>> # Type: boolean
>>>> HorizonSecureCookies: True
>>>>
>>>> # Specifies the default CA cert to use if TLS is used for services in
>>>> the public network.
>>>> # Type: string
>>>> PublicTLSCAFile:
>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>>>
>>>> # The content of the SSL certificate (without Key) in PEM format.
>>>> # Type: string
>>>> SSLRootCertificate: |
>>>> -----BEGIN CERTIFICATE-----
>>>> ----*** CERTIFICATE LINES TRIMMED **
>>>> -----END CERTIFICATE-----
>>>>
>>>> SSLCertificate: |
>>>> -----BEGIN CERTIFICATE-----
>>>> ----*** CERTIFICATE LINES TRIMMED **
>>>> -----END CERTIFICATE-----
>>>> # The content of an SSL intermediate CA certificate in PEM format.
>>>> # Type: string
>>>> SSLIntermediateCertificate: ''
>>>>
>>>> # The content of the SSL Key in PEM format.
>>>> # Type: string
>>>> SSLKey: |
>>>> -----BEGIN PRIVATE KEY-----
>>>> ----*** CERTIFICATE LINES TRIMMED **
>>>> -----END PRIVATE KEY-----
>>>>
>>>> # ******************************************************
>>>> # Static parameters - these are values that must be
>>>> # included in the environment but should not be changed.
>>>> # ******************************************************
>>>> # The filepath of the certificate as it will be stored in the
>>>> controller.
>>>> # Type: string
>>>> DeployedSSLCertificatePath:
>>>> /etc/pki/tls/private/overcloud_endpoint.pem
>>>>
>>>> # *********************
>>>> # End static parameters
>>>> # *********************
>>>>
>>>> inject-trust-anchor.yaml
>>>>
>>>> # *******************************************************************
>>>> # This file was created automatically by the sample environment
>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>> # Users are recommended to make changes to a copy of the file instead
>>>> # of the original, if any customizations are needed.
>>>> # *******************************************************************
>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>> # description: |
>>>> # When using an SSL certificate signed by a CA that is not in the
>>>> default
>>>> # list of CAs, this environment allows adding a custom CA certificate
>>>> to
>>>> # the overcloud nodes.
>>>> parameter_defaults:
>>>> # The content of a CA's SSL certificate file in PEM format. This is
>>>> evaluated on the client side.
>>>> # Mandatory. This parameter must be set by the user. 
>>>> # Type: string
>>>> SSLRootCertificate: |
>>>> -----BEGIN CERTIFICATE-----
>>>> ----*** CERTIFICATE LINES TRIMMED **
>>>> -----END CERTIFICATE-----
>>>>
>>>> resource_registry:
>>>> OS::TripleO::NodeTLSCAData:
>>>> ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>
>>>> The procedure to create these files was followed from:
>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>>
>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed,
>>>> IP-based certificate, without DNS*.
>>>>
>>>> Any idea around this error would be of great help.
>>>>
>>>> --
>>>> skype: lokendrarathour
>>>>
>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jul 9 13:26:36 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 9 Jul 2022 13:26:36 +0000 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II Message-ID: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org>

It became apparent in a Python community discussion[0] yesterday that lockfile has been designated as a "critical project" by PyPI, even though it was effectively abandoned in 2010. Because our release automation account is listed as one of its maintainers, I looked into whether we still need it for anything...

For those who have been around here a while, you may recall that the OpenStack project temporarily assumed maintenance[1] of lockfile in late 2014 and uploaded a few new releases for it, as a stop-gap until we could replace our uses with oslo.concurrency. That work was completed[2] early in the Liberty development cycle.

Unfortunately for us, that's not the end of the story. I looked yesterday expecting to see that we've not needed lockfile for 7 years, and was disappointed to discover it's still in our constraints list. Why? After much wailing and gnashing of teeth and manually installing multiple bisections of the requirements list, I narrowed it down to one dependency: ansible-runner.

Apparently, ansible-runner currently depends[3] on python-daemon, which still has a dependency on lockfile[4]. Our uses of ansible-runner seem to be pretty much limited to TripleO repositories (hence tagging them in the subject), so it's possible they could find an alternative to it and solve this dilemma. Optionally, we could try to help the ansible-runner or python-daemon maintainers with new implementations of the problem dependencies as a way out.

Whatever path we take, we're long overdue. The reasons we moved off lockfile ages ago are still there, and the risk to us has only continued to increase in the meantime. I'm open to suggestions, but we really ought to make sure we have it out of our constraints list by the Zed release.

[0] https://discuss.python.org/t/17219
[1] https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html
[2] https://review.openstack.org/151224
[3] https://github.com/ansible/ansible-runner/issues/379
[4] https://pagure.io/python-daemon/issue/42
-- Jeremy Stanley
-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: 
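As a footnote to the dependency archaeology above: one lighter-weight way to trace such a chain than bisecting the requirements list is a reverse dependency query with pipdeptree. This is not from the thread, just an illustrative sketch run inside a virtualenv that has the constrained packages installed:

    pip install pipdeptree
    # list the installed packages that (transitively) require lockfile
    pipdeptree --reverse --packages lockfile

In an environment like the one described above, this would surface python-daemon as the direct consumer of lockfile, with ansible-runner sitting above it.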
From kdhall at binghamton.edu Sat Jul 9 21:28:32 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sat, 9 Jul 2022 17:28:32 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga Message-ID:

Hello,

I've been told that OpenStack now uses journald rather than rsyslog, but the Yoga docs still show a log aggregation host. Although I still prefer rsyslog, what I really prefer is to have logs for all hosts in the cluster collected in one place. How do I configure this for a fresh installation? Is it reasonable to assign this duty to an infrastructure host?

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall at binghamton.edu
-------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Sat Jul 9 21:40:31 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sat, 9 Jul 2022 17:40:31 -0400 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? Message-ID:

Hello.

I'm preparing to do my first deployment (of Yoga) on real hardware. The documentation regarding br-vlan was hard for me to understand. Could I get a clarification on what to do with this?

Note: my intended use case is as an academic/instructional environment. I'm thinking along the lines of treating each student as a separate tenant that would be preconfigured with a templated set of VMs appropriate to the course content.

Any other thoughts regarding this scenario would also be greatly appreciated.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall at binghamton.edu
-------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Sat Jul 9 23:36:04 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Sat, 9 Jul 2022 18:36:04 -0500 Subject: [event] OpenInfradays Mexico 2022 (Virtual) In-Reply-To: References: Message-ID:

Hey hey community! Don't forget to share your knowledge with the LATAM community -- https://openinfradays.mx

The CFP will close on Monday next week, but send us an email and we can talk about waiting or extending the date.

On Tue, Jul 5, 2022 at 12:38 AM Alvaro Soto wrote:
> Hello Community,
> the CFP will close in 6 days; don't forget to submit your proposals. We
> only need the title and abstracts; the video talk needs to be submitted later
> on.
> Remember that this is a virtual event and a great opportunity to share and
> spread knowledge across the LATAM region.
>
> https://events.linuxfoundation.org/about/community/?_sft_lfevent-country=mx
> https://openinfradays.mx/
>
> Cheers!
>
> ---
> Alvaro Soto
>
> Note: My work hours may not be your work hours. Please do not feel the
> need to respond during a time that is not convenient for you.
> ----------------------------------------------------------
> Great people talk about ideas,
> ordinary people talk about things,
> small people talk... about other people.
>
> On Thu, Jun 23, 2022, 6:13 PM Alvaro Soto wrote:
>
>> You're all invited to participate in the CFP for OID-MX22
>> https://openinfradays.mx
>>
>> Let me know if you have any questions.
>>
>> ---
>> Alvaro Soto Escobar
>>
>> Note: My work hours may not be your work hours. Please do not feel the
>> need to respond during a time that is not convenient for you.
>> ----------------------------------------------------------
>> Great people talk about ideas,
>> ordinary people talk about things,
>> small people talk... about other people.
>>

--
Alvaro Soto

*Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.*
----------------------------------------------------------
Great people talk about ideas,
ordinary people talk about things,
small people talk... about other people.
-------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tjoen at dds.nl Sun Jul 10 05:25:15 2022 From: tjoen at dds.nl (tjoen) Date: Sun, 10 Jul 2022 07:25:15 +0200 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl>

On 7/9/22 23:40, Dave Hall wrote:
> I'm preparing to do my first deployment (of Yoga) on real hardware. The
> documentation regarding br-vlan was hard for me to understand. Could I
> get a clarification on what to do with this?

I think in Yoga br-vlan has been replaced by a real bridge

> Note: my intended use case is as an academic/instructional environment.
> I'm thinking along the lines of treating each student as a separate tenant
> that would be preconfigured with a templated set of VMs appropriate to the
> course content.

In my case I am trying to get a new version running (logging in to Cirros) on an LFS system. I got Yoga working on py3.9. Currently migrating to py3.10 for Zed

From noonedeadpunk at gmail.com Sun Jul 10 06:57:52 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 08:57:52 +0200 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID:

Hi Dave,

The intended use-case for br-vlan is when you want or need to provide vlan networks in the environment.

As an example, we use vlan networks to bring in customer-owned public networks, as we need to pass the vlan from the gateway to the compute nodes, and we are not able to set up vxlan on the gateway due to the hardware that is used there.

At the same time, in many environments you might not need vlans at all, as vxlan is what will be used by default to provide tenant networks.

Sat, 9 Jul 2022, 23:44 Dave Hall :
> Hello.
>
> I'm preparing to do my first deployment (of Yoga) on
> real hardware. The documentation regarding br-vlan was hard for
> me to understand. Could I get a clarification on what to do with
> this?
>
> Note: my intended use case is as an academic/instructional
> environment. I'm thinking along the lines of treating each
> student as a separate tenant that would be preconfigured with a
> templated set of VMs appropriate to the course content.
>
> Any other thoughts regarding this scenario would also be greatly
> appreciated.
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdhall at binghamton.edu
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sun Jul 10 10:02:03 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 12:02:03 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID:

Hey,

Yes, indeed, we do store all logs with journald nowadays and rsyslog is not being used. These roles and documentation are our technical debt and we should have deprecated them back in Victoria.

Journald for containers is bind-mounted, so you can check all container logs on the host. At the same time there are plenty of tools to convert journald to any format of your taste, including rsyslog. That said, I would discourage using rsyslog, as with that you lose tons of important metadata and it's hard to parse them properly. If you're using any central logging tool, like ELK or Graylog, there are ways to forward the journal to these as well. 
We also have roles for ELK and Graylog in our ops repo https://opendev.org/openstack/openstack-ansible-ops Though we don't provide support for them, thus don't guarantee they're working as expected, and some effort might be needed to update them.

Sat, 9 Jul 2022, 23:33 Dave Hall :
> Hello,
>
> I've been told that OpenStack now uses journald rather than rsyslog, but
> the Yoga docs still show a log aggregation host. Although I still prefer
> rsyslog, what I really prefer is to have logs for all hosts in the cluster
> collected in one place. How do I configure this for a fresh installation?
> Is it reasonable to assign this duty to an infrastructure host?
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdhall at binghamton.edu
>
-------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Sun Jul 10 13:42:50 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sun, 10 Jul 2022 09:42:50 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID:

Two further questions:

After looking a bit further, journald seems to have the ability to forward to a journald aggregation target. Is there an option in OpenStack-Ansible to configure this in all of the deployed containers?

Are there any relevant services deployed in a typical OSA deployment that don't log to journald?

Thanks.

-Dave

On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov wrote:
> Hey,
>
> Yes, indeed, we do store all logs with journald nowadays and rsyslog is
> not being used. These roles and documentation are our technical debt and we
> should have deprecated them back in Victoria.
>
> Journald for containers is bind-mounted, so you can check all
> container logs on the host. At the same time there are plenty of tools to convert
> journald to any format of your taste, including rsyslog. That said,
> I would discourage using rsyslog, as with that you lose tons of important
> metadata and it's hard to parse them properly. If you're using any central
> logging tool, like ELK or Graylog, there are ways to forward the journal to
> these as well. We also have roles for ELK and Graylog in our ops repo
> https://opendev.org/openstack/openstack-ansible-ops
> Though we don't provide support for them, thus don't guarantee they're
> working as expected, and some effort might be needed to update them.
>
> Sat, 9 Jul 2022, 23:33 Dave Hall :
>
>> Hello,
>>
>> I've been told that OpenStack now uses journald rather than rsyslog, but
>> the Yoga docs still show a log aggregation host. Although I still prefer
>> rsyslog, what I really prefer is to have logs for all hosts in the cluster
>> collected in one place. How do I configure this for a fresh installation?
>> Is it reasonable to assign this duty to an infrastructure host?
>>
>> Thanks.
>>
>> -Dave
>>
>> --
>> Dave Hall
>> Binghamton University
>> kdhall at binghamton.edu
>>
>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sun Jul 10 13:50:56 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 15:50:56 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID:

Yes, we have a role openstack.osa.journald_remote that should be shipped and installed during bootstrap: https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/journald_remote

I think as of today only Ceph cannot log into journald. 
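For anyone who wants the underlying mechanism without the role, the plain systemd building blocks look roughly like this. A minimal sketch, assuming the systemd-journal-remote package is installed on both sides and that log-aggregator.example.com is a hypothetical aggregation host; the rotation caveats discussed below still apply:

    # on each member host: /etc/systemd/journal-upload.conf
    [Upload]
    URL=http://log-aggregator.example.com:19532

    systemctl enable --now systemd-journal-upload.service

    # on the aggregation host: /etc/systemd/journal-remote.conf
    [Remote]
    # store one journal file per sending host under /var/log/journal/remote/
    SplitMode=host

    systemctl enable --now systemd-journal-remote.socket

The openstack.osa.journald_remote role mentioned above presumably wires up something of this general shape across hosts and containers.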
Sun, 10 Jul 2022, 15:43 Dave Hall :
> Two further questions:
>
> After looking a bit further, journald seems to have the ability to forward
> to a journald aggregation target. Is there an option in OpenStack-Ansible
> to configure this in all of the deployed containers?
>
> Are there any relevant services deployed in a typical OSA deployment that
> don't log to journald?
>
> Thanks.
>
> -Dave
>
> On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov
> wrote:
>
>> Hey,
>>
>> Yes, indeed, we do store all logs with journald nowadays and rsyslog is
>> not being used. These roles and documentation are our technical debt and we
>> should have deprecated them back in Victoria.
>>
>> Journald for containers is bind-mounted, so you can check all
>> container logs on the host. At the same time there are plenty of tools to convert
>> journald to any format of your taste, including rsyslog. That said,
>> I would discourage using rsyslog, as with that you lose tons of important
>> metadata and it's hard to parse them properly. If you're using any central
>> logging tool, like ELK or Graylog, there are ways to forward the journal to
>> these as well. We also have roles for ELK and Graylog in our ops repo
>> https://opendev.org/openstack/openstack-ansible-ops
>> Though we don't provide support for them, thus don't guarantee they're
>> working as expected, and some effort might be needed to update them.
>>
>> Sat, 9 Jul 2022, 23:33 Dave Hall :
>>
>>> Hello,
>>>
>>> I've been told that OpenStack now uses journald rather than rsyslog, but
>>> the Yoga docs still show a log aggregation host. Although I still prefer
>>> rsyslog, what I really prefer is to have logs for all hosts in the cluster
>>> collected in one place. How do I configure this for a fresh installation?
>>> Is it reasonable to assign this duty to an infrastructure host?
>>>
>>> Thanks.
>>>
>>> -Dave
>>>
>>> --
>>> Dave Hall
>>> Binghamton University
>>> kdhall at binghamton.edu
>>>
>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sun Jul 10 13:59:57 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 15:59:57 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID:

Btw, journald-remote itself has quite a few long-standing bugs, and its development seems quite stalled as of today. The most significant one concerns rotation of logs on the remote server; check https://github.com/systemd/systemd/issues/5242

Sun, 10 Jul 2022, 15:43 Dave Hall :
> Two further questions:
>
> After looking a bit further, journald seems to have the ability to forward
> to a journald aggregation target. Is there an option in OpenStack-Ansible
> to configure this in all of the deployed containers?
>
> Are there any relevant services deployed in a typical OSA deployment that
> don't log to journald?
>
> Thanks.
>
> -Dave
>
> On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov
> wrote:
>
>> Hey,
>>
>> Yes, indeed, we do store all logs with journald nowadays and rsyslog is
>> not being used. These roles and documentation are our technical debt and we
>> should have deprecated them back in Victoria.
>>
>> Journald for containers is bind-mounted, so you can check all
>> container logs on the host. At the same time there are plenty of tools to convert
>> journald to any format of your taste, including rsyslog. That said,
>> I would discourage using rsyslog, as with that you lose tons of important
>> metadata and it's hard to parse them properly. 
If you're using any central
>> logging tool, like ELK or Graylog, there are ways to forward the journal to
>> these as well. We also have roles for ELK and Graylog in our ops repo
>> https://opendev.org/openstack/openstack-ansible-ops
>> Though we don't provide support for them, thus don't guarantee they're
>> working as expected, and some effort might be needed to update them.
>>
>> Sat, 9 Jul 2022, 23:33 Dave Hall :
>>
>>> Hello,
>>>
>>> I've been told that OpenStack now uses journald rather than rsyslog, but
>>> the Yoga docs still show a log aggregation host. Although I still prefer
>>> rsyslog, what I really prefer is to have logs for all hosts in the cluster
>>> collected in one place. How do I configure this for a fresh installation?
>>> Is it reasonable to assign this duty to an infrastructure host?
>>>
>>> Thanks.
>>>
>>> -Dave
>>>
>>> --
>>> Dave Hall
>>> Binghamton University
>>> kdhall at binghamton.edu
>>>
>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at shrug.pw Sun Jul 10 15:28:15 2022 From: neil at shrug.pw (Neil Hanlon) Date: Sun, 10 Jul 2022 11:28:15 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID:

I had been poking at some updated documentation for OSA around remote journaling/centralized logging that I have been meaning to put in for review, which may be useful here. I'll try to get it tidied up in the next couple of weeks.

--Neil

On Sun, Jul 10, 2022, 10:04 Dmitriy Rabotyagov wrote:
> Btw, journald-remote itself has quite a few long-standing bugs, and its
> development seems quite stalled as of today. The most significant one
> concerns rotation of logs on the remote server; check
> https://github.com/systemd/systemd/issues/5242
>
> Sun, 10 Jul 2022, 15:43 Dave Hall :
>
>> Two further questions:
>>
>> After looking a bit further, journald seems to have the ability to
>> forward to a journald aggregation target. Is there an option in
>> OpenStack-Ansible to configure this in all of the deployed containers?
>>
>> Are there any relevant services deployed in a typical OSA deployment that
>> don't log to journald?
>>
>> Thanks.
>>
>> -Dave
>>
>> On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov
>> wrote:
>>
>>> Hey,
>>>
>>> Yes, indeed, we do store all logs with journald nowadays and rsyslog is
>>> not being used. These roles and documentation are our technical debt and we
>>> should have deprecated them back in Victoria.
>>>
>>> Journald for containers is bind-mounted, so you can check all
>>> container logs on the host. At the same time there are plenty of tools to convert
>>> journald to any format of your taste, including rsyslog. That said,
>>> I would discourage using rsyslog, as with that you lose tons of important
>>> metadata and it's hard to parse them properly. If you're using any central
>>> logging tool, like ELK or Graylog, there are ways to forward the journal to
>>> these as well.
>>>
>>>> Hello,
>>>> I've been told that OpenStack now uses journald rather than rsyslog,
>>>> but the Yoga docs still show a log aggregation host. Although I still
>>>> prefer rsyslog, what I really prefer is to have logs for all hosts in the
>>>> cluster collected in one place. 
How do I configure this for a fresh
>>>> installation? Is it reasonable to assign this duty to an infrastructure
>>>> host?
>>>>
>>>> Thanks.
>>>>
>>>> -Dave
>>>>
>>>> --
>>>> Dave Hall
>>>> Binghamton University
>>>> kdhall at binghamton.edu
>>>>
>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Sun Jul 10 17:39:18 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sun, 10 Jul 2022 23:09:18 +0530 Subject: Podman issue when building custom horizon container Message-ID:

Hi,
I am following the https://access.redhat.com/documentation/zh-cn/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/dashboard-customization link to custom-build the container image, but when trying to push I am getting the following error:

Error: writing blob: initiating layer upload to /v2/tripleomaster/openstack-horizon/blobs/uploads/ in 172.25.201.68:8787: StatusCode: 404, ...

[root at hkg2director httpd]# tail -f image_serve_error.log
[Fri Jul 08 12:58:09.788817 2022] [core:error] [pid 5033:tid 140483665843968] [client 172.25.163.196:33866] AH00126: Invalid URI in request GET /././.. HTTP/1.1
[Sun Jul 10 21:42:59.744102 2022] [negotiation:error] [pid 33922:tid 140484739585792] (2)No such file or directory: [client 172.25.201.106:58678] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 21:43:01.280405 2022] [negotiation:error] [pid 5035:tid 140483917494016] (2)No such file or directory: [client 172.25.201.97:45222] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 21:43:02.024126 2022] [negotiation:error] [pid 33922:tid 140484043319040] (2)No such file or directory: [client 172.25.201.103:34508] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 21:43:16.852186 2022] [negotiation:error] [pid 33922:tid 140484076889856] (2)No such file or directory: [client 172.25.201.105:53542] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 22:10:05.220181 2022] [negotiation:error] [pid 33922:tid 140485133846272] (2)No such file or directory: [client 172.25.201.106:56804] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 22:10:06.679326 2022] [negotiation:error] [pid 5035:tid 140485108668160] (2)No such file or directory: [client 172.25.201.103:54208] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 22:10:07.026650 2022] [negotiation:error] [pid 5035:tid 140483917494016] (2)No such file or directory: [client 172.25.201.97:35056] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Sun Jul 10 22:10:22.600075 2022] [negotiation:error] [pid 33922:tid 140484060104448] (2)No such file or directory: [client 172.25.201.105:45446] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map
[Mon Jul 11 01:23:45.354025 2022] [negotiation:error] [pid 129344:tid 139984250070784] (2)No such file or directory: [client 172.25.201.68:59514] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-9.type-map 
[stack at hkg2director horizon-themes]$ sudo podman push 172.25.201.68:8787/tripleomaster/openstack-horizon:0-9
WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf".
WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf".
WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf".
Getting image source signatures
Copying blob 80c0be683ac9 skipped: already exists
Copying blob bab1c7b6a899 [--------------------------------------] 8.0b / 59.5KiB
Copying blob f9f09cbae066 [--------------------------------------] 8.0b / 59.5KiB
Copying blob b7b591e3443f [--------------------------------------] 8.0b / 20.0KiB
Copying blob bb83a400dc7e [--------------------------------------] 8.0b / 7.0KiB
Copying blob 1ca9ec783ad6 [--------------------------------------] 8.0b / 7.0KiB
Copying blob 3a6b7f2864e6 [--------------------------------------] 8.0b / 7.0KiB
Copying blob ffd77c9907b7 [--------------------------------------] 8.0b / 9.0KiB
Copying blob 6ae06b1bf643 [--------------------------------------] 8.0b / 7.5KiB
Copying blob bc3dd4002908 [--------------------------------------] 8.0b / 7.5KiB
Copying blob 8b974d9968c1 [--------------------------------------] 8.0b / 14.0KiB
Copying blob 30f87a7de8b5 [--------------------------------------] 8.0b / 6.0KiB
Copying blob aa02c13deb86 [--------------------------------------] 0.0b / 4.0KiB
Copying blob 89b853af1aa2 [--------------------------------------] 8.0b / 14.0KiB
Copying blob 3ee1ead6db5f [--------------------------------------] 8.0b / 59.5KiB
Copying blob fd1baa2a1cd0 [--------------------------------------] 8.0b / 59.5KiB
Copying blob 33afb269824d [--------------------------------------] 8.0b / 7.0KiB
Copying blob f1117feaa844 [--------------------------------------] 8.0b / 7.0KiB
Copying blob b36e5fbb3eab [--------------------------------------] 8.0b / 7.0KiB
Copying blob 5d6773201dfb [--------------------------------------] 8.0b / 7.5KiB
Copying blob f9ae49c1e307 [--------------------------------------] 8.0b / 7.5KiB
Copying blob 64e624a7f305 [--------------------------------------] 8.0b / 6.0KiB
Copying blob 2fa93f4f2a45 [--------------------------------------] 8.0b / 128.6MiB
Copying blob ccf04fbd6e19 [--------------------------------------] 8.0b / 201.0MiB
Copying blob 8a36fc7d2feb [--------------------------------------] 8.0b / 281.3MiB
Error: writing blob: initiating layer upload to /v2/tripleomaster/openstack-horizon/blobs/uploads/ in 172.25.201.68:8787: StatusCode: 404, ...

Can someone please guide me to fix this issue?

With regards,
Swogat Pradhan
-------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Sun Jul 10 23:28:36 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 11 Jul 2022 01:28:36 +0200 Subject: [IRONIC] - Various questions around network features. Message-ID:

Hi everyone, I'm currently working with Ironic again and it's amazing! However, during our demo session a few questions arose from our users. 
We're currently deploying nodes using a private VLAN that can't be reached from outside of the OpenStack network fabric (VLAN 101 - 192.168.101.0/24), and everything is fine with this provisioning network: our ToR switches all know about it, as well as the other control plane VLANs such as the internal APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact the internal Ironic APIs. (When you declare a port as a trunk allowing all VLANs on an Aruba switch, it seems to automatically analyse the CIDR your host tries to reach from your VLAN and route everything to the corresponding VLAN that matches the destination IP.)

So now, I still have a few small issues:

1°/- When I spawn a Nova instance on an Ironic host that is set to use a flat network (from Horizon as a user), why does the Nova wizard still ask for a Neutron network if it's not set on the provisioned host by the IPA ramdisk right after the whole disk image copy? Is that some missing development in Horizon, or did I miss something?

2°/- In a flat network layout deployment using the direct deploy scenario for images, am I still supposed to create an Ironic provisioning network in Neutron? From my understanding (and actually my tests) we don't, as any host booting on the provisioning VLAN will pick up an IP and initiate the bootp sequence, since dnsmasq is set up to do just that and provide the IPA ramdisk; but it's a bit confusing, as much of the documentation explicitly requires this network to exist in Neutron.

3°/- My whole OpenStack network setup uses Open vSwitch and vxlan tunnels on top of a spine/leaf architecture using Aruba CX8360 switches (for both spines and leafs). Am I required to use either the networking-generic-switch driver or a vendor Neutron driver? If so, how will this driver be able to instruct the switch to assign the host port the correct Open vSwitch VLAN id and register the correct vxlan with Open vSwitch from this port? I mean, OK, Neutron knows the vxlan and Open vSwitch the tunnel VLAN id/interface, but what is the glue for all of that?

4°/- I've successfully used OpenStack cloud-oriented CentOS and Debian images, or snapshots of VMs, to provision my hosts, which is an awesome feature, but I'm wondering if there is a way to let those hosts' cloud-init instances request the Neutron metadata endpoint? I was a bit surprised about the Ironic networking part, as I was expecting the IPA ramdisk to at least be able to set up the host OS with the appropriate network configuration file for whole disk images that do not use encryption, by injecting that information from the Neutron API into the host disk while it is mounted (right after the image dd).

All in all I really like the Ironic approach to the baremetal provisioning process, and I'm pretty sure I'm just missing a bit of understanding of the networking part, but it's really the most confusing part of it to me, as I feel like there is a missing link between Neutron and the host HW or the switches.

Thanks a lot to anyone who will take the time to explain this to me :-) 
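On question 3, the glue is worth sketching out: with a VLAN-based Ironic network, Neutron hands the bound port's details to the mechanism driver, and networking-generic-switch then logs into the switch named in the port's local_link_connection and sets the access VLAN on that physical switch port. A rough illustration of the two pieces involved — all names, addresses, and credentials here are made up, and the right device_type for Aruba CX gear needs to be checked against the networking-generic-switch documentation:

    # /etc/neutron/plugins/ml2/ml2_conf.ini (fragment)
    [ml2]
    mechanism_drivers = openvswitch,genericswitch

    # one block per managed switch; netmiko_cisco_ios is just an example type
    [genericswitch:leaf-1]
    device_type = netmiko_cisco_ios
    ip = 192.0.2.10
    username = admin
    password = secret

    # the Ironic port records which switch/port the node's NIC is cabled to:
    openstack baremetal port create 52:54:00:aa:bb:cc --node <node-uuid> \
        --local-link-connection switch_info=leaf-1 \
        --local-link-connection switch_id=aa:bb:cc:dd:ee:ff \
        --local-link-connection port_id=1/1/1

Note that this mechanism only covers VLAN segments; vxlan tenant networks still terminate on the Open vSwitch side rather than on the ToR port.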
-------------- next part -------------- An HTML attachment was scrubbed... URL: From iwienand at redhat.com Mon Jul 11 02:43:36 2022 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 11 Jul 2022 12:43:36 +1000 Subject: [service-announce] Updating Zuul's Default Ansible Version to Ansible v5 In-Reply-To: <8f869fba-10b8-488c-8f58-065115822555@www.fastmail.com> References: <8f869fba-10b8-488c-8f58-065115822555@www.fastmail.com> Message-ID:

On Wed, Jun 15, 2022 at 12:11:00PM -0700, Clark Boylan wrote:
> The OpenDev team will be updating the default Ansible version in our
> Zuul tenants from Ansible 2.9 to Ansible 5 on June 30, 2022. Zuul
> itself will eventually update its default, but making the change in
> our tenant configs allows us to control exactly when this happens.

Note this has been merged with

https://review.opendev.org/c/openstack/project-config/+/849120

Just for visibility I've cc'd this to openstack-discuss; but please subscribe to service-announce [1] if you're interested in such OpenDev infra updates.

Thanks,

-i

[1] https://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce

From jonathan.rosser at rd.bbc.co.uk Mon Jul 11 07:28:35 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 11 Jul 2022 08:28:35 +0100 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> References: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> Message-ID:

On 10/07/2022 06:25, tjoen wrote:
> On 7/9/22 23:40, Dave Hall wrote:
>> I'm preparing to do my first deployment (of Yoga) on real hardware. The
>> documentation regarding br-vlan was hard for me to understand. Could I
>> get a clarification on what to do with this?
>
> I think in Yoga br-vlan has been replaced by a real bridge

With OpenStack deployed by OpenStack-Ansible, br-vlan remains unchanged in the Yoga release.

Jonathan.

From gibi at redhat.com Mon Jul 11 07:33:04 2022 From: gibi at redhat.com (Balazs Gibizer) Date: Mon, 11 Jul 2022 09:33:04 +0200 Subject: [nova] Review guide for PCI tracking for Placement patches In-Reply-To: References: Message-ID: <4BIUER.4Z8ORYASOYO42@redhat.com>

On Tue, Jun 21 2022 at 02:04:01 PM +02:00:00, Balazs Gibizer wrote:
> Hi Nova,
>
> The first batch of patches are up for review for the PCI tracking for
> Placement feature. These mostly cover two aspects of the spec[1]:
> 1) renaming [pci]passthrough_whitelist to [pci]device_spec
> 2) pci inventory reporting to placement, excluding existing PCI
> allocation healing in placement
>
> This covers the first 4 sub-chapters of the Proposed Change chapter of
> the spec[1] up until "PCI alias configuration". I noted intentional
> deviations from the spec in the spec review [2] and I will push a
> follow-up to the spec at some point fixing those.
>
> I tried to do it in small steps, hence the long list of commits[3]:
>
> #2) pci inventory reporting to placement, excluding existing PCI
> allocation healing in placement 
> > I tried to do it in small steps hence the long list of commits[3]: > > #2) pci inventory reporting to placement, excluding existing PCI > allocation healing in placement > 5827d56310 Stop if tracking is disable after it was enabled before > a4b5788858 Support [pci]device_spec reconfiguration > 10642c787a Reject devname based device_spec config > b0ad05fb69 Ignore PCI devs with physical_network tag > f5a34ee441 Reject mixed VF rc and trait config > c60b26014f Reject PCI dependent device config > 5cf7325221 Extend device_spec with resource_class and traits > eff0df6a98 Basics for PCI Placement reporting > #1) renaming [pci]passthrough_whitelist to [pci]device_spec > adfe34080a Rename whitelist in tests > ea955a0c15 Rename exception.PciConfigInvalidWhitelist to > PciConfigInvalidSpec > 55770e4c14 Rename [pci]passthrough_whitelist to device_spec > > There is a side track branching out from "adfe34080a Rename whitelist > in tests" to clean up the device spec handling[4]: > > 514500b5a4 Move __str__ to the PciAddressSpec base class > 3a6198c8fb Fix type annotation of pci.Whitelist class > f70adbb613 Remove unused PF checking from get_function_by_ifname > b7eef53b1d Clean up mapping input to address spec types > 93bbd67101 Poison /sys access via various calls in test > 467ef91a86 Remove dead code from PhysicalPciAddress > 233212d30f Fix PciAddressSpec descendants to call super.__init__ > ad5bd46f46 Unparent PciDeviceSpec from PciAddressSpec > cef0d2de4c Extra tests for remote managed dev spec > 2fa2825afb Add more test coverage for devname base dev spec > adfe34080a Rename whitelist in tests > > This is not a mandatory part of the feature but I think they improve > the code in hand and even fixing some small bugs. > > > I will continue with adding allocation healing for existing PCI > allocations. > > Any feedback is highly appreciated. Pinging this thread as I would like to ask for at least a high level review round to see that the implementation direction is OK before I produce the next bunch of commits of the series. > Cheers, > gibi > > [1] > https://specs.openstack.org/openstack/nova-specs/specs/zed/approved/pci-device-tracking-in-placement.html > [2] https://review.opendev.org/c/openstack/nova-specs/+/791047 > [3] > https://review.opendev.org/q/topic:bp/pci-device-tracking-in-placement > [4] https://review.opendev.org/q/topic:bp/pci-device-spec-cleanup > > From jonathan.rosser at rd.bbc.co.uk Mon Jul 11 07:34:19 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 11 Jul 2022 08:34:19 +0100 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk> If you choose to use vxlan for your tenant networks (the default in OSA) you would probably be using a vlan for the external provider network. This would default to br-vlan, but alternatively can be any interface of your choice. With the default configuration br-vlan would need to be present on all of your controller (or dedicated network) nodes and carry the tagged external vlan from your upstream switches. Jonathan. On 10/07/2022 07:57, Dmitriy Rabotyagov wrote: > Hi Dave, > > Intended use-case for br-vlan is when you want or need to provide vlan > networks in the environment. > > As example, we use vlan networks to bring in customers owned public > networks, as we need to pass vlan from the gateway to the compute > nodes, and we are not able to set vxlan on the gateway due to hardware > that is used there. 
> > At the same time in many environments you might not need using vlans > at all, as vxlan is what will be used by default to provide tenant > networks. > > ??, 9 ???. 2022 ?., 23:44 Dave Hall : > > Hello. > > I'm preparing to do my first deployment (of Yoga) on > real?hardware.? The documentation regarding br-vlan was hard for > me to understand.? ?Could I get a clarification?on what to do with > this? > > Note:? my intended use case is as an academic/instructional > environment.? I'm thinking along the lines of treating each > student as a separate tenant that would be preconfigured with > templated set of VMs appropriate to the course content. > > Any other thoughts regarding this scenario would also?be greatly > appreciated. > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Mon Jul 11 07:39:08 2022 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 11 Jul 2022 09:39:08 +0200 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> Message-ID: Hello there, Tripleo can't really drop the dependency on ansible-runner, since it's the official and supported way to run ansible from within python. We could of course replace it by subprocess.Popen and the whole family, but we'll lose all the facilities, especially the ones involved in the Ansible Execution Environment, which is kind of the "future" for running ansible (for the better and the worst). Since ansible-runner has an open issue for that (you even linked it), and at least one open pull-request to correct this issue, I'm not really sure we're needing to do anything but follow that open issue and ensure we get the right version of the package once they merge any of the proposal to remove it. Would that be OK? Though, of course, it won't allow to set a deadline on a specific date... Cheers, C. On 7/9/22 15:26, Jeremy Stanley wrote: > It became apparent in a Python community discussion[0] yesterday > that lockfile has been designated as a "critical project" by PyPI, > even though it was effectively abandoned in 2010. Because our > release automation account is listed as one of its maintainers, I > looked into whether we still need it for anything... > > For those who have been around here a while, you may recall that the > OpenStack project temporarily assumed maintenance[1] of lockfile in > late 2014 and uploaded a few new releases for it, as a stop-gap > until we could replace our uses with oslo.concurrency. That work was > completed[2] early in the Liberty development cycle. > > Unfortunately for us, that's not the end of the story. I looked > yesterday expecting to see that we've not needed lockfile for 7 > years, and was disappointed to discover it's still in our > constraints list. Why? After much wailing and gnashing of teeth and > manually installing multiple bisections of the requirements list, I > narrowed it down to one dependency: ansible-runner. > > Apparently, ansible-runner currently depends[3] on python-daemon, > which still has a dependency on lockfile[4]. Our uses of > ansible-runner seem to be pretty much limited to TripleO > repositories (hence tagging them in the subject), so it's possible > they could find an alternative to it and solve this dilemma. 
> Optionally, we could try to help the ansible-runner or python-daemon > maintainers with new implementations of the problem dependencies as > a way out. > > Whatever path we take, we're long overdue. The reasons we moved off > lockfile ages ago are still there, and the risk to us has only > continued to increase in the meantime. I'm open to suggestions, but > we really ought to make sure we have it out of our constraints list > by the Zed release. > > [0] https://discuss.python.org/t/17219 > [1] https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html > [2] https://review.openstack.org/151224 > [3] https://github.com/ansible/ansible-runner/issues/379 > [4] https://pagure.io/python-daemon/issue/42 -- C?dric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From lucasagomes at gmail.com Mon Jul 11 09:13:16 2022 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 11 Jul 2022 10:13:16 +0100 Subject: [neutron] Bug Deputy Report July 04 - 11 Message-ID: Hi, This is the Neutron bug report from July 4th to 11th. Quieter week than usual. High: * https://bugs.launchpad.net/neutron/+bug/1980967 - "get_hypervisor_hostname helper function is failing silently" Assigned to: Miro Tomaska * https://bugs.launchpad.net/neutron/+bug/1981077 - "Remove import of 'imp' module" Unassigned * https://bugs.launchpad.net/neutron/+bug/1981113 - "OVN metadata agent can be slow with large amount of subnets" Assigned to: Miro Tomaska Medium: * https://bugs.launchpad.net/neutron/+bug/1980671 - "Neutron-dynamic-routing: missing transaction wrapper" Assigned to: Dr. Jens Harbott Cheers, Lucas -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Mon Jul 11 09:59:02 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Mon, 11 Jul 2022 11:59:02 +0200 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> Message-ID: We could, and maybe should, ask people working on ansible to give that issue/PR more attention. I see the PR is untriaged for some time now... On Mon, Jul 11, 2022 at 9:50 AM C?dric Jeanneret wrote: > Hello there, > > Tripleo can't really drop the dependency on ansible-runner, since it's > the official and supported way to run ansible from within python. We > could of course replace it by subprocess.Popen and the whole family, but > we'll lose all the facilities, especially the ones involved in the > Ansible Execution Environment, which is kind of the "future" for running > ansible (for the better and the worst). > > Since ansible-runner has an open issue for that (you even linked it), > and at least one open pull-request to correct this issue, I'm not really > sure we're needing to do anything but follow that open issue and ensure > we get the right version of the package once they merge any of the > proposal to remove it. > > Would that be OK? Though, of course, it won't allow to set a deadline on > a specific date... > > Cheers, > > C. 
> > On 7/9/22 15:26, Jeremy Stanley wrote:
> > It became apparent in a Python community discussion[0] yesterday
> > that lockfile has been designated as a "critical project" by PyPI,
> > even though it was effectively abandoned in 2010. Because our
> > release automation account is listed as one of its maintainers, I
> > looked into whether we still need it for anything...
> >
> > For those who have been around here a while, you may recall that the
> > OpenStack project temporarily assumed maintenance[1] of lockfile in
> > late 2014 and uploaded a few new releases for it, as a stop-gap
> > until we could replace our uses with oslo.concurrency. That work was
> > completed[2] early in the Liberty development cycle.
> >
> > Unfortunately for us, that's not the end of the story. I looked
> > yesterday expecting to see that we've not needed lockfile for 7
> > years, and was disappointed to discover it's still in our
> > constraints list. Why? After much wailing and gnashing of teeth and
> > manually installing multiple bisections of the requirements list, I
> > narrowed it down to one dependency: ansible-runner.
> >
> > Apparently, ansible-runner currently depends[3] on python-daemon,
> > which still has a dependency on lockfile[4]. Our uses of
> > ansible-runner seem to be pretty much limited to TripleO
> > repositories (hence tagging them in the subject), so it's possible
> > they could find an alternative to it and solve this dilemma.
> > Optionally, we could try to help the ansible-runner or python-daemon
> > maintainers with new implementations of the problem dependencies as
> > a way out.
> >
> > Whatever path we take, we're long overdue. The reasons we moved off
> > lockfile ages ago are still there, and the risk to us has only
> > continued to increase in the meantime. I'm open to suggestions, but
> > we really ought to make sure we have it out of our constraints list
> > by the Zed release.
> >
> > [0] https://discuss.python.org/t/17219
> > [1] https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html
> > [2] https://review.openstack.org/151224
> > [3] https://github.com/ansible/ansible-runner/issues/379
> > [4] https://pagure.io/python-daemon/issue/42
>
> --
> Cédric Jeanneret (He/Him/His)
> Sr. Software Engineer - OpenStack Platform
> Deployment Framework TC
> Red Hat EMEA
> https://www.redhat.com/

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From development at manuel-bentele.de Mon Jul 11 10:18:38 2022
From: development at manuel-bentele.de (Manuel Bentele)
Date: Mon, 11 Jul 2022 12:18:38 +0200
Subject: [all][dev] Where are cross-repository blueprints and specs located?
Message-ID: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de>

Hi all,

I have two general developer questions:

* Where are the blueprints and specs located that address changes
across the code base?
* How to deal with changes that have dependencies on multiple
repositories (where a specific merge and release order needs to be
satisfied)? For example: An enumeration member has to be added and
merged in repository A before it can be used in repository B and C.
If this order is not satisfied, a DevStack setup may break.

Regards,
Manuel

From smooney at redhat.com Mon Jul 11 11:13:35 2022
From: smooney at redhat.com (Sean Mooney)
Date: Mon, 11 Jul 2022 12:13:35 +0100
Subject: [all][dev] Where are cross-repository blueprints and specs located?
In-Reply-To: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de>
References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de>
Message-ID: <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com>

On Mon, 2022-07-11 at 12:18 +0200, Manuel Bentele wrote:
> Hi all,
>
> I have two general developer questions:
>
> * Where are the blueprints and specs located that address changes
> across the code base?

In general we create "sibling specs" in each project when we have
cross-project work. It's rare that more than 2 or 3 projects take part
in any one feature or change, so we don't have a single repo for specs
across all repos.

The closest thing we have to that is a TC-selected cross-project goal,
but even then, while the goal document might be held in one place, we
would still track any work on that goal within each project.

> * How to deal with changes that have dependencies on multiple
> repositories (where a specific merge and release order needs to be
> satisfied)?

Zuul (our CI/gating system) supports cross-repo dependencies and
speculative merging for testing via "Depends-On:" lines in the commit
message.

That will prevent a patch to project A from merging until the patch it
depends on in project B is merged; however, for testing, if you are
using the devstack jobs it will locally merge the patch to project B
when testing the patch to project A.

This is generally referred to as co-gating.

To enable this, the jobs have to declare the other project as a
required-project, which will result in Zuul preparing a version of the
git repo with all dependencies merged, which the jobs can then use
instead of the current top of the master branch when testing.

The simple tox-based unit tests do not support using deps from git.
They install from PyPI, so unit/functional tests will not pass until a
change in a lib is merged, but the integration jobs such as the
devstack ones will work.

In general, however, outside of constants and data structures that are
part of the lib API, if project A depends on project B then project A
should mock all calls to B in its unit tests and provide a test fixture
for B in its functional tests, where that is reasonable, if they want
to mitigate the dependency issues.

> For example: An enumeration member has to be added and
> merged in repository A before it can be used in repository B and C.
> If this order is not satisfied, a DevStack setup may break.

Yes, which is a good thing. We want devstack to break in this case,
since the set of code you are deploying is broken. But from a CI
perspective, Depends-On prevents this from happening.

For local development you can override the branches used and can tell
devstack to deploy specific patch revisions from gerrit or specific git
SHAs/branches by using the _repo and _branch vars that you can define in
the local.conf. Look at the stackrc file for examples.

> Regards,
> Manuel

From development at manuel-bentele.de Mon Jul 11 12:13:13 2022
From: development at manuel-bentele.de (Manuel Bentele)
Date: Mon, 11 Jul 2022 14:13:13 +0200
Subject: [all][dev] Where are cross-repository blueprints and specs located?
In-Reply-To: <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com>
References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com>
Message-ID: <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de>

Hi Sean,

Thanks for your quick and detailed answer. It's good to know that such
situations are solved with "sibling specs".
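To make sure I understood the "Depends-On:" mechanism correctly, the commit
message of the change in repository B would then carry a footer pointing at
the repository A change, something like this (the change URL and IDs below
are made up purely for illustration):

    Use the new enumeration member from repository A

    The member only exists once the repository A change has merged,
    so tell Zuul about the ordering.

    Depends-On: https://review.opendev.org/c/openstack/project-a/+/999999
    Change-Id: I0123456789abcdef0123456789abcdef01234567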
Also the "Depends-On:" hint was very informative for me, especially that
this statement is evaluated by the CI chain to prevent things from
breaking.

Cheers,
Manuel

On 7/11/22 13:13, Sean Mooney wrote:
> On Mon, 2022-07-11 at 12:18 +0200, Manuel Bentele wrote:
>> Hi all,
>>
>> I have two general developer questions:
>>
>> * Where are the blueprints and specs located that address changes
>> across the code base?
> In general we create "sibling specs" in each project when we have
> cross-project work. It's rare that more than 2 or 3 projects take part
> in any one feature or change, so we don't have a single repo for specs
> across all repos.
>
> The closest thing we have to that is a TC-selected cross-project goal,
> but even then, while the goal document might be held in one place, we
> would still track any work on that goal within each project.
>> * How to deal with changes that have dependencies on multiple
>> repositories (where a specific merge and release order needs to be
>> satisfied)?
>>
> Zuul (our CI/gating system) supports cross-repo dependencies and
> speculative merging for testing via "Depends-On:" lines in the commit
> message.
>
> That will prevent a patch to project A from merging until the patch it
> depends on in project B is merged; however, for testing, if you are
> using the devstack jobs it will locally merge the patch to project B
> when testing the patch to project A.
>
> This is generally referred to as co-gating.
>
> To enable this, the jobs have to declare the other project as a
> required-project, which will result in Zuul preparing a version of the
> git repo with all dependencies merged, which the jobs can then use
> instead of the current top of the master branch when testing.
>
> The simple tox-based unit tests do not support using deps from git.
> They install from PyPI, so unit/functional tests will not pass until a
> change in a lib is merged, but the integration jobs such as the
> devstack ones will work.
>
> In general, however, outside of constants and data structures that are
> part of the lib API, if project A depends on project B then project A
> should mock all calls to B in its unit tests and provide a test fixture
> for B in its functional tests, where that is reasonable, if they want
> to mitigate the dependency issues.
>> For example: An enumeration member has to be added and
>> merged in repository A before it can be used in repository B and C.
>> If this order is not satisfied, a DevStack setup may break.
> Yes, which is a good thing. We want devstack to break in this case,
> since the set of code you are deploying is broken. But from a CI
> perspective, Depends-On prevents this from happening.
>
> For local development you can override the branches used and can tell
> devstack to deploy specific patch revisions from gerrit or specific git
> SHAs/branches by using the _repo and _branch vars that you can define in
> the local.conf.
> Look at the stackrc file for examples.
>
>> Regards,
>> Manuel
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From fungi at yuggoth.org Mon Jul 11 12:25:05 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 11 Jul 2022 12:25:05 +0000
Subject: [all][dev] Where are cross-repository blueprints and specs located?
In-Reply-To: <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de>
References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com> <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de>
Message-ID: <20220711122504.zxdlh4aeq2s6kdpg@yuggoth.org>

On 2022-07-11 14:13:13 +0200 (+0200), Manuel Bentele wrote:
> Thanks for your quick and detailed answer. It's good to know that such
> situations are solved with "sibling specs". Also the "Depends-On:" hint was
> very informative for me, especially that this statement is evaluated by
> the CI chain to prevent things from breaking.
[...]

For more details on this feature, see Zuul's documentation:

https://zuul-ci.org/docs/zuul/latest/gating.html#cross-project-dependencies
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From tjoen at dds.nl Mon Jul 11 16:31:09 2022
From: tjoen at dds.nl (tjoen)
Date: Mon, 11 Jul 2022 18:31:09 +0200
Subject: [Openstack-Ansible] br-vlan configuration and intended usage?
In-Reply-To:
References: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl>
Message-ID:

On 7/11/22 09:28, Jonathan Rosser wrote:
> On 10/07/2022 06:25, tjoen wrote:
>> On 7/9/22 23:40, Dave Hall wrote:
>>> I'm preparing to do my first deployment (of Yoga) on real hardware. The
>>> documentation regarding br-vlan was hard for me to understand. Could I
>>> get a clarification on what to do with this?
>>
>> I think in Yoga br-vlan has been replaced by a real bridge
>
> With openstack deployed with Openstack-Ansible, br-vlan remains
> unchanged in the Yoga release.

I have no Ansible. Maybe that explains my situation.

From zigo at debian.org Mon Jul 11 16:33:37 2022
From: zigo at debian.org (Thomas Goirand)
Date: Mon, 11 Jul 2022 18:33:37 +0200
Subject: Upgrading to a more recent version of jsonschema
In-Reply-To: <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com>
References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com>
Message-ID: <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org>

On 6/16/22 19:53, Stephen Finucane wrote:
> On Thu, 2022-06-16 at 17:13 +0100, Stephen Finucane wrote:
>> On Wed, 2022-06-15 at 01:04 +0100, Sean Mooney wrote:
>>> On Wed, 2022-06-15 at 00:58 +0100, Sean Mooney wrote:
>>>> On Tue, 2022-06-14 at 18:49 -0500, Ghanshyam Mann wrote:
>>>>> ---- On Tue, 14 Jun 2022 17:47:59 -0500 Thomas Goirand wrote ----
>>>>> > On 6/13/22 00:10, Thomas Goirand wrote:
>>>>> > > Hi,
>>>>> > >
>>>>> > > A few DDs are pushing me to upgrade jsonschema in Debian Unstable.
>>>>> > > However, OpenStack global requirements are still stuck at 3.2.0. Is
>>>>> > > there any reason for it, or should we attempt to upgrade to 4.6.0?
>>>>> > >
>>>>> > > I'd really appreciate it if someone (other than me) was driving this...
>>>>> > >
>>>>> > > Cheers,
>>>>> > >
>>>>> > > Thomas Goirand (zigo)
>>>>> > >
>>>>> >
>>>>> > FYI, Nova fails with it:
>>>>> > https://ci.debian.net/data/autopkgtest/unstable/amd64/n/nova/22676760/log.gz
>>>>> >
>>>>> > Can someone from the Nova team investigate?
>>>>> Nova failures are due to the error message change (it happens regularly; they change them in most versions)
>>>>> in the new jsonschema version. I remember we faced this type of issue previously also, and we updated
>>>>> nova tests not to assert the error message, but it seems like there are a few left which are failing with
>>>>> jsonschema 4.6.0.
>>>>>
>>>>> Along with these error-message failure fixes and using the latest jsonschema: in nova, we use
>>>>> Draft4Validator from jsonschema and the latest validator is Draft7Validator, so we should check
>>>>> what the backward-incompatible changes in Draft7Validator are and bump this too?
>>>>
>>>> Well, I think the reason this is on 3.2.0 is nothing to do with nova;
>>>> I bumped it manually in https://review.opendev.org/c/openstack/requirements/+/845859/
>>>>
>>>> The conflict is caused by:
>>>> The user requested jsonschema===4.0.1
>>>> tripleo-common 16.4.0 depends on jsonschema>=3.2.0
>>>> rsd-lib 1.2.0 depends on jsonschema>=2.6.0
>>>> taskflow 4.7.0 depends on jsonschema>=3.2.0
>>>> zvmcloudconnector 1.4.1 depends on jsonschema>=2.3.0
>>>> os-net-config 15.2.0 depends on jsonschema>=3.2.0
>>>> task-core 0.2.1 depends on jsonschema>=3.2.0
>>>> python-zaqarclient 2.3.0 depends on jsonschema>=2.6.0
>>>> warlock 1.3.3 depends on jsonschema<4 and >=0.7
>>>>
>>>> https://zuul.opendev.org/t/openstack/build/06ed295bb8244c16b48e2698c1049be9
>>>>
>>>> it looks like warlock is clamping it to less than 4, which is why we are stuck on 3.2.0
>>> glanceclient seems to be the only real user of this
>>> https://codesearch.opendev.org/?q=warlock&i=nope&literal=nope&files=&excludeFiles=&repos=
>>> perhaps we could just remove the dependency?
>>
>> I've proposed vendoring this dependency in glanceclient [1]. I've also proposed
>> a change to fix this in warlock [2] but given the lack of activity there, I
>> doubt it'll merge anytime soon so the former sounds like a better option.
>
> My efforts to collect *all* the projects continue. Just cut 2.0.0 of warlock, so
> we should see this make its way through the requirements machinery in the next
> few days. I'll abandon the glanceclient change now.
>
> Stephen

Hi Stephen,

I hope you don't mind that I ping and bump this thread.

Thanks a lot for this work. Any more progress here?

I'm being pressed by the Debian community to update jsonschema in
Unstable, because 3.2.0 is breaking other software (at least 2
packages). If I do, I know things will break in OpenStack. So this
*MUST* be fixed for Zed...

Cheers,

Thomas Goirand (zigo)

From johnsomor at gmail.com Mon Jul 11 16:39:48 2022
From: johnsomor at gmail.com (Michael Johnson)
Date: Mon, 11 Jul 2022 09:39:48 -0700
Subject: Propose to add Takashi Kajinami as Oslo core reviewer
In-Reply-To:
References:
Message-ID:

+1 from me. Takashi has been doing some great work.

Michael

On Thu, Jun 30, 2022 at 6:44 AM Herve Beraud wrote:
>
> Hello everybody,
>
> It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member of the oslo core team.
>
> During the last months Takashi has been a significant contributor to the oslo projects.
>
> Obviously we think he'd make a good addition to the core team. If there are no objections, I'll make that happen in a week.
>
> Thanks.
> --
> Hervé
> Beraud
> Senior Software Engineer at Red Hat
> irc: hberaud
> https://github.com/4383/
> https://twitter.com/4383hberaud
>

From kdhall at binghamton.edu Mon Jul 11 16:52:51 2022
From: kdhall at binghamton.edu (Dave Hall)
Date: Mon, 11 Jul 2022 12:52:51 -0400
Subject: [Openstack-Ansible] br-vlan configuration and intended usage?
In-Reply-To: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk>
References: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk>
Message-ID:

OK. I think for my plan I would have students (as tenants) access their
private subnets via an IP address on the native VLAN on my switch -
probably via some 10.x.x.x IP address.

I'm using a bonded 10G NIC with bridges on VLANs for mgmt, storage, and
vxlan. It sounds like I should set up a bridge on the native VLAN for
br-vlan, right?

My apologies, but I'll admit that I haven't quite comprehended the nuances
of OpenStack networking yet, especially regarding external access. I'm
sure it will soon be as obvious to me as it is to all of you.

(I'll also admit to a fear of having an error in my initial configuration
that ends up being hard to correct. But, hey, Ansible. Right?)

-Dave

--
Dave Hall
Binghamton University
kdhall at binghamton.edu
607-760-2328 (Cell)
607-777-4641 (Office)

On Mon, Jul 11, 2022 at 3:34 AM Jonathan Rosser <
jonathan.rosser at rd.bbc.co.uk> wrote:

> If you choose to use vxlan for your tenant networks (the default in OSA)
> you would probably be using a vlan for the external provider network.
>
> This would default to br-vlan, but alternatively can be any interface of
> your choice. With the default configuration br-vlan would need to be
> present on all of your controller (or dedicated network) nodes and carry
> the tagged external vlan from your upstream switches.
>
> Jonathan.
> On 10/07/2022 07:57, Dmitriy Rabotyagov wrote:
>
> Hi Dave,
>
> The intended use-case for br-vlan is when you want or need to provide vlan
> networks in the environment.
>
> As an example, we use vlan networks to bring in customer-owned public
> networks, as we need to pass the vlan from the gateway to the compute nodes,
> and we are not able to set vxlan on the gateway due to the hardware that is
> used there.
>
> At the same time in many environments you might not need to use vlans at
> all, as vxlan is what will be used by default to provide tenant networks.
>
> On 9 Jul 2022, 23:44, Dave Hall wrote:
>
>> Hello.
>>
>> I'm preparing to do my first deployment (of Yoga) on real hardware. The
>> documentation regarding br-vlan was hard for me to understand. Could I
>> get a clarification on what to do with this?
>>
>> Note: my intended use case is as an academic/instructional environment.
>> I'm thinking along the lines of treating each student as a separate tenant
>> that would be preconfigured with a templated set of VMs appropriate to the
>> course content.
>>
>> Any other thoughts regarding this scenario would also be greatly
>> appreciated.
>>
>> Thanks.
>>
>> -Dave
>>
>> --
>> Dave Hall
>> Binghamton University
>> kdhall at binghamton.edu
>>
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rafaelweingartner at gmail.com Mon Jul 11 18:26:27 2022
From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=)
Date: Mon, 11 Jul 2022 15:26:27 -0300
Subject: [CLOUDKITTY] Missed CloudKitty meeting today
Message-ID:

Hello guys,

I would like to apologize for missing the CloudKitty meeting today. I was
concentrating on some work, and my alarm for the meeting did not ring.
If you need something, just let me know. Again, sorry for the
inconvenience; see you guys at our next meeting.

--
Rafael Weingärtner
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gmann at ghanshyammann.com Mon Jul 11 18:36:28 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Mon, 11 Jul 2022 13:36:28 -0500
Subject: [all][tc] Technical Committee next weekly meeting on 14 July 2022 at 1500 UTC
Message-ID: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com>

Hello Everyone,

The Technical Committee's next weekly meeting is scheduled for 14 July 2022, at 1500 UTC.

If you would like to add topics for discussion, please add them to the below wiki
page by Wednesday, 13 July at 2100 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

-gmann

From ozzzo at yahoo.com Mon Jul 11 20:10:53 2022
From: ozzzo at yahoo.com (Albert Braden)
Date: Mon, 11 Jul 2022 20:10:53 +0000 (UTC)
Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers
In-Reply-To: <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com>
References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com>
Message-ID: <479542002.91408.1657570253503@mail.yahoo.com>

Unfortunately I was not able to get permission to be Adjutant PTL. They
didn't say no, but the decision makers are too busy to address the issue.
As I settle into this new position, I am realizing that I don't have time
to do it anyway, so I will have to regretfully agree to placing Adjutant
on the "inactive" list. If circumstances change, I will ask about
resurrecting the project.

Albert

On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote:

---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
> Hi Braden,
>
> Please let us know about the status of your company's permission to
> maintain the project.
> As we are in Zed cycle development and there is no one to maintain/lead
> this project we need to start thinking about the next steps mentioned in
> the leaderless project etherpad
>

Hi Braden,

We have not heard back from you on whether you can help in maintaining
Adjutant.

As it has no PTL and no patches for the last 250 days, I am adding it to
the 'Inactive' project list
- https://review.opendev.org/c/openstack/governance/+/849153/1

-gmann

> - https://etherpad.opendev.org/p/zed-leaderless
>
> -gmann
>
> ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ----
> > I'm still waiting for permission to work on Adjutant. My contract ends
> > this month and I'm taking 2 months off before I start fulltime. I have
> > hope that permission will be granted while I'm out. I expect that I
> > will be able to start working on Adjutant in June.
> >
> > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:
> >
> > Hi,
> >
> > After the last PTL elections [1] the Adjutant project doesn't have any
> > PTL. It also didn't have a PTL in the Yoga cycle.
> > So this is a call for maintainers for Adjutant. If you are using it or
> > interested in it, and if you are willing to help maintain this project,
> > please contact TC members through this mailing list or directly on the
> > #openstack-tc channel @OFTC.
We can talk about possibilities to make someone the PTL of the project, or
move the project to the Distributed Project Leadership [2] model.
> >
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
> > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gagehugo at gmail.com Mon Jul 11 21:50:48 2022
From: gagehugo at gmail.com (Gage Hugo)
Date: Mon, 11 Jul 2022 16:50:48 -0500
Subject: [openstack-helm] No Meeting Tomorrow
Message-ID:

Hey team,

Since there's nothing on the agenda, tomorrow's meeting is cancelled. We
will plan on meeting again same time next week.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juliaashleykreger at gmail.com Mon Jul 11 22:13:14 2022
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Mon, 11 Jul 2022 15:13:14 -0700
Subject: [IRONIC] - Various questions around network features.
In-Reply-To:
References:
Message-ID:

Greetings! Hopefully these answers help!

On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND wrote:
>
> Hi everyone, I'm currently working again with Ironic and it's amazing!
>
> However, during our demo session to our users a few questions arose.
>
> We're currently deploying nodes using a private vlan that can't be reached
> from outside of the OpenStack network fabric (vlan 101 - 192.168.101.0/24)
> and everything is fine with this provisioning network, as our ToR switches
> all know about it and the other control plane VLANs, such as the internal
> APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact
> the internal IRONIC APIs.

Nice, I've had my lab configured like this in the past.

> (When you declare a port as a trunk allowing all VLANs on an Aruba switch,
> it seems it automatically analyses the CIDR your host tries to reach from
> your VLAN and routes everything to the corresponding VLAN that matches the
> destination IP.)

Ugh, that... could be fun :\

> So now, I still have a few tiny issues:
>
> 1/ When I spawn a nova instance on an ironic host that is set to use a flat
> network (from horizon, as a user), why does the nova wizard still ask for a
> neutron network if it's not set on the provisioned host by the IPA ramdisk
> right after the whole disk image copy? Is that some missing development on
> horizon or did I miss something?

Horizon just is not aware... and you can actually have entirely different
DHCP pools on the same flat network, so that neutron network is intended
for the instance's addressing to utilize. Ironic just asks for an
allocation from a provisioning network, which can and *should* be a
different network than the tenant network.

> 2/ In a flat network layout deployment using the direct deploy scenario for
> images, am I still supposed to create an ironic provisioning network in
> neutron?
>
> From my understanding (and actually my tests) we don't, as any host booting
> on the provisioning vlan will pick up an IP and initiate the BOOTP sequence,
> as dnsmasq is just set to do that and provide the IPA ramdisk, but it's a
> bit confusing as much of the documentation explicitly requires this network
> to exist in neutron.

Yes. Direct is shorthand for "copy it over the network and write it
directly to disk". It still needs an IP address on the provisioning
network (think, subnet instead of distinct L2 broadcast domain).
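As a rough sketch of what that usually looks like (the physnet name and the
allocation pool below are placeholders picked to match your vlan 101
example, not required values):

    openstack network create --share \
        --provider-network-type flat \
        --provider-physical-network physnet1 \
        provisioning
    openstack subnet create --network provisioning \
        --subnet-range 192.168.101.0/24 \
        --allocation-pool start=192.168.101.100,end=192.168.101.200 \
        provisioning-subnet

and then ironic.conf points at it:

    [neutron]
    cleaning_network = provisioning
    provisioning_network = provisioning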
When you ask nova for an instance, it sends over what the machine should
use as a "VIF" (neutron port); however, that is never actually bound,
configuration-wise, into neutron until after the deployment completes. It
*could* be that your neutron config is such that it just works anyway, but
I suspect upstream contributors would be a bit confused if you reported an
issue and had no provisioning network defined.

> 3/ My whole OpenStack network setup is using Openvswitch and vxlan tunnels
> on top of a spine/leaf architecture using Aruba CX8360 switches (for both
> spine and leafs). Am I required to use either the networking-generic-switch
> driver or a vendor neutron driver? If that's right, how will this driver be
> able to instruct the switch to assign the host port the correct openvswitch
> vlan id and register the correct vxlan to openvswitch from this port? I
> mean, ok, neutron knows the vxlan and openvswitch the tunnel vlan
> id/interface, but what is the glue of all that?

If you're happy with flat networks, no. If you want tenant isolation,
networking-wise, yes.

NGS and other baremetal-port-aware Neutron ML2 drivers take the port-level
local link configuration (Ironic includes the port information, i.e. the
local link connection, physical network, and some other details, in the
port binding request it sends to Neutron). Those ML2 drivers then either
request that the switch configuration be updated, or take locally
configured credentials and log into the switch to toggle the configuration
of the access port which the baremetal node is attached to. Generally,
they are not vxlan-aware, and at least with networking-generic-switch,
VLAN IDs are expected and allocated via neutron. It is sort of like the
software logging into the switch and running something along the lines of
"conf t;int gi0/21;switchport mode access;switchport access vlan 391 ;
wri mem"

> 4/ I've successfully used openstack cloud-oriented CentOS and Debian images
> or snapshots of VMs to provision my hosts, which is an awesome feature, but
> I'm wondering if there is a way to let those hosts' cloud-init request the
> neutron metadata endpoint?

Generally yes, you *can* use network-attached metadata with neutron *as
long as* your switches know to direct the traffic for the metadata IP to
the Neutron metadata service(s). We know of operators who have done it
without issues, but that additional switch-configured route is often not
the best thing. Generally we recommend enabling and using configuration
drives, so the metadata is able to be picked up by cloud-init (a quick
sketch of the usual knobs follows at the end of this reply).

> I was a bit surprised about the ironic networking part, as I was expecting
> the IPA ramdisk to at least be able to set up the host OS with the
> appropriate network configuration file for whole-disk images that do not
> use encryption, by injecting that information from the neutron API into the
> host disk while it is mounted (right after the image dd).

IPA has no knowledge of how to modify the host OS in this regard. Modifying
the host OS has generally been something the ironic community has avoided,
since it is not exactly cloudy to have to do so. Generally most clouds are
running with DHCP, so as long as that is enabled and configured, things
should "just work".

Hopefully that provides a little more context. Nothing prevents you from
writing your own hardware manager that does exactly this, for what it is
worth.
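As promised above, a minimal sketch of the two usual configuration drive
knobs (standard nova/OSC options; the image, flavor, and server names are
placeholders):

    # ask for a config drive on a single instance at boot time
    openstack server create --config-drive True \
        --image my-image --flavor my-baremetal-flavor my-node

    # or force it for everything the compute service boots, in nova.conf:
    [DEFAULT]
    force_config_drive = True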
> All in all I really like the ironic approach to the baremetal provisioning
> process, and I'm pretty sure that I'm just missing a bit of understanding
> of the networking part, but it's really the most confusing part of it to
> me, as I feel like there is a missing link between neutron and the host HW
> or the switches.
>
Thanks! It is definitely one of the more complex parts given there are many
moving parts, and everyone wants (or needs) to have their networking
configured just a little differently. Hopefully I've kind of put some of
the details out there; if you need more information, please feel free to
reach out, and also please feel free to ask questions in #openstack-ironic
on irc.oftc.net.

> Thanks a lot to anyone who will take the time to explain this to me :-)

:)

From stephenfin at redhat.com Tue Jul 12 12:14:03 2022
From: stephenfin at redhat.com (Stephen Finucane)
Date: Tue, 12 Jul 2022 13:14:03 +0100
Subject: Upgrading to a more recent version of jsonschema
In-Reply-To: <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org>
References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org>
Message-ID: <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com>

On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote:
> On 6/16/22 19:53, Stephen Finucane wrote:
> > On Thu, 2022-06-16 at 17:13 +0100, Stephen Finucane wrote:
> > > On Wed, 2022-06-15 at 01:04 +0100, Sean Mooney wrote:
> > > > On Wed, 2022-06-15 at 00:58 +0100, Sean Mooney wrote:
> > > > > On Tue, 2022-06-14 at 18:49 -0500, Ghanshyam Mann wrote:
> > > > > > ---- On Tue, 14 Jun 2022 17:47:59 -0500 Thomas Goirand wrote ----
> > > > > > > On 6/13/22 00:10, Thomas Goirand wrote:
> > > > > > > > Hi,
> > > > > > > >
> > > > > > > > A few DDs are pushing me to upgrade jsonschema in Debian Unstable.
> > > > > > > > However, OpenStack global requirements are still stuck at 3.2.0. Is
> > > > > > > > there any reason for it, or should we attempt to upgrade to 4.6.0?
> > > > > > > >
> > > > > > > > I'd really appreciate it if someone (other than me) was driving this...
> > > > > > > >
> > > > > > > > Cheers,
> > > > > > > >
> > > > > > > > Thomas Goirand (zigo)
> > > > > > > >
> > > > > > >
> > > > > > > FYI, Nova fails with it:
> > > > > > > https://ci.debian.net/data/autopkgtest/unstable/amd64/n/nova/22676760/log.gz
> > > > > > >
> > > > > > > Can someone from the Nova team investigate?
> > > > > >
> > > > > > Nova failures are due to the error message change (it happens regularly; they change them in most versions)
> > > > > > in the new jsonschema version. I remember we faced this type of issue previously also, and we updated
> > > > > > nova tests not to assert the error message, but it seems like there are a few left which are failing with
> > > > > > > > > > well i think the reason this is on 3.2.0 iis nothing to do with nova > > > > > i bumpt it manually in https://review.opendev.org/c/openstack/requirements/+/845859/ > > > > > > > > > > The conflict is caused by: > > > > > The user requested jsonschema===4.0.1 > > > > > tripleo-common 16.4.0 depends on jsonschema>=3.2.0 > > > > > rsd-lib 1.2.0 depends on jsonschema>=2.6.0 > > > > > taskflow 4.7.0 depends on jsonschema>=3.2.0 > > > > > zvmcloudconnector 1.4.1 depends on jsonschema>=2.3.0 > > > > > os-net-config 15.2.0 depends on jsonschema>=3.2.0 > > > > > task-core 0.2.1 depends on jsonschema>=3.2.0 > > > > > python-zaqarclient 2.3.0 depends on jsonschema>=2.6.0 > > > > > warlock 1.3.3 depends on jsonschema<4 and >=0.7 > > > > > > > > > > https://zuul.opendev.org/t/openstack/build/06ed295bb8244c16b48e2698c1049be9 > > > > > > > > > > it looks like warlock is clamping it ot less then 4 which is why we are stokc on 3.2.0 > > > > glance client seams to be the only real user of this > > > > https://codesearch.opendev.org/?q=warlock&i=nope&literal=nope&files=&excludeFiles=&repos= > > > > perhaps we could jsut remove the dependcy? > > > > > > I've proposed vendoring this dependency in glanceclient [1]. I've also proposed > > > a change to fix this in warlock [2] but given the lack of activity there, I > > > doubt it'll merge anytime soon so the former sounds like a better option. > > > > My efforts to collect *all* the projects continues. Just cut 2.0.0 of warlock so > > we should see this make it's way through the requirements machinery in the next > > few days. I'll abandon the glanceclient change now. > > > > Stephen > > Hi Stephen, > > I hope you don't mind I ping and up this thread. > > Thanks a lot for this work. Any more progress here? We've uncapped warlock in openstack/requirements [1]. We just need the glance folks to remove their own cap now [2] so that we can raise the version in upper constraint. Stephen [1] https://review.opendev.org/c/openstack/requirements/+/849284 [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > I'm being pressed by the Debian community to update jsonschema in > Unstable, because 3.2.0 is breaking other software (at least 2 > packages). If I do, I know things will break in OpenStack. So this > *MUST* be fixed for Zed... > > Cheers, > > Thomas Goirand (zigo) > From stephenfin at redhat.com Tue Jul 12 15:50:12 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Jul 2022 16:50:12 +0100 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: References: Message-ID: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> On Thu, 2022-06-30 at 15:39 +0200, Herve Beraud wrote: > Hello everybody, > > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member of > the oslo core team. > > During the last months Takashi has been a significant contributor to the oslo > projects. > > Obviously we think he'd make a good addition to the core team. If there are no > objections, I'll make that happen in a week. > > Thanks. +1 from me. It would be great to have tkajinam onboard. Stephen -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From stephenfin at redhat.com Tue Jul 12 15:50:30 2022
From: stephenfin at redhat.com (Stephen Finucane)
Date: Tue, 12 Jul 2022 16:50:30 +0100
Subject: Propose to add Tobias Urdin as Tooz core reviewer
In-Reply-To:
References:
Message-ID: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com>

On Thu, 2022-06-30 at 15:43 +0200, Herve Beraud wrote:
> Hello everybody,
>
> It is my pleasure to propose Tobias Urdin (tobias-urdin) as a new member of
> the Tooz project core team.
>
> During the last months Tobias has been a significant contributor to the Tooz
> project.
>
> Obviously we think he'd make a good addition to the core team. If there are no
> objections, I'll make that happen in a week.
>
> Thanks.

+1 from me!

Stephen
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lokendrarathour at gmail.com Tue Jul 12 16:48:19 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Tue, 12 Jul 2022 22:18:19 +0530
Subject: [TripleO - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To:
References:
Message-ID:

Hi Shephard/Swogat,

I tried changing the setting as suggested, and it looks like it has failed
at step 4 with the error below:

2022-07-12 21:31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:24:47.736198 | 2.21s
2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity internal endpoint
2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."}
2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000

Checking the endpoint list further, I see only one endpoint for keystone is
getting created:

DeprecationWarning
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
| 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity | True | admin | http://30.30.30.173:35357 |
| 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 |
| 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+

It looks like something related to SSL. We have also verified that the GUI
login screen shows that the certificates are applied.

Exploring more in the logs; meanwhile, any suggestions or known
observations would be of great help.
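One check we are also running, to see whether the certificate actually
carries the VIP in its subjectAltName (a command sketch; the file path is
the DeployedSSLCertificatePath from our enable-tls.yaml below):

    openssl x509 -in /etc/pki/tls/private/overcloud_endpoint.pem \
        -noout -text | grep -A1 'Subject Alternative Name'

If the SAN does not list IP:fd00:fd00:fd00:9900::81, that would explain
the hostname mismatch errors.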
Thanks again for the support.

Best Regards,
Lokendra

On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan wrote:

> I had faced a similar kind of issue. For an IP-based setup you need to
> specify the domain name as the IP that you are going to use; this error is
> showing up because the SSL is IP-based but the FQDNs seem to be
> undercloud.com or overcloud.example.com.
> I think for the undercloud you can change the undercloud.conf.
>
> And will it work if we specify the CloudDomain parameter as the IP address
> for the overcloud? Because it seems he has not specified the CloudDomain
> parameter, and overcloud.example.com is the default domain for the
> overcloud.
>
> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan,
> wrote:
>
>> What is the domain name you have specified in the undercloud.conf file?
>> And what is the FQDN used for the generation of the SSL cert?
>>
>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour,
>> wrote:
>>
>>> Hi Team,
>>> We were trying to install the overcloud with SSL enabled, for which the
>>> UC is installed, but the OC install is failing at step 4:
>>>
>>> ERROR
>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
>>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
>>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
>>> exact error", "rc": 1}
>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>> 600, in urlopen\n chunked=chunked)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>> in _make_request\n self._validate_conn(conn)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>> in _validate_conn\n conn.connect()\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>> connect\n _match_hostname(cert, self.assert_hostname or >>> server_hostname)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>> send\n timeout=timeout\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>> increment\n raise MaxRetryError(_pool, url, error or >>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>> in request\n resp = self.send(prep, **send_kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>> send\n r = adapter.send(request, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 138, in _do_create_plugin\n authenticated=False)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 610, in get_discovery\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>> in get_discovery\n disc = Discover(session, url, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>> in __init__\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>> in get_version_data\n resp = session.get(url, headers=headers, >>> 
authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>> in get\n return self.request(url, 'GET', **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>> request\n resp = send(**kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>> in _send_request\n raise >>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File \"\", line 102, in \n File \"\", line >>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>> mod_name, mod_spec, pkg_name, script_name)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>> run_globals)\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 185, in \n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 181, in main\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>> line 407, in __call__\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 141, in run\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 517, in search_services\n services = self.list_services()\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>> File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>> line 32, in _identity_client\n 'identity', min_version=2, >>> max_version='3.latest')\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>> **kwargs)\n File >>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 271, in get_endpoint_data\n service_catalog = >>> self.get_access(session).service_catalog\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 206, in get_auth_ref\n self._plugin = >>> self._do_create_plugin(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>> versioned identity endpoints when attempting to authenticate. Please check >>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.271914 | 2.47s >>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.273659 | 2.47s >>> >>> PLAY RECAP >>> ********************************************************************* >>> localhost : ok=0 changed=0 unreachable=0 >>> failed=0 skipped=2 rescued=0 ignored=0 >>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>> failed=0 skipped=198 rescued=0 ignored=0 >>> undercloud : ok=28 changed=7 unreachable=0 >>> failed=1 skipped=3 rescued=0 ignored=0 >>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> >>> >>> in the deploy.sh: >>> >>> openstack overcloud deploy --templates \ >>> -r /home/stack/templates/roles_data.yaml \ >>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>> --baremetal-deployment >>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>> --network-config \ >>> -e /home/stack/templates/environment.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>> \ >>> -e /home/stack/templates/ironic-config.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>> -e >>> 
/usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml
>>> \
>>> -e
>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml
>>> \
>>> -e
>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml
>>> \
>>> -e
>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>>> -e
>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>>> -e /home/stack/containers-prepare-parameter.yaml
>>>
>>> Additional lines, as highlighted in yellow, were passed with modifications:
>>> tls-endpoints-public-ip.yaml:
>>> Passed as is in the defaults.
>>> enable-tls.yaml:
>>>
>>> # *******************************************************************
>>> # This file was created automatically by the sample environment
>>> # generator. Developers should use `tox -e genconfig` to update it.
>>> # Users are recommended to make changes to a copy of the file instead
>>> # of the original, if any customizations are needed.
>>> # *******************************************************************
>>> # title: Enable SSL on OpenStack Public Endpoints
>>> # description: |
>>> # Use this environment to pass in certificates for SSL deployments.
>>> # For these values to take effect, one of the tls-endpoints-*.yaml
>>> # environments must also be used.
>>> parameter_defaults:
>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>> # Type: boolean
>>> HorizonSecureCookies: True
>>>
>>> # Specifies the default CA cert to use if TLS is used for services in
>>> the public network.
>>> # Type: string
>>> PublicTLSCAFile:
>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>>
>>> # The content of the SSL certificate (without Key) in PEM format.
>>> # Type: string
>>> SSLRootCertificate: |
>>> -----BEGIN CERTIFICATE-----
>>> ----*** CERTIFICATE LINES TRIMMED ***
>>> -----END CERTIFICATE-----
>>>
>>> SSLCertificate: |
>>> -----BEGIN CERTIFICATE-----
>>> ----*** CERTIFICATE LINES TRIMMED ***
>>> -----END CERTIFICATE-----
>>> # The content of an SSL intermediate CA certificate in PEM format.
>>> # Type: string
>>> SSLIntermediateCertificate: ''
>>>
>>> # The content of the SSL Key in PEM format.
>>> # Type: string
>>> SSLKey: |
>>> -----BEGIN PRIVATE KEY-----
>>> ----*** CERTIFICATE LINES TRIMMED ***
>>> -----END PRIVATE KEY-----
>>>
>>> # ******************************************************
>>> # Static parameters - these are values that must be
>>> # included in the environment but should not be changed.
>>> # ******************************************************
>>> # The filepath of the certificate as it will be stored in the
>>> controller.
>>> # Type: string
>>> DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>>
>>> # *********************
>>> # End static parameters
>>> # *********************
>>>
>>> inject-trust-anchor.yaml
>>>
>>> # *******************************************************************
>>> # This file was created automatically by the sample environment
>>> # generator. Developers should use `tox -e genconfig` to update it.
>>> # Users are recommended to make changes to a copy of the file instead
>>> # of the original, if any customizations are needed.
>>> # *******************************************************************
>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>> # description: |
>>> # When using an SSL certificate signed by a CA that is not in the
>>> default
>>> # list of CAs, this environment allows adding a custom CA certificate
>>> to
>>> # the overcloud nodes.
>>> parameter_defaults:
>>> # The content of a CA's SSL certificate file in PEM format. This is
>>> evaluated on the client side.
>>> # Mandatory. This parameter must be set by the user.
>>> # Type: string
>>> SSLRootCertificate: |
>>> -----BEGIN CERTIFICATE-----
>>> ----*** CERTIFICATE LINES TRIMMED ***
>>> -----END CERTIFICATE-----
>>>
>>> resource_registry:
>>> OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>>
>>> The procedure to create such files was followed using:
>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>
>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed,
>>> IP-based certificate, without DNS*.
>>>
>>> Any idea around this error would be of great help.
>>>
>>> --
>>> skype: lokendrarathour
>>>
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
From swogatpradhan22 at gmail.com Tue Jul 12 19:22:18 2022
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Wed, 13 Jul 2022 00:52:18 +0530
Subject: CRITICAL rally [-] Unhandled error: KeyError: 'openstack' Edit | Openstack wallaby tripleo | centos 8 stream
Message-ID:

Hi,

I am using OpenStack Wallaby in a TripleO architecture.
I googled around and found I can use openstack-rally for testing the OpenStack deployment, and that it can also generate a report.
So I tried installing Rally using "yum install openstack-rally" and "pip3 install openstack-rally", and finally cloned the git repo and ran "python3 setup.py install", but no matter what I do I get the error "Unhandled error: KeyError: 'openstack'":

(overcloud) [root at hkg2director ~]# rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| 484aae52-a690-4163-828b-16adcaa0d8fb | 2022-06-07T05:48:39.039296 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+
Using deployment: 484aae52-a690-4163-828b-16adcaa0d8fb
(overcloud) [root at hkg2director ~]# rally deployment show 484aae52-a690-4163-828b-16adcaa0d8fb
Command failed, please check log for more info
2022-06-07 13:48:58.651 482053 CRITICAL rally [-] Unhandled error: KeyError: 'openstack'
2022-06-07 13:48:58.651 482053 ERROR rally Traceback (most recent call last):
2022-06-07 13:48:58.651 482053 ERROR rally File "/bin/rally", line 10, in
2022-06-07 13:48:58.651 482053 ERROR rally sys.exit(main())
2022-06-07 13:48:58.651 482053 ERROR rally File "/usr/local/lib/python3.6/site-packages/rally/cli/main.py", line 40, in main
2022-06-07 13:48:58.651 482053 ERROR rally return cliutils.run(sys.argv, categories)
2022-06-07 13:48:58.651 482053 ERROR rally File "/usr/local/lib/python3.6/site-packages/rally/cli/cliutils.py", line 669, in run
2022-06-07 13:48:58.651 482053 ERROR rally ret = fn(*fn_args, **fn_kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally File "/usr/local/lib/python3.6/site-packages/rally/cli/envutils.py", line 142, in inner
2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally File "/usr/local/lib/python3.6/site-packages/rally/plugins/__init__.py", line 59, in wrapper
2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally File "/usr/local/lib/python3.6/site-packages/rally/cli/commands/deployment.py", line 205, in show
2022-06-07 13:48:58.651 482053 ERROR rally creds = deployment["credentials"]["openstack"][0]
2022-06-07 13:48:58.651 482053 ERROR rally KeyError: 'openstack'
2022-06-07 13:48:58.651 482053 ERROR rally

Can someone please help me fix this issue, or suggest which tool to use to test and benchmark the OpenStack deployment?

Also, is Tempest available for Wallaby? I checked the opendev and GitHub repos; the last tags available are for Victoria.

With regards,

Swogat Pradhan
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
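The traceback itself shows where this goes wrong: `rally deployment show` reads deployment["credentials"]["openstack"], and that key is only populated when the OpenStack platform plugins were available at deployment-create time. Since Rally was split into rally plus the separate rally-openstack package, those plugins no longer ship with the base package, and the older `rally deployment` commands have been superseded by the `rally env` workflow (which is also what is suggested later in this digest). A minimal sketch of that flow, assuming a sourced overcloudrc; the package name and subcommands should be checked against the installed Rally version:

  pip3 install rally-openstack      # pulls in rally itself as a dependency
  source ~/overcloudrc
  rally env create --from-sysenv --name existing
  rally env check                   # verifies the platform is actually reachable

On the Tempest question: Tempest is developed branchless, so recent Tempest releases are generally expected to work against Wallaby clouds; the release-to-branch compatibility mapping lives in Tempest's own documentation rather than in per-release tags.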
From dale at catalystcloud.nz Tue Jul 12 21:18:57 2022
From: dale at catalystcloud.nz (Dale Smith)
Date: Wed, 13 Jul 2022 09:18:57 +1200
Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers
In-Reply-To: <479542002.91408.1657570253503@mail.yahoo.com>
References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com>
Message-ID: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz>

Hi gmann and Albert,

I'd like to put my hand up for PTL of Adjutant if you are unable, Albert.

Catalyst Cloud continues to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of the Adjutant codebase alongside Adrian Turjak in 2015/2016.

cheers,

Dale Smith

On 12/07/22 08:10, Albert Braden wrote:
> Unfortunately I was not able to get permission to be Adjutant PTL.
> They didn't say no, but the decision makers are too busy to address
> the issue. As I settle into this new position, I am realizing that I
> don't have time to do it anyway, so I will have to regretfully agree
> to placing Adjutant on the "inactive" list. If circumstances change, I
> will ask about resurrecting the project.
>
> Albert
> On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote:
>
> ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
> > Hi Braden,
> >
> > Please let us know about the status of your company's permission to maintain the project.
> > As we are in Zed cycle development and there is no one to maintain/lead this project we
> > need to start thinking about the next steps mentioned in the leaderless project etherpad
>
> Hi Braden,
>
> We have not heard back from you if you can help in maintaining the Adjutant.
>
> As it has no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project
> list
> - https://review.opendev.org/c/openstack/governance/+/849153/1
>
> -gmann
>
> > - https://etherpad.opendev.org/p/zed-leaderless
> >
> > -gmann
> >
> > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ----
> > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June.
> > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:
> > >
> > > Hi,
> > >
> > > After last PTL elections [1] Adjutant project don't have any PTL. It also didn't had PTL in the Yoga cycle already.
> > > So this is call for maintainters for Adjutant. If You are using it or interested in it, and if You are willing to help maintaining this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can talk possibilities to make someone a PTL of the project or going with this project to the Distributed Project Leadership [2] model.
> > >
> > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
> > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
> > >
> > > --
> > > Slawek Kaplonski
> > > Principal Software Engineer
> > > Red Hat
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: OpenPGP_signature
Type: application/pgp-signature
Size: 833 bytes
Desc: OpenPGP digital signature
URL:

From gmann at ghanshyammann.com Wed Jul 13 00:27:45 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 12 Jul 2022 19:27:45 -0500
Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers
In-Reply-To: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz>
References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com> <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz>
Message-ID: <181f4f4636e.1179ff511475329.1733911049127188418@ghanshyammann.com>

---- On Tue, 12 Jul 2022 16:18:57 -0500 Dale Smith wrote ---
> Hi gmann and Albert,
> I'd like to put my hand up for PTL of Adjutant if you are unable, Albert.
> Catalyst Cloud continue to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of Adjutant codebase alongside Adrian Turjak in 2015/2016.

Thanks Dale, please propose a patch in governance to update the PTL info.

Example: https://review.opendev.org/c/openstack/governance/+/807884

-gmann

>
> cheers,
> Dale Smith
>
> On 12/07/22 08:10, Albert Braden wrote:
> Unfortunately I was not able to get permission to be Adjutant PTL. They didn't say no, but the decision makers are too busy to address the issue. As I settle into this new position, I am realizing that I don't have time to do it anyway, so I will have to regretfully agree to placing Adjutant on the "inactive" list. If circumstances change, I will ask about resurrecting the project.
>
> Albert
> On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote:
>
> ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
> > Hi Braden,
> >
> > Please let us know about the status of your company's permission to maintain the project.
> > As we are in Zed cycle development and there is no one to maintain/lead this project we > > need to start thinking about the next steps mentioned in the leaderless project etherpad > > > Hi Braden, > > We have not heard back from you if you can help in maintaining the Adjutant. > > As it has no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project > list > - https://review.opendev.org/c/openstack/governance/+/849153/1 > > -gmann > > > - https://etherpad.opendev.org/p/zed-leaderless > > > > -gmann > > > > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- > > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. > > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > After last PTL elections [1] Adjutant project don't have any PTL. It also didn't had PTL in the Yoga cycle already. > > > So this is call for maintainters for Adjutant. If You are using it or interested in it, and if You are willing to help maintaining this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can talk possibilities to make someone a PTL of the project or going with this project to the Distributed Project Leadership [2] model. > > > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html > > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > > > > > -- > > > Slawek Kaplonski > > > Principal Software Engineer > > > Red Hat > > > > > > From park0kyung0won at dgist.ac.kr Wed Jul 13 05:45:33 2022 From: park0kyung0won at dgist.ac.kr (=?UTF-8?B?67CV6rK97JuQ?=) Date: Wed, 13 Jul 2022 14:45:33 +0900 (KST) Subject: Guide for Openstack installation with HA, OVN ? Message-ID: <321848520.267831.1657691133322.JavaMail.root@mailwas2> An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Wed Jul 13 06:17:03 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Wed, 13 Jul 2022 01:17:03 -0500 Subject: Guide for Openstack installation with HA, OVN ? In-Reply-To: <321848520.267831.1657691133322.JavaMail.root@mailwas2> References: <321848520.267831.1657691133322.JavaMail.root@mailwas2> Message-ID: Any idea on what you want to use to deploy your cluster? https://docs.openstack.org/openstack-ansible/latest/ https://wiki.openstack.org/wiki/TripleO ??? Cheers! On Wed, Jul 13, 2022 at 12:53 AM ??? wrote: > Hello > > > I've tried minimal installation of openstack following official > documentation > > Now I want to install openstack with > > > 1. High availability configuration - keystone, placement, neutron, glance, > cinder, ... > > 2. Open Virtual Network for neutron driver > > > But it's hard to find documentation online > > Could you provide some links for the material? > > > Thank you! > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From manchandavishal143 at gmail.com Wed Jul 13 07:36:37 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 13 Jul 2022 13:06:37 +0530 Subject: [horizon] Cancelling Today's Weekly meeting Message-ID: Hello Team, As discussed in the last weakly meeting, I am on vaccination this week. So there will be no horizon weekly meeting today. If anything urgent, please reach out to horizon core team. Thanks & regards, Vishal Manchanda -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmellado at redhat.com Wed Jul 13 08:37:32 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 13 Jul 2022 10:37:32 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> Message-ID: <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Well... they expired... Is this a matter of just contacting JetBrains? I assume coolsvap won't be able to do this any longer, but IMHO this may just be handled by someone from the foundation/TC. mnaser maybe? xD CC'ing you just in case you'd like to step up, otherwise I'll try contacting JetBrains on my own ;) Thanks! Daniel On 8/7/22 13:43, Lajos Katona wrote: > Hi, > Thanks for asking, I have the same problem, my license also expired this > week. > > Lajos Katona > > Daniel Mellado > ezt > ?rta (id?pont: 2022. j?l. 8., P, 10:04): > > So... no news about this? Should we just assume that the licenses will > be no longer? Bummer... > > On 7/7/22 13:08, Daniel Mellado wrote: > > Just noticed that as well, thanks for bringing this up Eyal! > > > > On 7/7/22 12:04, Eyal B wrote: > >> Hello, > >> > >> Will the licenses be renewed ? they ended on July 5 > >> > >> Eyal > >> > >> On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni > > >> >> wrote: > >> > >> ??? Sorry for the typo, It'd be July 5, 2022 > >> > >> > >> ??? On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray > > >> ??? >> > wrote: > >> > >> ??????? Hi Swapnil,____ > >> > >> ??????? We?re at July 2021 already ? so they expire at the end > of this > >> ??????? month?____ > >> > >> ??????? __ __ > >> > >> ??????? *From: *Swapnil Kulkarni > >> ??????? >> > >> ??????? *Date: *Tuesday, 6 July 2021 at 17:50 > >> ??????? *To: *openstack-discuss at lists.openstack.org > > >> ??????? > > >> ??????? > >> ??????? >> > >> ??????? *Subject: *[all] PyCharm Licenses Renewed till July 2021____ > >> > >> ??????? Hello,____ > >> > >> ??????? __ __ > >> > >> ??????? Happy to inform you the open source developer?license for > >> ??????? Pycharm has been renewed for 1 additional year till July > 2021. > >> ____ > >> > >> > >> ??????? ____ > >> > >> ??????? Best?Regards, > >> ??????? Swapnil Kulkarni > >> ??????? coolsvap at gmail dot com____ > >> > >> ??????? __ __ > >> > > From thierry at openinfra.dev Wed Jul 13 09:04:54 2022 From: thierry at openinfra.dev (Thierry Carrez) Date: Wed, 13 Jul 2022 11:04:54 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: Daniel Mellado wrote: > Well... they expired... Is this a matter of just contacting JetBrains? I > assume coolsvap won't be able to do this any longer, but IMHO this may > just be handled by someone from the foundation/TC. > > mnaser maybe? 
xD > > CC'ing you just in case you'd like to step up, otherwise I'll try > contacting JetBrains on my own ;) I'd recommend that someone relying on JetBrains handles the relationship. This is why Swapnil was handling it before :) -- Thierry Carrez From senrique at redhat.com Wed Jul 13 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 13 Jul 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 07-13-2022 Message-ID: This is a bug report from 07-06-2022 to 07-13-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/os-brick/+bug/1981455 "RBD disconnect fails with AttributeError for startswith." Fix proposed to master. Medium - https://bugs.launchpad.net/cinder/+bug/1981068 "Dell PowerStore - cinder cannot delete volumes." No patch proposed to master yet. - https://bugs.launchpad.net/cinder/+bug/1981420 "Dell PowerMax - error when creating synchronous volumes." Low - https://bugs.launchpad.net/cinder/+bug/1980870 "Dell PowerMax driver may deadlock moving volumes between SGs." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1981354 "Infinidat Cinder driver does not return all iSCSI portals for multipath storage." Fix proposed to master. Incomplete - https://bugs.launchpad.net/cinder/+bug/1981211 "[stable/yoga] Update attachment failed for attachment." Unassigned. Cheers, Sofia -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmellado at redhat.com Wed Jul 13 11:21:34 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 13 Jul 2022 13:21:34 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: <123a6045-0ab2-e5d4-f200-53c6d5493137@redhat.com> Totally agree, In any case, and if there's anyone who would be able to rely on JetBrains please feel free to take this over, I've sent an email to JetBrains asking for this. Initial reply below: ##- Please type your reply above this line -## Hello, Thanks for contacting JetBrains Community Support. Your request (4161236) has been received and is being reviewed by our staff. To add additional comments, reply to this email or follow the link below: https://community-support.jetbrains.com/hc/requests/4161236 Thanks! ;) On 13/7/22 11:04, Thierry Carrez wrote: > Daniel Mellado wrote: >> Well... they expired... Is this a matter of just contacting JetBrains? >> I assume coolsvap won't be able to do this any longer, but IMHO this >> may just be handled by someone from the foundation/TC. >> >> mnaser maybe? xD >> >> CC'ing you just in case you'd like to step up, otherwise I'll try >> contacting JetBrains on my own ;) > > I'd recommend that someone relying on JetBrains handles the > relationship. 
This is why Swapnil was handling it before :) > From smooney at redhat.com Wed Jul 13 11:48:23 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 13 Jul 2022 12:48:23 +0100 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> On Wed, 2022-07-13 at 11:04 +0200, Thierry Carrez wrote: > Daniel Mellado wrote: > > Well... they expired... Is this a matter of just contacting JetBrains? I > > assume coolsvap won't be able to do this any longer, but IMHO this may > > just be handled by someone from the foundation/TC. > > > > mnaser maybe? xD > > > > CC'ing you just in case you'd like to step up, otherwise I'll try > > contacting JetBrains on my own ;) > > I'd recommend that someone relying on JetBrains handles the > relationship. This is why Swapnil was handling it before :) i belive you can still use the comunity edition by the way without a liceince for opensouce or personal work and im not sure it the current lience is jut for future updates or if you can only use the softwhere during the licene periord. if its just for future updates you can continue to use it while you reach out to to JetBrains. i have used inteliJ and pycharm form time to time. but normally use emacs/nano but on rare ocations having a full fledge IDE has been handy for debuging but in general my experice is eventlet tends to break it out side of simple unit tests but the gevent compat option can help. thierry maybe the foundation could help get opendev added to https://www.jetbrains.com/community/opensource/#partner but in general its proably better for existing users to reach out to opensource at jetbrains.com > From fungi at yuggoth.org Wed Jul 13 12:20:21 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Jul 2022 12:20:21 +0000 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> Message-ID: <20220713122021.kscrifp3l5pe7od3@yuggoth.org> On 2022-07-13 12:48:23 +0100 (+0100), Sean Mooney wrote: [...] > thierry maybe the foundation could help get opendev added to > https://www.jetbrains.com/community/opensource/#partner [...] Maybe you meant OpenInfra (the foundation)? Given the OpenDev Collaboratory's strong stance in favor of purely free/libre open source developer tools, OpenDev partnering with a proprietary software vendor in order to get developers gratis access to closed-source tools would definitely send the wrong signal. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lokendrarathour at gmail.com Wed Jul 13 12:13:55 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 13 Jul 2022 17:43:55 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Team, Any input on this case raised. 
Thanks, Lokendra On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour wrote: > Hi Shephard/Swogat, > I tried changing the setting as suggested and it looks like it has failed > at step 4 with error: > > :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | > tripleo_keystone_resources : Create identity public endpoint | undercloud | > 0:24:47.736198 | 2.21s > 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | > TASK | Create identity internal endpoint > 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | > FATAL | Create identity internal endpoint | undercloud | error={"changed": > false, "extra_data": {"data": null, "details": "The request you have made > requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list > services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, > The request you have made requires authentication."} > 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 > > > Checking further the endpoint list: > I see only one endpoint for keystone is gettin created. > > DeprecationWarning > > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > | ID | Region | Service Name | Service > Type | Enabled | Interface | URL | > > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity > | True | admin | http://30.30.30.173:35357 | > | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity > | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | > | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity > | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | > > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > > > it looks like something related to the SSL, we have also verified that the > GUI login screen shows that Certificates are applied. > exploring more in logs, meanwhile any suggestions or know observation > would be of great help. > thanks again for the support. > > Best Regards, > Lokendra > > > On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan > wrote: > >> I had faced a similar kind of issue, for ip based setup you need to >> specify the domain name as the ip that you are going to use, this error is >> showing up because the ssl is ip based but the fqdns seems to be >> undercloud.com or overcloud.example.com. >> I think for undercloud you can change the undercloud.conf. >> >> And will it work if we specify clouddomain parameter to the IP address >> for overcloud? because it seems he has not specified the clouddomain >> parameter and overcloud.example.com is the default domain for >> overcloud.example.com. >> >> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >> wrote: >> >>> What is the domain name you have specified in the undercloud.conf file? >>> And what is the fqdn name used for the generation of the SSL cert? 
>>> >>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >>> wrote: >>> >>>> Hi Team, >>>> We were trying to install overcloud with SSL enabled for which the UC >>>> is installed, but OC install is getting failed at step 4: >>>> >>>> ERROR >>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>>> exact error", "rc": 1} >>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>> 600, in urlopen\n chunked=chunked)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>> in _make_request\n self._validate_conn(conn)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>> in _validate_conn\n conn.connect()\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>> connect\n _match_hostname(cert, self.assert_hostname or >>>> server_hostname)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>> send\n timeout=timeout\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>> increment\n raise MaxRetryError(_pool, url, error or >>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>> send\n r = adapter.send(request, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>> send\n raise SSLError(e, 
request=request)\nrequests.exceptions.SSLError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>> in get_discovery\n disc = Discover(session, url, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>> in __init__\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>> in get_version_data\n resp = session.get(url, headers=headers, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>> request\n resp = send(**kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>> in _send_request\n raise >>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File \"\", line 102, in \n File \"\", line >>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>> run_globals)\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 185, in \n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 181, in main\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>> line 407, in __call__\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 141, in run\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 517, in 
search_services\n services = self.list_services()\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>> File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>> line 32, in _identity_client\n 'identity', min_version=2, >>>> max_version='3.latest')\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 271, in get_endpoint_data\n service_catalog = >>>> self.get_access(session).service_catalog\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 206, in get_auth_ref\n self._plugin = >>>> self._do_create_plugin(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>> versioned identity endpoints when attempting to authenticate. Please check >>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.271914 | 2.47s >>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.273659 | 2.47s >>>> >>>> PLAY RECAP >>>> ********************************************************************* >>>> localhost : ok=0 changed=0 unreachable=0 >>>> failed=0 skipped=2 rescued=0 ignored=0 >>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>> failed=0 skipped=198 rescued=0 ignored=0 >>>> undercloud : ok=28 changed=7 unreachable=0 >>>> failed=1 skipped=3 rescued=0 ignored=0 >>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> >>>> >>>> in the deploy.sh: >>>> >>>> openstack overcloud deploy --templates \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>> --baremetal-deployment >>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>> --network-config \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> >>>> Addition lines as highlighted in yellow were passed with modifications: >>>> tls-endpoints-public-ip.yaml: >>>> Passed as is in the defaults. 
>>>> enable-tls.yaml: >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. >>>> # ******************************************************************* >>>> # title: Enable SSL on OpenStack Public Endpoints >>>> # description: | >>>> # Use this environment to pass in certificates for SSL deployments. >>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>> # environments must also be used. >>>> parameter_defaults: >>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>> # Type: boolean >>>> HorizonSecureCookies: True >>>> >>>> # Specifies the default CA cert to use if TLS is used for services in >>>> the public network. >>>> # Type: string >>>> PublicTLSCAFile: >>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>> >>>> # The content of the SSL certificate (without Key) in PEM format. >>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> >>>> SSLCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> # The content of an SSL intermediate CA certificate in PEM format. >>>> # Type: string >>>> SSLIntermediateCertificate: '' >>>> >>>> # The content of the SSL Key in PEM format. >>>> # Type: string >>>> SSLKey: | >>>> -----BEGIN PRIVATE KEY----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END PRIVATE KEY----- >>>> >>>> # ****************************************************** >>>> # Static parameters - these are values that must be >>>> # included in the environment but should not be changed. >>>> # ****************************************************** >>>> # The filepath of the certificate as it will be stored in the >>>> controller. >>>> # Type: string >>>> DeployedSSLCertificatePath: >>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>> >>>> # ********************* >>>> # End static parameters >>>> # ********************* >>>> >>>> inject-trust-anchor.yaml >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. >>>> # ******************************************************************* >>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>> # description: | >>>> # When using an SSL certificate signed by a CA that is not in the >>>> default >>>> # list of CAs, this environment allows adding a custom CA certificate >>>> to >>>> # the overcloud nodes. >>>> parameter_defaults: >>>> # The content of a CA's SSL certificate file in PEM format. This is >>>> evaluated on the client side. >>>> # Mandatory. This parameter must be set by the user. 
>>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> >>>> resource_registry: >>>> OS::TripleO::NodeTLSCAData: >>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>> >>>> >>>> >>>> >>>> The procedure to create such files was followed using: >>>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>>> >>>> >>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed IP-based >>>> certificate, without DNS. * >>>> >>>> Any idea around this error would be of great help. >>>> >>>> -- >>>> skype: lokendrarathour >>>> >>>> >>>> > > > -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Wed Jul 13 12:55:07 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Wed, 13 Jul 2022 08:55:07 -0400 Subject: [OpenStack-Ansible][Neutron] Guide for Openstack installation with HA, OVN ? Message-ID: Hello. I would like to extend Park's question with a specific question of my own that could be used as an example the 'first time deployer' experience. My question is about Neutron deployment as it is specified in the various openstack_user_config.yml examples. openstack_user_config.yml.prod.example declares network-infra_hosts and network-agent_hosts, whereas openstack_user_config.yml.singlebond.example only declares network_hosts. For the novice trying to choose which example file to customize for our deployment, the following concerns arise: - If I use network_hosts, does that deploy both the infra and the agent on each network_host? - If I use network-infra_hosts and network-agent_hosts, but give both the same set of host names/IPs, will it deploy correctly and produce a working Neutron service? - If I successfully deploy a smaller cluster using network_hosts, and then grow my cluster, what criteria indicate the point at which I need to switch to infra_hosts and agent_hosts, and how many of each should I deploy? In reading through the Neutron documentation, I see some 'what' and some 'how', but very little 'why'. Perhaps a section titled 'Practical Deployment Considerations'? At this point I just want to know how to make Neutron deployment work right and how to be able to tell, after deployment, that it is working right. Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu On Wed, Jul 13, 2022 at 1:46 AM ??? wrote: > Hello > > > I've tried minimal installation of openstack following official > documentation > > Now I want to install openstack with > > > 1. High availability configuration - keystone, placement, neutron, glance, > cinder, ... > > 2. Open Virtual Network for neutron driver > > > But it's hard to find documentation online > > Could you provide some links for the material? > > > Thank you! > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From kdhall at binghamton.edu Wed Jul 13 12:55:07 2022
From: kdhall at binghamton.edu (Dave Hall)
Date: Wed, 13 Jul 2022 08:55:07 -0400
Subject: [OpenStack-Ansible][Neutron] Guide for Openstack installation with HA, OVN ?
Message-ID:

Hello.

I would like to extend Park's question with a specific question of my own that could be used as an example of the 'first-time deployer' experience.

My question is about Neutron deployment as it is specified in the various openstack_user_config.yml examples.

openstack_user_config.yml.prod.example declares network-infra_hosts and network-agent_hosts, whereas openstack_user_config.yml.singlebond.example only declares network_hosts.

For the novice trying to choose which example file to customize for our deployment, the following concerns arise:

- If I use network_hosts, does that deploy both the infra and the agent on each network_host?
- If I use network-infra_hosts and network-agent_hosts, but give both the same set of host names/IPs, will it deploy correctly and produce a working Neutron service?
- If I successfully deploy a smaller cluster using network_hosts, and then grow my cluster, what criteria indicate the point at which I need to switch to infra_hosts and agent_hosts, and how many of each should I deploy?

In reading through the Neutron documentation, I see some 'what' and some 'how', but very little 'why'. Perhaps a section titled 'Practical Deployment Considerations'?

At this point I just want to know how to make Neutron deployment work right and how to be able to tell, after deployment, that it is working right.

Thanks.

-Dave

--
Dave Hall
Binghamton University
kdhall at binghamton.edu

On Wed, Jul 13, 2022 at 1:46 AM 박경원 wrote:

> Hello
>
> I've tried minimal installation of openstack following official
> documentation
>
> Now I want to install openstack with
>
> 1. High availability configuration - keystone, placement, neutron, glance,
> cinder, ...
>
> 2. Open Virtual Network for neutron driver
>
> But it's hard to find documentation online
>
> Could you provide some links for the material?
>
> Thank you!
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
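On the first two bullets, the shapes being compared look like this in openstack_user_config.yml; which containers each group actually produces is decided by the env.d/neutron.yml shipped with the openstack-ansible branch in use, so treat this as a sketch to diff against your own checkout rather than an authoritative answer (host names and addresses are placeholders):

  # singlebond-style: one group covering the Neutron server and its agents
  network_hosts:
    infra1:
      ip: 172.29.236.11

  # prod-style: the same services split into two groups; listing the same
  # hosts under both is the usual way to co-locate them again
  network-infra_hosts:
    infra1:
      ip: 172.29.236.11
  network-agent_hosts:
    infra1:
      ip: 172.29.236.11

For the "is it working" check after deployment, `openstack network agent list` reporting every expected agent as alive is a reasonable first signal, alongside the scenario-specific checks in the os_neutron documentation linked in the reply below.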
From noonedeadpunk at gmail.com Wed Jul 13 14:58:42 2022
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Wed, 13 Jul 2022 16:58:42 +0200
Subject: [OpenStack-Ansible][Neutron] Guide for Openstack installation with HA, OVN ?
In-Reply-To:
References:
Message-ID:

Hey,

Well, the documentation and examples you're referring to mostly consider using lxb or ovs for deployment. We have more network scenarios described in the os_neutron documentation. Specifically, the OVN-related doc is placed here:
https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html
It also mentions commands for verifying the state of the deployment.

For the OSA default setup you can also check some diagrams here that might be helpful in understanding the deployment description and would actually answer the question "why":
https://docs.openstack.org/openstack-ansible/latest/reference/architecture/container-networking.html
https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html

On Wed, 13 Jul 2022 at 15:01, Dave Hall wrote:

> Hello.
>
> I would like to extend Park's question with a specific question of my own
> that could be used as an example the 'first time deployer' experience.
>
> My question is about Neutron deployment as it is specified in the various
> openstack_user_config.yml examples.
>
> openstack_user_config.yml.prod.example declares network-infra_hosts and
> network-agent_hosts, whereas openstack_user_config.yml.singlebond.example
> only declares network_hosts.
>
> For the novice trying to choose which example file to customize for our
> deployment, the following concerns arise:
>
> - If I use network_hosts, does that deploy both the infra and the
> agent on each network_host?
> - If I use network-infra_hosts and network-agent_hosts, but give both
> the same set of host names/IPs, will it deploy correctly and produce a
> working Neutron service?
> - If I successfully deploy a smaller cluster using network_hosts, and
> then grow my cluster, what criteria indicate the point at which I need to
> switch to infra_hosts and agent_hosts, and how many of each should I deploy?
>
> In reading through the Neutron documentation, I see some 'what' and some
> 'how', but very little 'why'. Perhaps a section titled 'Practical
> Deployment Considerations'?
>
> At this point I just want to know how to make Neutron deployment work
> right and how to be able to tell, after deployment, that it is working
> right.
>
> Thanks.
>
> -Dave
>
> --
> Dave Hall
> Binghamton University
> kdhall at binghamton.edu
>
> On Wed, Jul 13, 2022 at 1:46 AM ??? wrote:
>
>> Hello
>>
>> I've tried minimal installation of openstack following official
>> documentation
>>
>> Now I want to install openstack with
>>
>> 1. High availability configuration - keystone, placement, neutron, glance,
>> cinder, ...
>>
>> 2. Open Virtual Network for neutron driver
>>
>> But it's hard to find documentation online
>>
>> Could you provide some links for the material?
>>
>> Thank you!
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose.castro.leon at cern.ch Wed Jul 13 15:05:26 2022
From: jose.castro.leon at cern.ch (Jose Castro Leon)
Date: Wed, 13 Jul 2022 17:05:26 +0200
Subject: [infra] Tarballs are not accessible anymore
Message-ID: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch>

Hi,
I don't know if someone noticed already, but the tarballs are not
accessible anymore; is that expected?

https://tarballs.opendev.org/openstack/

Cheers

Jose Castro Leon
CERN Cloud Infrastructure

From noonedeadpunk at gmail.com Wed Jul 13 15:14:21 2022
From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov)
Date: Wed, 13 Jul 2022 17:14:21 +0200
Subject: Re: [infra] Tarballs are not accessible anymore
In-Reply-To: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch>
References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch>
Message-ID:

Infra folks are already aware of the issue and working on the service
recovery. The issue happened due to an incident at the hosting provider
where the tarballs are located, so a bit of patience would be appreciated.

On Wed, 13 Jul 2022 at 17:12, Jose Castro Leon wrote:
>
> Hi,
> I don't know if someone noticed already but the tarballs are not
> accessible anymore, is that expected?
> > https://tarballs.opendev.org/openstack/ > > Cheers > > Jose Castro Leon > CERN Cloud Infrastructure > From cboylan at sapwetik.org Wed Jul 13 15:53:38 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 13 Jul 2022 08:53:38 -0700 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> Message-ID: <5d292115-bb05-4b58-9810-95d9b7d4e7b8@www.fastmail.com> On Wed, Jul 13, 2022, at 8:14 AM, Dmitriy Rabotyagov wrote: > Infra folks are already aware of the issue and working on the service > recovery. Issue happened due to an incident in a hosting provider > where tarballs are located. So a bit of patience would be appreciated. > > ??, 13 ???. 2022 ?. ? 17:12, Jose Castro Leon : >> >> Hi, >> I don't know if someone noticed already but the tarballs are not >> accessible anymore, is that expected? >> >> https://tarballs.opendev.org/openstack/ >> >> Cheers >> >> Jose Castro Leon >> CERN Cloud Infrastructure >> Tarballs appear to be accessible now. We are serving them from our openafs read only replica on the second openafs fileserver. Note, that any publishing to openafs is likely to fail until we get the read write volumes online again. I would avoid making releases until given the all clear. From zigo at debian.org Wed Jul 13 16:21:33 2022 From: zigo at debian.org (Thomas Goirand) Date: Wed, 13 Jul 2022 18:21:33 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: On 7/12/22 14:14, Stephen Finucane wrote: > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: >> Hi Stephen, >> >> I hope you don't mind I ping and up this thread. >> >> Thanks a lot for this work. Any more progress here? > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > folks to remove their own cap now [2] so that we can raise the version in upper > constraint. > > Stephen > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 Hi ! I see these 2 are now merged, so it's job (well) done, right? Cheers, Thomas Goirand (zigo) From jose.castro.leon at cern.ch Wed Jul 13 16:46:14 2022 From: jose.castro.leon at cern.ch (Jose Castro Leon) Date: Wed, 13 Jul 2022 18:46:14 +0200 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: <5d292115-bb05-4b58-9810-95d9b7d4e7b8@www.fastmail.com> Message-ID: <2addbbe3-12c1-4650-bb34-6af831e70513@email.android.com> An HTML attachment was scrubbed... URL: From andr.kurilin at gmail.com Wed Jul 13 17:53:10 2022 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 13 Jul 2022 20:53:10 +0300 Subject: CRITICAL rally [-] Unhandled error: KeyError: 'openstack' Edit | Openstack wallaby tripleo | centos 8 stream In-Reply-To: References: Message-ID: hi! Try to use `rally env create --from-sysenv --name existing` instead. ??, 12 ???. 2022 ?. ? 22:40, Swogat Pradhan : > Hi, > > I am using openstack wallaby in tripleo architecture. 
> I googled around and found i can use openstack-rally for testing the > openstack deployment and will be able to genrate report also. > so i tried installing openstack rally using "yum install openstack-rally", > "pip3 install openstack-rally" and finally cloned the git repo and ran > "python3 setup.py install" but no matter what i do i am getting the error > 'unhandled error : keyerror 'openstack'' > > (overcloud) [root at hkg2director ~]# rally deployment create --fromenv > --name=existing > > +--------------------------------------+----------------------------+----------+------------------+--------+ > | uuid | created_at | name | status | active | > > +--------------------------------------+----------------------------+----------+------------------+--------+ > | 484aae52-a690-4163-828b-16adcaa0d8fb | 2022-06-07T05:48:39.039296 | > existing | deploy->finished | | > > +--------------------------------------+----------------------------+----------+------------------+--------+ > Using deployment: 484aae52-a690-4163-828b-16adcaa0d8fb > (overcloud) [root at hkg2director ~]# rally deployment show > 484aae52-a690-4163-828b-16adcaa0d8fb > Command failed, please check log for more info > 2022-06-07 13:48:58.651 482053 CRITICAL rally [-] Unhandled error: > KeyError: 'openstack' > 2022-06-07 13:48:58.651 482053 ERROR rally Traceback (most recent call > last): > 2022-06-07 13:48:58.651 482053 ERROR rally File "/bin/rally", line 10, in > > 2022-06-07 13:48:58.651 482053 ERROR rally sys.exit(main()) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/main.py", line 40, in main > 2022-06-07 13:48:58.651 482053 ERROR rally return cliutils.run(sys.argv, > categories) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/cliutils.py", line 669, > in run > 2022-06-07 13:48:58.651 482053 ERROR rally ret = fn(*fn_args, **fn_kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/envutils.py", line 142, > in inner > 2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/plugins/__init__.py", line > 59, in wrapper > 2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/commands/deployment.py", > line 205, in show > 2022-06-07 13:48:58.651 482053 ERROR rally creds = > deployment["credentials"]["openstack"][0] > 2022-06-07 13:48:58.651 482053 ERROR rally KeyError: 'openstack' > 2022-06-07 13:48:58.651 482053 ERROR rally > > can someone please help me in fixing this issue or give any suggestion on > which tool to use to test the openstack deployment and benchmark. > > Also is tempest available for wallaby?? i checked the opendev and github > repos last tags available are for victoria. > > > With regards, > > Swogat pradhan > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From przemyslaw.basa at redge.com Wed Jul 13 14:06:33 2022 From: przemyslaw.basa at redge.com (Przemyslaw Basa) Date: Wed, 13 Jul 2022 16:06:33 +0200 Subject: [placement] running out of VCPU resource Message-ID: Hello I have a fresh Xena deployment. 
I'm able to spawn one VM per project per compute node; after that I get placement errors like:

Jul 13 15:32:27 g-os-controller-placement-container-a796c019 placement-api[1821]: 2022-07-13 15:32:27.683 1821 WARNING placement.objects.allocation [req-314a1352-457e-4166-8d8c-ef58e6d926ad 81c8738a7d4e46b3a0ae270eccf852c9 36e66b27e5144df5ba4a2270695fea34 - default default] Over capacity for VCPU on resource provider 16f620c0-8c6f-4984-8d58-e2c00d1b32da. Needed: 1, Used: 13318, Capacity: 256.0

The number in the Used field seemed strange; it looked to me more like a memory sum than a used-VCPU count.

root at os-install:~# openstack resource provider show 16f620c0-8c6f-4984-8d58-e2c00d1b32da --allocations -c allocations -f value
{'b6da8a02-a96c-464e-a6c4-19c96c83dd44': {'resources': {'MEMORY_MB': 12288, 'VCPU': 4}}, '212798a3-6753-443d-8e7c-5c3be3f4ab54': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1024, 'VCPU': 1}}}

I've done some digging in the sources and found the SQL (in placement/objects/allocation.py) that is supposed to generate these values.

MariaDB [placement]> SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id,
    inv.total, inv.reserved, inv.allocation_ratio, allocs.used
    FROM resource_providers AS rp
    JOIN inventories AS inv ON rp.id = inv.resource_provider_id
    LEFT JOIN (
        SELECT resource_provider_id, resource_class_id, SUM(used) AS used
        FROM allocations
        WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5)
        GROUP BY resource_provider_id, resource_class_id
    ) AS allocs ON inv.resource_provider_id = allocs.resource_provider_id
        AND inv.resource_class_id = allocs.resource_class_id
    WHERE rp.id IN (5) AND inv.resource_class_id IN (0, 1, 2);
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
| id | uuid                                 | generation | resource_class_id | total   | reserved | allocation_ratio | used  |
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         37 |                 0 |     128 |        0 |                2 | 13318 |
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         37 |                 1 | 1031723 |     2048 |                1 |  NULL |
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         37 |                 2 |  901965 |        2 |                1 |  NULL |
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+

The individual parts show correct data; the join is messing up the numbers:

MariaDB [placement]> SELECT resource_provider_id, resource_class_id, SUM(used) AS used
    FROM allocations
    WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5)
    GROUP BY resource_provider_id, resource_class_id;
+----------------------+-------------------+-------+
| resource_provider_id | resource_class_id | used  |
+----------------------+-------------------+-------+
|                    5 |                 0 |     5 |
|                    5 |                 1 | 13312 |
|                    5 |                 2 |     1 |
+----------------------+-------------------+-------+

MariaDB [placement]> SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id,
    inv.total, inv.reserved, inv.allocation_ratio, inv.resource_provider_id,
    inv.resource_class_id
    FROM resource_providers AS rp
    JOIN inventories AS inv ON rp.id = inv.resource_provider_id
    WHERE rp.id IN (5) AND inv.resource_class_id IN (0, 1, 2);
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+
| id | uuid                                 | generation | resource_class_id | total   | reserved | allocation_ratio | resource_provider_id | resource_class_id |
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         38 |                 0 |     128 |        0 |                2 |                    5 |                 0 |
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         38 |                 1 | 1031723 |     2048 |                1 |                    5 |                 1 |
|  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         38 |                 2 |  901965 |        2 |                1 |                    5 |                 2 |
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+

The query behaves differently when there is more than one resource_provider_id; it shows correct values then.

Any tips on how to fix this situation? I'm not brave enough to tinker with this query myself.

Regards,
Przemyslaw Basa
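A note on the numbers above: 13318 is exactly 5 + 13312 + 1, i.e. the per-class sums from the derived table all collapsing into the single VCPU row while the other rows get NULL. Since the grouped subquery returns correct rows on its own, suspicion falls on how the server executes the LEFT JOIN onto that derived table rather than on the placement SQL itself. One hypothesis worth ruling out on MariaDB 10.6-era servers is the lateral-derived ("split materialized") optimization, which applies to exactly this join shape; a session-scoped, diagnostic-only way to test it (not a recommended permanent setting):

MariaDB [placement]> SET SESSION optimizer_switch = 'split_materialized=off';
MariaDB [placement]> -- now re-run the first SELECT above, unchanged

If the per-class used values come back correct with the switch off, the wrong rows are coming from the query optimizer, and this becomes a database bug report to the MariaDB packagers rather than a patch to placement.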
+----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 0 | 128 | 0 | 2 | 5 | 0 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 1 | 1031723 | 2048 | 1 | 5 | 1 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 2 | 901965 | 2 | 1 | 5 | 2 | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+ Query behaves differently when there is more than one resource_provider_id, it shows correct values then. Any tips how to fix this situation? I'm not brave enough to tinker with this query myself. Regards, Przemyslaw Basa From adam at adampankow.com Wed Jul 13 14:44:13 2022 From: adam at adampankow.com (Adam Pankow) Date: Wed, 13 Jul 2022 14:44:13 +0000 Subject: Changing Ubuntu Cloud Repo On Instance Message-ID: Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com" as their default repository. It seems that either this server does not efficiently route to an alternate mirror, or it itself is a mirror. This results in quite abysmal download speeds that I have seen. Would there be any downside to picking any other Ubuntu mirror, that is definitively more geographically close to me, but not explicitly labeled a Nova/Cloud mirror? i.e. would there be issues encountered, or features lost, by not using Ubuntu's designated Nova/Cloud repo? From vikarnatathe at gmail.com Wed Jul 13 16:30:26 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Wed, 13 Jul 2022 22:00:26 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Lokendra, Are you able to access all the tabs in the OpenStack dashboard without any error? If not, please retry generating the certificate. Also, share the openssl.cnf or server.cnf. On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour wrote: > Hi Team, > Any input on this case raised. > > Thanks, > Lokendra > > > On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Shephard/Swogat, >> I tried changing the setting as suggested and it looks like it has failed >> at step 4 with error: >> >> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >> tripleo_keystone_resources : Create identity public endpoint | undercloud | >> 0:24:47.736198 | 2.21s >> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >> TASK | Create identity internal endpoint >> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >> FATAL | Create identity internal endpoint | undercloud | error={"changed": >> false, "extra_data": {"data": null, "details": "The request you have made >> requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >> The request you have made requires authentication."} >> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >> >> >> Checking further the endpoint list: >> I see only one endpoint for keystone is gettin created. 
>> >> DeprecationWarning >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> | ID | Region | Service Name | Service >> Type | Enabled | Interface | URL | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity >> | True | admin | http://30.30.30.173:35357 | >> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity >> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity >> | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> >> >> it looks like something related to the SSL, we have also verified that >> the GUI login screen shows that Certificates are applied. >> exploring more in logs, meanwhile any suggestions or know observation >> would be of great help. >> thanks again for the support. >> >> Best Regards, >> Lokendra >> >> >> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan >> wrote: >> >>> I had faced a similar kind of issue, for ip based setup you need to >>> specify the domain name as the ip that you are going to use, this error is >>> showing up because the ssl is ip based but the fqdns seems to be >>> undercloud.com or overcloud.example.com. >>> I think for undercloud you can change the undercloud.conf. >>> >>> And will it work if we specify clouddomain parameter to the IP address >>> for overcloud? because it seems he has not specified the clouddomain >>> parameter and overcloud.example.com is the default domain for >>> overcloud.example.com. >>> >>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>> wrote: >>> >>>> What is the domain name you have specified in the undercloud.conf file? >>>> And what is the fqdn name used for the generation of the SSL cert? >>>> >>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Team, >>>>> We were trying to install overcloud with SSL enabled for which the UC >>>>> is installed, but OC install is getting failed at step 4: >>>>> >>>>> ERROR >>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": "MODULE >>>>> FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>> 600, in urlopen\n chunked=chunked)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>> in _make_request\n self._validate_conn(conn)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>> in _validate_conn\n conn.connect()\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>> server_hostname)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>> (most recent call last):\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>> send\n timeout=timeout\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>> in get_discovery\n disc = Discover(session, url, >>>>> authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>> in __init__\n authenticated=authenticated)\n File >>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>> authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>> request\n resp = send(**kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>> in _send_request\n raise >>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>> run_globals)\n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 185, in \n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 181, in main\n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>> line 407, in __call__\n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 141, in run\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>> 517, in search_services\n services = self.list_services()\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>> File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>> max_version='3.latest')\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>> **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>> in 
get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>> **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 271, in get_endpoint_data\n service_catalog = >>>>> self.get_access(session).service_catalog\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 206, in get_auth_ref\n self._plugin = >>>>> self._do_create_plugin(session)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:01.271914 | 2.47s >>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:01.273659 | 2.47s >>>>> >>>>> PLAY RECAP >>>>> ********************************************************************* >>>>> localhost : ok=0 changed=0 unreachable=0 >>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> >>>>> >>>>> in the deploy.sh: >>>>> >>>>> openstack overcloud deploy --templates \ >>>>> -r /home/stack/templates/roles_data.yaml \ >>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>> --baremetal-deployment >>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>> --network-config \ >>>>> -e /home/stack/templates/environment.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>> \ >>>>> -e >>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>> \ >>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>> >>>>> Addition lines as highlighted in yellow were passed with modifications: >>>>> tls-endpoints-public-ip.yaml: >>>>> Passed as is in the defaults. >>>>> enable-tls.yaml: >>>>> >>>>> # ******************************************************************* >>>>> # This file was created automatically by the sample environment >>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>> # Users are recommended to make changes to a copy of the file instead >>>>> # of the original, if any customizations are needed. >>>>> # ******************************************************************* >>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>> # description: | >>>>> # Use this environment to pass in certificates for SSL deployments. >>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>> # environments must also be used. >>>>> parameter_defaults: >>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>> # Type: boolean >>>>> HorizonSecureCookies: True >>>>> >>>>> # Specifies the default CA cert to use if TLS is used for services >>>>> in the public network. >>>>> # Type: string >>>>> PublicTLSCAFile: >>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>> >>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>> # Type: string >>>>> SSLRootCertificate: | >>>>> -----BEGIN CERTIFICATE----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END CERTIFICATE----- >>>>> >>>>> SSLCertificate: | >>>>> -----BEGIN CERTIFICATE----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END CERTIFICATE----- >>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>> # Type: string >>>>> SSLIntermediateCertificate: '' >>>>> >>>>> # The content of the SSL Key in PEM format. >>>>> # Type: string >>>>> SSLKey: | >>>>> -----BEGIN PRIVATE KEY----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END PRIVATE KEY----- >>>>> >>>>> # ****************************************************** >>>>> # Static parameters - these are values that must be >>>>> # included in the environment but should not be changed. >>>>> # ****************************************************** >>>>> # The filepath of the certificate as it will be stored in the >>>>> controller. 
>>>>> # Type: string
>>>>> DeployedSSLCertificatePath:
>>>>> /etc/pki/tls/private/overcloud_endpoint.pem
>>>>>
>>>>> # *********************
>>>>> # End static parameters
>>>>> # *********************
>>>>>
>>>>> inject-trust-anchor.yaml
>>>>>
>>>>> # *******************************************************************
>>>>> # This file was created automatically by the sample environment
>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>> # Users are recommended to make changes to a copy of the file instead
>>>>> # of the original, if any customizations are needed.
>>>>> # *******************************************************************
>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>>> # description: |
>>>>> #   When using an SSL certificate signed by a CA that is not in the
>>>>> #   default list of CAs, this environment allows adding a custom CA
>>>>> #   certificate to the overcloud nodes.
>>>>> parameter_defaults:
>>>>>   # The content of a CA's SSL certificate file in PEM format. This is
>>>>>   # evaluated on the client side.
>>>>>   # Mandatory. This parameter must be set by the user.
>>>>>   # Type: string
>>>>>   SSLRootCertificate: |
>>>>>     -----BEGIN CERTIFICATE-----
>>>>>     ----*** CERTICATELINES TRIMMED **
>>>>>     -----END CERTIFICATE-----
>>>>>
>>>>> resource_registry:
>>>>>   OS::TripleO::NodeTLSCAData:
>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>
>>>>>
>>>>> The procedure to create such files was followed using:
>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>>>
>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a
>>>>> *self-signed, IP-based certificate, without DNS*.
>>>>>
>>>>> Any idea around this error would be of great help.
>>>>>
>>>>> --
>>>>> skype: lokendrarathour
>>>>>
>>
>>
>
-- 

From lokendrarathour at gmail.com  Wed Jul 13 17:32:42 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Wed, 13 Jul 2022 23:02:42 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: References: Message-ID: 

Hi Vikarna,
Thanks for the inputs.
I am not able to access any tabs in the GUI.
[image: image.png] to re-state, we are failing at the time of deployment at step4 : PLAY [External deployment step 4] ********************************************** 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | TASK | External deployment step 4 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | OK | External deployment step 4 | undercloud -> localhost | result={ "changed": false, "msg": "Use --start-at-task 'External deployment step 4' to resume from this task" } [WARNING]: ('undercloud -> localhost', '525400ae-089b-870a-fab6-0000000000d7') missing from stats 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | INCLUDED | /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml | undercloud 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | TASK | Clean up legacy Cinder keystone catalog entries 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:24.204562 | 2.48s 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.122584 | 4.40s 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.124296 | 4.40s 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | TASK | Manage Keystone resources for OpenStack services 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | TIMING | Manage Keystone resources for OpenStack services | undercloud | 0:11:26.169842 | 0.03s 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | TASK | Gather variables for each operating system 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | TIMING | tripleo_keystone_resources : Gather variables for each operating system | undercloud | 0:11:26.253383 | 0.04s 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | TASK | Create Keystone Admin resources 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | TIMING | tripleo_keystone_resources : Create Keystone Admin resources | undercloud | 0:11:26.299608 | 0.03s 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | undercloud 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | TASK | Create default domain 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | OK | Create default domain | undercloud 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | TIMING | tripleo_keystone_resources : Create default domain | undercloud | 0:11:28.437360 | 2.09s 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | TASK | Create admin and service projects 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | TIMING | 
tripleo_keystone_resources : Create admin and service projects | undercloud | 0:11:28.483468 | 0.03s 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | undercloud 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | TASK | Async creation of Keystone project 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=admin 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.238078 | 0.72s 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=service 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.586587 | 1.06s 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.587916 | 1.07s 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | TASK | Check Keystone project status 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | WAITING | Check Keystone project status | undercloud | 30 retries left 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=admin 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.260666 | 5.66s 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=service 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.494729 | 5.89s 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.498771 | 5.89s 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | TASK | Create admin role 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | OK | Create admin role | undercloud 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | TIMING | tripleo_keystone_resources : Create admin role | undercloud | 0:11:37.725949 | 2.20s 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | TASK | Create _member_ role 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | SKIPPED | Create _member_ role | undercloud 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | 0:11:37.783369 | 0.04s 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | TASK | Create admin user 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | CHANGED | Create admin user | undercloud 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | TIMING | tripleo_keystone_resources : Create admin user | undercloud | 0:11:41.145472 | 3.34s 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | TASK | Assign admin role to admin project for admin user 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | OK | Assign admin role to admin project for 
admin user | undercloud 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | TIMING | tripleo_keystone_resources : Assign admin role to admin project for admin user | undercloud | 0:11:44.288848 | 3.13s 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | TASK | Assign _member_ role to admin project for admin user 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | SKIPPED | Assign _member_ role to admin project for admin user | undercloud 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | TIMING | tripleo_keystone_resources : Assign _member_ role to admin project for admin user | undercloud | 0:11:44.346479 | 0.04s 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | TASK | Create identity service 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | OK | Create identity service | undercloud 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | TIMING | tripleo_keystone_resources : Create identity service | undercloud | 0:11:46.022362 | 1.66s 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | TASK | Create identity public endpoint 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | OK | Create identity public endpoint | undercloud 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:11:48.233349 | 2.19s 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | TASK | Create identity internal endpoint 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."} 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | TIMING | tripleo_keystone_resources : Create identity internal endpoint | undercloud | 0:11:50.660654 | 2.41s PLAY RECAP ********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 overcloud-controller-0 : ok=437 changed=103 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-1 : ok=435 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-2 : ok=432 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 undercloud : ok=39 changed=7 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0 Also : (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf [req] default_bits = 2048 prompt = no default_md = sha256 distinguished_name = dn [dn] C=IN ST=UTTAR PRADESH L=NOIDA O=HSC OU=HSC emailAddress=demo at demo.com v3.ext: (undercloud) [stack at undercloud oc-cert]$ cat v3.ext authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] IP.1=fd00:fd00:fd00:9900::81 Using these files we create other certificates. Please check and let me know in case we need anything else. 
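For reference, the rough sequence we follow to produce the certificates
from these files looks like the below; ca.crt/ca.key stand in for our
local self-signed CA, and the file names are only illustrative:

# create the server key and CSR from the config above
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -config server.csr.cnf
# sign the CSR with the local CA, applying the v3.ext extensions
# (this is the step that should embed the IP SAN)
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 365 -sha256 -extfile v3.ext
# confirm the SAN really ended up in the certificate
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'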
On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe wrote: > Hi Lokendra, > > Are you able to access all the tabs in the OpenStack dashboard without any > error? If not, please retry generating the certificate. Also, share the > openssl.cnf or server.cnf. > > On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour > wrote: > >> Hi Team, >> Any input on this case raised. >> >> Thanks, >> Lokendra >> >> >> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Shephard/Swogat, >>> I tried changing the setting as suggested and it looks like it has >>> failed at step 4 with error: >>> >>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>> 0:24:47.736198 | 2.21s >>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>> TASK | Create identity internal endpoint >>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>> false, "extra_data": {"data": null, "details": "The request you have made >>> requires authentication.", "response": >>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>> The request you have made requires authentication."} >>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>> >>> >>> Checking further the endpoint list: >>> I see only one endpoint for keystone is gettin created. >>> >>> DeprecationWarning >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> | ID | Region | Service Name | Service >>> Type | Enabled | Interface | URL | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity >>> | True | admin | http://30.30.30.173:35357 | >>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity >>> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity >>> | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> >>> >>> it looks like something related to the SSL, we have also verified that >>> the GUI login screen shows that Certificates are applied. >>> exploring more in logs, meanwhile any suggestions or know observation >>> would be of great help. >>> thanks again for the support. >>> >>> Best Regards, >>> Lokendra >>> >>> >>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>> swogatpradhan22 at gmail.com> wrote: >>> >>>> I had faced a similar kind of issue, for ip based setup you need to >>>> specify the domain name as the ip that you are going to use, this error is >>>> showing up because the ssl is ip based but the fqdns seems to be >>>> undercloud.com or overcloud.example.com. >>>> I think for undercloud you can change the undercloud.conf. >>>> >>>> And will it work if we specify clouddomain parameter to the IP address >>>> for overcloud? 
because it seems he has not specified the clouddomain >>>> parameter and overcloud.example.com is the default domain for >>>> overcloud.example.com. >>>> >>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>>> wrote: >>>> >>>>> What is the domain name you have specified in the undercloud.conf file? >>>>> And what is the fqdn name used for the generation of the SSL cert? >>>>> >>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Team, >>>>>> We were trying to install overcloud with SSL enabled for which the UC >>>>>> is installed, but OC install is getting failed at step 4: >>>>>> >>>>>> ERROR >>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>> in _validate_conn\n conn.connect()\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>> server_hostname)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>> send\n timeout=timeout\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>> in get_discovery\n disc = Discover(session, url, >>>>>> authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>> in __init__\n authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>> authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>> request\n resp = send(**kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>> in _send_request\n raise >>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>> run_globals)\n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 185, in \n File >>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 181, in main\n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>> line 407, in __call__\n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 141, in run\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>> File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>> max_version='3.latest')\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>> **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>> **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>> self.get_access(session).service_catalog\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>> self._do_create_plugin(session)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:01.271914 | 2.47s >>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:01.273659 | 2.47s >>>>>> >>>>>> PLAY RECAP >>>>>> ********************************************************************* >>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> >>>>>> >>>>>> in the deploy.sh: >>>>>> >>>>>> openstack overcloud deploy --templates \ >>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>> --baremetal-deployment >>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>> --network-config \ >>>>>> -e /home/stack/templates/environment.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>> \ >>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>> >>>>>> Addition lines as highlighted in yellow were passed with >>>>>> modifications: >>>>>> tls-endpoints-public-ip.yaml: >>>>>> Passed as is in 
the defaults. >>>>>> enable-tls.yaml: >>>>>> >>>>>> # ******************************************************************* >>>>>> # This file was created automatically by the sample environment >>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>> # of the original, if any customizations are needed. >>>>>> # ******************************************************************* >>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>> # description: | >>>>>> # Use this environment to pass in certificates for SSL deployments. >>>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>>> # environments must also be used. >>>>>> parameter_defaults: >>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>> # Type: boolean >>>>>> HorizonSecureCookies: True >>>>>> >>>>>> # Specifies the default CA cert to use if TLS is used for services >>>>>> in the public network. >>>>>> # Type: string >>>>>> PublicTLSCAFile: >>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>> >>>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>>> # Type: string >>>>>> SSLRootCertificate: | >>>>>> -----BEGIN CERTIFICATE----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END CERTIFICATE----- >>>>>> >>>>>> SSLCertificate: | >>>>>> -----BEGIN CERTIFICATE----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END CERTIFICATE----- >>>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>>> # Type: string >>>>>> SSLIntermediateCertificate: '' >>>>>> >>>>>> # The content of the SSL Key in PEM format. >>>>>> # Type: string >>>>>> SSLKey: | >>>>>> -----BEGIN PRIVATE KEY----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END PRIVATE KEY----- >>>>>> >>>>>> # ****************************************************** >>>>>> # Static parameters - these are values that must be >>>>>> # included in the environment but should not be changed. >>>>>> # ****************************************************** >>>>>> # The filepath of the certificate as it will be stored in the >>>>>> controller. >>>>>> # Type: string >>>>>> DeployedSSLCertificatePath: >>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>> >>>>>> # ********************* >>>>>> # End static parameters >>>>>> # ********************* >>>>>> >>>>>> inject-trust-anchor.yaml >>>>>> >>>>>> # ******************************************************************* >>>>>> # This file was created automatically by the sample environment >>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>> # of the original, if any customizations are needed. >>>>>> # ******************************************************************* >>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>> # description: | >>>>>> # When using an SSL certificate signed by a CA that is not in the >>>>>> default >>>>>> # list of CAs, this environment allows adding a custom CA >>>>>> certificate to >>>>>> # the overcloud nodes. >>>>>> parameter_defaults: >>>>>> # The content of a CA's SSL certificate file in PEM format. This is >>>>>> evaluated on the client side. >>>>>> # Mandatory. This parameter must be set by the user. 
>>>>>> # Type: string
>>>>>> SSLRootCertificate: |
>>>>>> -----BEGIN CERTIFICATE-----
>>>>>> ----*** CERTICATELINES TRIMMED **
>>>>>> -----END CERTIFICATE-----
>>>>>>
>>>>>> resource_registry:
>>>>>>   OS::TripleO::NodeTLSCAData:
>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>>
>>>>>>
>>>>>> The procedure to create such files was followed using:
>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>>>>
>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a
>>>>>> *self-signed, IP-based certificate, without DNS*.
>>>>>>
>>>>>> Any idea around this error would be of great help.
>>>>>>
>>>>>> --
>>>>>> skype: lokendrarathour
>>>>>>
>>>
>>>
>>
>
-- 
~ Lokendra
skype: lokendrarathour

From cboylan at sapwetik.org  Wed Jul 13 19:41:37 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 13 Jul 2022 12:41:37 -0700
Subject: Changing Ubuntu Cloud Repo On Instance
In-Reply-To: 
References: 
Message-ID: <493a1c2c-0aff-4704-ba74-7fcfdc2d900f@www.fastmail.com>

On Wed, Jul 13, 2022, at 7:44 AM, Adam Pankow wrote:
> Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com"
> as their default repository. It seems that either this server does not
> efficiently route to an alternate mirror, or it itself is a mirror.
> This results in quite abysmal download speeds that I have seen. Would
> there be any downside to picking any other Ubuntu mirror, that is
> definitively more geographically close to me, but not explicitly
> labeled a Nova/Cloud mirror? i.e. would there be issues encountered, or
> features lost, by not using Ubuntu's designated Nova/Cloud repo?

The setting of the repo location is likely to be baked into whatever
image you grabbed. Running some DNS queries I suspect that
*.clouds.archive.ubuntu.com is used to track requests from various
cloud providers to provide insight into things like popularity of
Ubuntu on different clouds. I wouldn't expect there to be any problems
using a different mirror as the packages should all be kept roughly in
sync. The biggest thing to keep in mind is probably reliability and
adjusting if necessary.

From gael.therond at bitswalk.com  Wed Jul 13 20:06:57 2022
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Wed, 13 Jul 2022 22:06:57 +0200
Subject: [IRONIC] - Various questions around network features.
In-Reply-To: References: Message-ID: 

Hi Julia! Thanks a lot for those explanations :-)

Most of it confirms my understanding; I now have a clearer picture that
will let me select our test users for the service.

Regarding the Aruba switches, those are pretty cool, even if, as you
pointed out, this feature can actually lead you to some weird if not
dangerous situations x)

OK, noted about the Horizon issue. It can be a little tricky for our end
users to understand, to be honest, as they will expect the IP selected by
neutron and displayed on the dashboard to be the one used by the node,
even on a fully flat network such as the provisioning network; for now we
will deal with it by explaining it to them.
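What helps a little is showing them how to cross-check the address
neutron allocated against the VIF that is actually attached to the node;
a rough example (the names and IDs here are only illustrative):

openstack server show my-baremetal-instance -c addresses
openstack baremetal node vif list <node-uuid>
openstack port show <vif-uuid> -c fixed_ips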
Regarding my point 2: yeah, I knew the purpose of direct deploy, I just
spelled it out, I don't know why; my point was rather this. At first,
when I configured our ironic deployment, I had that weird issue where, if
I don't set the pxe_filter option to noop instead of dnsmasq, deploying
anything fails, as the conductor doesn't correctly erase the "ignore "
part of the string in the dhcp_host_filter file of dnsmasq. If I set this
filter to noop, then obviously I don't need neutron to provide the
ironic-provision-network anymore, as anyone plugged into ports with my
VLAN101 set as the native VLAN will be able to get an IP from the PXE
dnsmasq.

I'm still having a hard time mapping out why ironic needs a dedicated PXE
dnsmasq for introspection but can then use the neutron dnsmasq DHCP once
you want to provision a host. Is that because neutron (kind of) lacks
support for DHCP options on its managed subnets?

All in all, the multi-tenancy networking requirements are much clearer to
me now thanks to you!

On Tue, Jul 12, 2022 at 00:13, Julia Kreger wrote:

> Greetings! Hopefully these answers help!
>
> On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND
> wrote:
> >
> > Hi everyone, I'm currently working back again with Ironic and it's
> > amazing!
> >
> > However, during our demo session to our users a few questions arose.
> >
> > We're currently deploying nodes using a private VLAN that can't be
> > reached from outside of the OpenStack network fabric (VLAN 101 -
> > 192.168.101.0/24), and everything is fine with this provisioning
> > network, as our ToR switches all know about it and the other control
> > plane VLANs, such as the internal APIs VLAN, which allows the IPA
> > ramdisk to correctly and seamlessly contact the internal IRONIC APIs.
>
> Nice, I've had my lab configured like this in the past.
>
> > (When you declare a port as a trunk with all VLANs allowed on an Aruba
> > switch, it seems it automatically analyses the CIDR your host tries to
> > reach from your VLAN and routes everything to the corresponding VLAN
> > that matches the destination IP.)
>
> Ugh, that... could be fun :\
>
> > So now, I still have a few tiny issues:
> >
> > 1°/- When I spawn a nova instance on an ironic host that is set to use
> > a flat network (from Horizon, as a user), why does the nova wizard
> > still ask for a neutron network if it's not set on the provisioned
> > host by the IPA ramdisk right after the whole disk image copy? Is that
> > some missing development in Horizon, or did I miss something?
>
> Horizon just is not aware... and you can actually have entirely
> different DHCP pools on the same flat network, so that neutron network
> is intended for the instance's addressing to utilize.
>
> Ironic does just ask for an allocation from a provisioning network,
> which can and *should* be a different network than the tenant network.
>
> > 2°/- In a flat network layout deployment using the direct deploy
> > scenario for images, am I still supposed to create an ironic
> > provisioning network in neutron?
> >
> > From my understanding (and actually my tests) we don't, as any host
> > booting on the provisioning VLAN will catch an IP and initiate the
> > bootp sequence, since the dnsmasq is just set to do that and provide
> > the IPA ramdisk, but it's a bit confusing, as much documentation
> > explicitly requires this network to exist in neutron.
>
> Yes. Direct is shorthand for "Copy it over the network and write it
> directly to disk". It still needs an IP address on the provisioning
> network (think, subnet instead of distinct L2 broadcast domain).
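[Side note for the archives: if I understand Julia correctly, declaring
that provisioning network is roughly the following; the physnet name is
an assumption on my side and the range matches our VLAN 101 subnet:

openstack network create --provider-network-type flat \
  --provider-physical-network physnet1 provisioning
openstack subnet create --network provisioning \
  --subnet-range 192.168.101.0/24 \
  --allocation-pool start=192.168.101.100,end=192.168.101.150 \
  provisioning-subnet

with the resulting network UUID then referenced as provisioning_network
(and usually cleaning_network) in the [neutron] section of ironic.conf.]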
From gael.therond at bitswalk.com  Wed Jul 13 20:06:57 2022
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Wed, 13 Jul 2022 22:06:57 +0200
Subject: [IRONIC] - Various questions around network features.
In-Reply-To:
References:
Message-ID:

Hi Julia! Thanks a lot for those explanations :-) Most of it confirms my understanding, and I now have a clearer point of view that will let me select our test users for the service.

Regarding Aruba switches, those are pretty cool, even if, as you pointed out, this feature can actually lead you to some weird if not dangerous situations x)

OK, noted about the Horizon issue. It can be a little bit tricky for our end users to understand, tbh, as they will expect the IP selected by Neutron and displayed on the dashboard to be the one used by the node, even on a full flat network such as the provisioning network; but for now we will deal with it by explaining it to them.

Regarding my point 2: yeah, I knew the purpose of direct deploy, I just spelled it out, I don't know why. My point was rather: at first, when I configured our Ironic deployment, I had that weird issue where, unless I set the pxe_filter option to noop instead of dnsmasq, deploying anything fails, as the conductor doesn't correctly erase the "ignore" part of the string in the dhcp_host_filter file of dnsmasq. If I set this filter to noop, then obviously I don't need Neutron to provide the ironic-provision-network anymore, as anyone plugged into ports with my VLAN101 set as the native VLAN will be able to get an IP from the PXE dnsmasq. I'm still having a hard time mapping out why Ironic needs a dedicated PXE dnsmasq for introspection and can then use the Neutron dnsmasq DHCP once you want to provision a host. Is that because Neutron (kind of) lacks DHCP options support on its managed subnets?

All in all, it's much clearer to me now about the multi-tenancy networking requirements, thanks to you!
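For reference, the pxe_filter behaviour described above is driven by ironic-inspector's configuration; a minimal sketch follows, with the caveat that the file path, tooling, and service name are distribution-dependent assumptions.

# "dnsmasq" keeps per-MAC allow/deny entries for the inspection dnsmasq
# in sync (via its dhcp_hostsdir); "noop" leaves it answering every PXE
# request on the inspection VLAN, which is the behaviour described above.
crudini --set /etc/ironic-inspector/inspector.conf pxe_filter driver noop
sudo systemctl restart ironic-inspector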
On Tue, Jul 12, 2022 at 00:13, Julia Kreger wrote:

> Greetings! Hopefully these answers help!
>
> On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND wrote:
> >
> > Hi everyone, I'm currently working back again with Ironic and it's amazing!
> >
> > However, during our demo session to our users, a few questions arose.
> >
> > We're currently deploying nodes using a private VLAN that can't be reached from outside of the OpenStack network fabric (VLAN 101 - 192.168.101.0/24) and everything is fine with this provisioning network, as our ToR switches all know about it and the other control plane VLANs, such as the internal APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact the internal Ironic APIs.
>
> Nice, I've had my lab configured like this in the past.
>
> > (When you declare a port as a trunk allowing all VLANs on an Aruba switch, it seems it automatically analyses the CIDR your host tries to reach from your VLAN and routes everything to the corresponding VLAN that matches the destination IP.)
>
> Ugh, that... could be fun :\
>
> > So now, I still have a few tiny issues:
> >
> > 1/ - When I spawn a Nova instance on an Ironic host that is set to use a flat network (from Horizon as a user), why does the Nova wizard still ask for a Neutron network if it's not set on the provisioned host by the IPA ramdisk right after the whole disk image copy? Is that some missing development on Horizon or did I miss something?
>
> Horizon just is not aware... and you can actually have entirely different DHCP pools on the same flat network, so that Neutron network is intended for the instance's addressing to utilize.
>
> Ironic does just ask for an allocation from a provisioning network, which can and *should* be a different network than the tenant network.
>
> > 2/ - In a flat network layout deployment using the direct deploy scenario for images, am I still supposed to create an Ironic provisioning network in Neutron?
> >
> > From my understanding (and actually my tests) we don't, as any host booting on the provisioning VLAN will catch an IP and initiate the bootp sequence, since the dnsmasq is just set to do that and provide the IPA ramdisk, but it's a bit confusing, as much of the documentation explicitly requires this network to exist in Neutron.
>
> Yes. Direct is shorthand for "copy it over the network and write it directly to disk". It still needs an IP address on the provisioning network (think subnet instead of distinct L2 broadcast domain).
>
> When you ask Nova for an instance, it sends over what the machine should use as a "VIF" (Neutron port); however, that is never actually bound configuration-wise into Neutron until after the deployment completes.
>
> It *could* be that your Neutron config is such that it just works anyway, but I suspect upstream contributors would be a bit confused if you reported an issue and had no provisioning network defined.
>
> > 3/ - My whole OpenStack network setup is using Open vSwitch and VXLAN tunnels on top of a spine/leaf architecture using Aruba CX8360 switches (for both spine and leaves); am I required to use either the networking-generic-switch driver or a vendor Neutron driver? If that's right, how will this driver be able to instruct the switch to assign the host port the correct Open vSwitch VLAN ID and register the correct VXLAN to Open vSwitch from this port? I mean, OK, Neutron knows the VXLAN and Open vSwitch the tunnel VLAN ID/interface, but what is the glue for all of that?
>
> If you're happy with flat networks, no.
>
> If you want tenant isolation networking-wise, yes.
>
> NGS and baremetal-port-aware/enabled Neutron ML2 drivers take the port-level local link configuration (well, Ironic includes the port information - local link connection, physical network, and some other details - with the port binding request to Neutron).
>
> Those ML2 drivers then either request that the switch configuration be updated, or take locally configured credentials to modify port configuration in Neutron, and log into the switch to toggle the access port's configuration which the baremetal node is attached to.
>
> Generally, they are not VXLAN-network-aware, and at least with networking-generic-switch, VLAN ID numbers are expected and allocated via Neutron.
>
> Sort of like the software is logging into the switch and running something along the lines of "conf t;int gi0/21;switchport mode access;switchport access vlan 391 ; wri mem"
>
> > 4/ - I've successfully used OpenStack-cloud-oriented CentOS and Debian images or snapshots of VMs to provision my hosts - this is an awesome feature - but I'm wondering if there is a way to let those hosts' cloud-init instances request the Neutron metadata endpoint?
>
> Generally yes, you *can* use network-attached metadata with Neutron, *as long as* your switches know to direct the traffic for the metadata IP to the Neutron metadata service(s).
>
> We know of operators who have done it without issues, but often that additional switch-configured route is not always the best thing. Generally we recommend enabling and using configuration drives, so the metadata is able to be picked up by cloud-init.
>
> > I was a bit surprised about the Ironic networking part, as I was expecting the IPA ramdisk to at least be able to set up the host OS with the appropriate network configuration file for whole disk images that do not use encryption, by injecting that information from the Neutron API into the host disk while mounted (right after the image dd).
>
> IPA has no knowledge of how to modify the host OS in this regard. Modifying the host OS has generally been something the Ironic community has avoided, since it is not exactly cloudy to have to do so. Generally most clouds are running with DHCP, so as long as that is enabled and configured, things should generally "just work".
>
> Hopefully that provides a little more context. Nothing prevents you from writing your own hardware manager that does exactly this, for what it is worth.
>
> > All in all, I really like the Ironic approach to the baremetal provisioning process, and I'm pretty sure that I'm just missing a bit of understanding of the networking part, but it's really the most confusing part of it to me, as I feel like there is a missing link in between Neutron and the host HW or the switches.
>
> Thanks! It is definitely one of the more complex parts, given there are many moving parts, and everyone wants (or needs) to have their networking configured just a little differently.
>
> Hopefully I've kind of put some of the details out there. If you need more information, please feel free to reach out, and also please feel free to ask questions in #openstack-ironic on irc.oftc.net.
>
> > Thanks a lot to anyone who will take the time to explain this to me :-)
>
> :)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
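A small sketch of what the ML2 wiring described in the thread above can look like in practice: networking-generic-switch reads per-switch credentials from an ML2 configuration file loaded by neutron-server. Everything below (file path, switch name, device type, address, credentials) is a placeholder; the device_type shown is the generic Cisco IOS example from the NGS documentation, and Aruba support depends on the netmiko drivers shipped with your NGS version.

# Register a ToR switch with networking-generic-switch so Neutron can
# log in and toggle access VLANs on baremetal ports.
cat >> /etc/neutron/plugins/ml2/ml2_conf_genericswitch.ini <<'EOF'
[genericswitch:leaf1]
device_type = netmiko_cisco_ios
ip = 192.0.2.10
username = admin
password = secret
EOF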
From fungi at yuggoth.org  Wed Jul 13 20:35:03 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Wed, 13 Jul 2022 20:35:03 +0000
Subject: [infra] Tarballs are not accessible anymore
In-Reply-To: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch>
References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch>
Message-ID: <20220713203503.4dzsfmykc4inbwht@yuggoth.org>

On 2022-07-13 17:05:26 +0200 (+0200), Jose Castro Leon wrote:
> I don't know if someone noticed already but the tarballs are not accessible anymore, is that expected? [...]

Just to wrap this up (hopefully), the primary server was brought back into service by the provider at 18:08 UTC and we don't see evidence to indicate any of our volumes were in a degraded (non-redundant) state after that time. Writes should be back also as of roughly 19:20 UTC, so as far as I'm aware we're in the clear for the past hour or so. If you notice anything out of the ordinary (again) please do let us know! We always appreciate the info, since we don't really operate like a traditional service provider (our infrastructure collaborators don't "carry pagers" as it were).
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From rlandy at redhat.com  Wed Jul 13 20:39:34 2022
From: rlandy at redhat.com (Ronelle Landy)
Date: Wed, 13 Jul 2022 16:39:34 -0400
Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs
Message-ID:

Hello All,

We have a new gate blocker impacting tripleo-ci-centos-9-scenario010-standalone and ovn-provider master jobs. The deployment fails and the error in the nova_libvirt_init_secret/stdout.log log shows:

Error: /etc/ceph/ceph.conf contained an empty fsid definition
Check your ceph configuration

Details of the failure are in https://bugs.launchpad.net/tripleo/+bug/1981634.

Please hold rechecks until we can touch base with the Ceph team to discuss.

Thank you,

TripleO CI
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cboylan at sapwetik.org  Wed Jul 13 21:04:37 2022
From: cboylan at sapwetik.org (Clark Boylan)
Date: Wed, 13 Jul 2022 14:04:37 -0700
Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs
In-Reply-To:
References:
Message-ID: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com>

On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote:
> Hello All,
>
> We have a new gate blocker impacting tripleo-ci-centos-9-scenario010-standalone and ovn-provider master jobs. The deployment fails and the error in the nova_libvirt_init_secret/stdout.log log shows:
>
> Error: /etc/ceph/ceph.conf contained an empty fsid definition
> Check your ceph configuration

This message appears to originate from: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27

The fsid is in fact unset/empty in the file: https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf

Seems that this template, https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, should set the value based on the tripleo_ceph_client_fsid variable value.

> Details of the failure are in https://bugs.launchpad.net/tripleo/+bug/1981634.
>
> Please hold rechecks until we can touch base with the Ceph team to discuss.
>
> Thank you,
>
> TripleO CI
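For anyone wanting to reproduce the failing check by hand, here is a sketch; the parsing below is an approximation of what the linked nova_libvirt_init_secret.sh does, not a copy of it.

# Extract the fsid value from ceph.conf and flag an empty definition,
# which is the condition that aborts the deployment.
fsid=$(awk -F '=' '/^fsid/ {gsub(/ /, "", $2); print $2}' /etc/ceph/ceph.conf)
if [ -z "$fsid" ]; then
    echo "Error: /etc/ceph/ceph.conf contained an empty fsid definition"
fi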
From gfidente at redhat.com  Wed Jul 13 21:31:41 2022
From: gfidente at redhat.com (Giulio Fidente)
Date: Wed, 13 Jul 2022 23:31:41 +0200
Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs
In-Reply-To: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com>
References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com>
Message-ID: <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com>

On 7/13/22 23:04, Clark Boylan wrote:
> On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote:
>> Hello All,
>>
>> We have a new gate blocker impacting tripleo-ci-centos-9-scenario010-standalone and ovn-provider master jobs. The deployment fails and the error in the nova_libvirt_init_secret/stdout.log log shows:
>>
>> Error: /etc/ceph/ceph.conf contained an empty fsid definition
>> Check your ceph configuration
>
> This message appears to originate from: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27
>
> The fsid is in fact unset/empty in the file: https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf
>
> Seems that this template, https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, should set the value based on the tripleo_ceph_client_fsid variable value.

hi Clark, that's my understanding as well. That variable appears to be set by the client role [1] as expected; I am unclear why it doesn't show up in the template.

Do we know if this is happening only for scenario010 or for the other scenarios deploying ceph? (001 and 004)

1. https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml
--
Giulio Fidente
GPG KEY: 08D733BA

From melwittt at gmail.com  Wed Jul 13 23:16:58 2022
From: melwittt at gmail.com (melanie witt)
Date: Wed, 13 Jul 2022 16:16:58 -0700
Subject: [nova][ops] seeking input about local/ephemeral disk encryption feature naming
Message-ID:

Hi everyone,

A potential issue regarding naming has come up during review of the ephemeral storage encryption feature [1][2] patch series [3] and we're looking for input before moving forward with any naming/terminology changes across the specs and the entire patch series.

The concern that has been raised is around use of the term "ephemeral" for the name of this feature, including traits, extra specs, and image properties [4].

For context, the objective of this feature is to provide users with the ability to specify that all local disks for the instance be encrypted. This includes the root disk and any other local disks.

The initial concern is around use of the word "ephemeral" for the root disk. My general interpretation of the word "ephemeral" for storage in nova has been that it means attached storage that only persists for the lifetime of the instance and is destroyed if and when the instance is destroyed. This is in contrast to attached cinder volumes which can persist after instance deletion. But should "ephemeral" ever be used to describe a root disk? Is it incorrect and/or ambiguous to refer to it as such? This is part of what is being discussed in [4].

During discussion, I also realized there is a separate gap in the above interpretation of "ephemeral" in nova. When cinder volumes are attached to an instance, their persistence after the instance is deleted depends on whether the 'delete_on_termination' attribute is set to true in the request payload when the instance is created [5] or when attaching a volume to the instance [6] or updating a volume attached to the instance [7].

This means that in the currently proposed patches, if a user specifies hw:ephemeral_encryption in the extra_specs, for example, and they also have a volume with delete_on_termination=True attached, only the root disk will be encrypted via the extra spec -- the volume would not be encrypted. Encryption of the volume has to be requested in cinder. Could this mislead a user into thinking both the root disk and cinder volume are encrypted when only the root disk is?

Because of the above issues, we are considering whether we should change the terminology used in this feature at this stage. Some ideas include "local encryption", "local disk encryption", "disk encryption". IMHO "disk_encryption" is ambiguous in its own way because an attached cinder volume also has a disk. Changing the naming will be a non-trivial amount of work, so we wanted to get additional input before going ahead with such a change.

Another thing noted in a comment on another patch in the series [8] is that the os-traits for this feature have already been merged [9]. If we decide to change the naming, should we go ahead and use these traits as-is and have them not match the naming in nova or should we deprecate them and add new traits that match the new name and use those?

I hope this makes sense and your input would be much appreciated.

Cheers,
-melwitt

[1] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-storage-encryption.html
[2] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-encryption-libvirt.html
[3] https://review.opendev.org/q/topic:specs%252Fyoga%252Fapproved%252Fephemeral-encryption-libvirt
[4] https://review.opendev.org/c/openstack/nova/+/764486/10/nova/api/validation/extra_specs/hw.py#516
[5] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
[6] https://docs.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail
[7] https://docs.openstack.org/api-ref/compute/?expanded=update-a-volume-attachment-detail
[8] https://review.opendev.org/c/openstack/nova/+/760456/10/nova/scheduler/request_filter.py#425
[9] https://github.com/openstack/os-traits/blob/f64d50e4dd2f21558fb73dd4b59cd1d4b121b707/os_traits/compute/ephemeral.py
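To make the user-facing side of the proposal concrete: under the naming currently in review, an operator would expose the feature roughly as below. The flavor name is a placeholder, and the extra spec comes from the patches under discussion, so it would change with any rename.

# Request encrypted local disks for instances using this flavor via the
# proposed hw:ephemeral_encryption extra spec.
openstack flavor set m1.encrypted --property hw:ephemeral_encryption=true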
From gmann at ghanshyammann.com  Thu Jul 14 04:17:15 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Wed, 13 Jul 2022 23:17:15 -0500
Subject: [all][tc] Technical Committee next weekly meeting on 14 July 2022 at 1500 UTC
In-Reply-To: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com>
References: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com>
Message-ID: <181faecda9d.df712f3146717.261372769673175857@ghanshyammann.com>

Hello Everyone,

Below is the agenda for today's TC IRC meeting, scheduled at 1500 UTC.

https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

== Agenda for today's TC meeting ==

* Roll call
* Follow up on past action items
* Gate health check
** Bare 'recheck' state
*** https://etherpad.opendev.org/p/recheck-weekly-summary
* RBAC feedback in ops meetup
** https://etherpad.opendev.org/p/rbac-zed-ptg#L171
** https://review.opendev.org/c/openstack/governance/+/847418
* Open Reviews
** https://review.opendev.org/q/projects:openstack/governance+is:open

-gmann

---- On Mon, 11 Jul 2022 13:36:28 -0500 Ghanshyam Mann wrote ----
> Hello Everyone,
>
> The Technical Committee's next weekly meeting is scheduled for 14 July 2022, at 1500 UTC.
>
> If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, 13 July at 2100 UTC.
>
> https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
>
> -gmann

From ykarel at redhat.com  Thu Jul 14 05:53:16 2022
From: ykarel at redhat.com (Yatin Karel)
Date: Thu, 14 Jul 2022 11:23:16 +0530
Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs
In-Reply-To: <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com>
References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com>
Message-ID:

On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente wrote:
> On 7/13/22 23:04, Clark Boylan wrote:
> > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote:
> >> Hello All,
> >>
> >> We have a new gate blocker impacting
> >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master
> >> jobs.
The deployment fails and the error in the > >> nova_libvirt_init_secret/stdout.log log shows: > >> > >> Error: /etc/ceph/ceph.conf contained an empty fsid definition > >> Check your ceph configuration > > > > This message appears to originate from: > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 > > > > The fsid is in fact unset/empty in the file: > https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf > > > > Seems that this template, > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, > should set the value based on the tripleo_ceph_client_fsid variable value. > > hi Clark, that's my understanding as well > > that variable appears to be set by the client role [1] as expected; I am > unclear why it doesn't show up in the template > > do we know if this is happening only for scenario010 or for the other > scenarios deploying ceph? (001 and 004) > > Scenario001 and 004 are green[2]. [2] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 1. > > https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml > > -- > Giulio Fidente > GPG KEY: 08D733BA > > > Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykarel at redhat.com Thu Jul 14 06:22:23 2022 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 14 Jul 2022 11:52:23 +0530 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> Message-ID: On Thu, Jul 14, 2022 at 11:23 AM Yatin Karel wrote: > > > On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente > wrote: > >> On 7/13/22 23:04, Clark Boylan wrote: >> > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: >> >> Hello All, >> >> >> >> We have a new gate blocker impacting >> >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master >> >> jobs. The deployment fails and the error in the >> >> nova_libvirt_init_secret/stdout.log log shows: >> >> >> >> Error: /etc/ceph/ceph.conf contained an empty fsid definition >> >> Check your ceph configuration >> > >> > This message appears to originate from: >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 >> > >> > The fsid is in fact unset/empty in the file: >> https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf >> > >> > Seems that this template, >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, >> should set the value based on the tripleo_ceph_client_fsid variable value. 
>> >> hi Clark, that's my understanding as well >> >> that variable appears to be set by the client role [1] as expected; I am >> unclear why it doesn't show up in the template >> >> do we know if this is happening only for scenario010 or for the other >> scenarios deploying ceph? (001 and 004) >> >> Scenario001 and 004 are green[2]. > > [2] > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 > > Sorry for mislooking, scenario004 is also impacted, even it also failed in the test patch of the original patch https://review.opendev.org/c/openstack/tripleo-heat-templates/+/849580/. For now it's being reverted to unblock gate https://review.opendev.org/c/openstack/tripleo-ansible/+/849732 > 1. >> >> https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml >> >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> >> Regards > Yatin Karel > Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Jul 14 06:38:00 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 14 Jul 2022 12:08:00 +0530 Subject: [cinder] Extending Driver Merge Deadline Message-ID: Hello Argonauts, As discussed in yesterday's Cinder meeting[1], given the number of drivers proposed for Zed cycle (8 new Drivers[2]) and the limited review bandwidth (cores are working on development tasks), we've decided to extend the driver merge deadline from R-12 to R-10 i.e. from 15th July to 29th July. R-10 is also the deadline for Manila driver merge deadline[3]. [1] https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-07-13-14.00.log.html#l-129 [2] https://etherpad.opendev.org/p/cinder-zed-new-drivers [3] https://releases.openstack.org/zed/schedule.html#z-manila-new-driver-deadline Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Thu Jul 14 09:02:37 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jul 2022 10:02:37 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > On 7/12/22 14:14, Stephen Finucane wrote: > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > Hi Stephen, > > > > > > I hope you don't mind I ping and up this thread. > > > > > > Thanks a lot for this work. Any more progress here? > > > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > > folks to remove their own cap now [2] so that we can raise the version in upper > > constraint. > > > > Stephen > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > Hi ! 
> > I see these 2 are now merged, so it's job (well) done, right? I'd assume so, yes. We just need to wait for the machinery to do its job and bump the upper constraint now. Stephen > > Cheers, > > Thomas Goirand (zigo) > From rlandy at redhat.com Thu Jul 14 10:41:43 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Thu, 14 Jul 2022 06:41:43 -0400 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> Message-ID: On Thu, Jul 14, 2022 at 2:28 AM Yatin Karel wrote: > On Thu, Jul 14, 2022 at 11:23 AM Yatin Karel wrote: > >> >> >> On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente >> wrote: >> >>> On 7/13/22 23:04, Clark Boylan wrote: >>> > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: >>> >> Hello All, >>> >> >>> >> We have a new gate blocker impacting >>> >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master >>> >> jobs. The deployment fails and the error in the >>> >> nova_libvirt_init_secret/stdout.log log shows: >>> >> >>> >> Error: /etc/ceph/ceph.conf contained an empty fsid definition >>> >> Check your ceph configuration >>> > >>> > This message appears to originate from: >>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 >>> > >>> > The fsid is in fact unset/empty in the file: >>> https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf >>> > >>> > Seems that this template, >>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, >>> should set the value based on the tripleo_ceph_client_fsid variable value. >>> >>> hi Clark, that's my understanding as well >>> >>> that variable appears to be set by the client role [1] as expected; I am >>> unclear why it doesn't show up in the template >>> >>> do we know if this is happening only for scenario010 or for the other >>> scenarios deploying ceph? (001 and 004) >>> >>> Scenario001 and 004 are green[2]. >> >> [2] >> https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 >> >> Sorry for mislooking, scenario004 is also impacted, even it also failed > in the test patch of the original patch > https://review.opendev.org/c/openstack/tripleo-heat-templates/+/849580/. > For now it's being reverted to unblock gate > https://review.opendev.org/c/openstack/tripleo-ansible/+/849732 > The revert is merged - and the gate should be cleared now. Thank you > 1. >>> >>> https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml >>> >>> -- >>> Giulio Fidente >>> GPG KEY: 08D733BA >>> >>> >>> Regards >> Yatin Karel >> > > Regards > Yatin Karel > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smooney at redhat.com Thu Jul 14 12:51:30 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jul 2022 13:51:30 +0100 Subject: Changing Ubuntu Cloud Repo On Instance In-Reply-To: References: Message-ID: <07b119fe6c2478512a5c6e7248b791bd695ba8b2.camel@redhat.com> On Wed, 2022-07-13 at 14:44 +0000, Adam Pankow wrote: > Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com" as their default repository. It seems that either this server does not efficiently route to an alternate mirror, or it itself is a mirror. This results in quite abysmal download speeds that I have seen. Would there be any downside to picking any other Ubuntu mirror, that is definitively more geographically close to me, but not explicitly labeled a Nova/Cloud mirror? i.e. would there be issues encountered, or features lost, by not using Ubuntu's designated Nova/Cloud repo? > not that im aware off. i think canonical are probly using this fqdn to track what installs are cloud based and what are native i highly droubt changing it would have a negitive effectr on your users expericne and using a geogravically colocated mirror would likely improve it. this is not to my knolage anything that nova or openstack ever had any discussions with teh ubuntu comunity about and it is not done at our request so i would just test it and deploy what works best for you and your users. if you have storage capasity and want to conserve external bandwith you might even consider hosting your own caching proxy/mirror to use as a default in the openstack cloud itself. that is quite common for ci enviornments. From zigo at debian.org Thu Jul 14 13:38:34 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 14 Jul 2022 15:38:34 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. Message-ID: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> Hi everyone! I know the usual answer: "we don't do that", or "this is unsupported in current version", however, as always, that's the way things are: OpenStack isn't alone living in Debian, and Debian Unstable now has Python 3.11, which isn't going to change simply because the OpenStack community decides that "it's not supported". So it'd be nice if we could support it. I'm quite sure that, as usual, the upload of a new Python interpreter version will break my world. I'll try to summit patches when I can, but I also expect help if possible. The challenge is: Debian Bookworm will be shipping Zed and Python 3.11, most likely... Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Jul 14 13:51:39 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 14 Jul 2022 15:51:39 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> On 7/14/22 11:02, Stephen Finucane wrote: > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: >> On 7/12/22 14:14, Stephen Finucane wrote: >>> On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: >>>> Hi Stephen, >>>> >>>> I hope you don't mind I ping and up this thread. 
>>>> >>>> Thanks a lot for this work. Any more progress here? >>> >>> We've uncapped warlock in openstack/requirements [1]. We just need the glance >>> folks to remove their own cap now [2] so that we can raise the version in upper >>> constraint. >>> >>> Stephen >>> >>> [1] https://review.opendev.org/c/openstack/requirements/+/849284 >>> [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 >> >> Hi ! >> >> I see these 2 are now merged, so it's job (well) done, right? > > I'd assume so, yes. We just need to wait for the machinery to do its job and > bump the upper constraint now. > > Stephen Hi Stephen, I uploaded a patched version of warlock to Unstable with the test fixed for the new jsonschema. However, when looking at the python-jsonschema pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other OpenStack projects: https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema This includes: - designate - ironic - nova - sahara I'll try to see what I can do to fix these, maybe some of the failures are unrelated (I haven't investigated yet). Cheers, Thomas Goirand (zigo) From smooney at redhat.com Thu Jul 14 14:01:14 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jul 2022 15:01:14 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> Message-ID: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> On Thu, 2022-07-14 at 15:38 +0200, Thomas Goirand wrote: > Hi everyone! > > I know the usual answer: "we don't do that", or "this is unsupported in > current version", however, as always, that's the way things are: > OpenStack isn't alone living in Debian, and Debian Unstable now has > Python 3.11, which isn't going to change simply because the OpenStack > community decides that "it's not supported". > > So it'd be nice if we could support it. I'm quite sure that, as usual, > the upload of a new Python interpreter version will break my world. I'll > try to summit patches when I can, but I also expect help if possible. > > The challenge is: Debian Bookworm will be shipping Zed and Python 3.11, > most likely... thanks for the heads up. last time i tried using bookworm i had to force 3.9 via a virual enve to fully deploy a working openstack. i know that you and the eventlet folk have actuly resolved the issue i hit already so 3.10 shoudl work now but i suspect evently will be the most fragil thing with getting 3.11 to work. in terms of offical testing runtime you are correct that "this is unsupported in current version" since we determin the supported/tested interperters for a release at the start fo the cycle. and currenlty we only test 3.8 and 3.9 with experimatal support for 3.10 im not against reviewing/merging patches that are needed for 3.11 as long as we do not break 3.8 for next cycle im hoping we will drop ubuntu 20.04 in favor of ubuntu 22.04 and we can rais our min version to 3.9 and add 3.10 and 3.11 support formally to our testing runtimes if those are aviable to test with in lts releases such as debian 11 or ubuntu 22.04 so making a start on that in zed on a best effort baisis i think makes sense. the only thing is that if we have to choose betten 3.8 support and 3.11 we need to ensure we maintian the agreed runtime supprot but i doubt we will need to make that choice. do we currently have 3.11 aviable in any of the ci images? 
i belive we have 22.04 image aviable is it installbale there or do we have debian bookworm images we can use to add a non voting tox py311 job to the relevent project repos? > > Cheers, > > Thomas Goirand (zigo) > From fungi at yuggoth.org Thu Jul 14 14:30:49 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jul 2022 14:30:49 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: <20220714143048.gxznifh7oeaaqldi@yuggoth.org> On 2022-07-14 15:01:14 +0100 (+0100), Sean Mooney wrote: [...] > do we currently have 3.11 aviable in any of the ci images? i > belive we have 22.04 image aviable is it installbale there or do > we have debian bookworm images we can use to add a non voting tox > py311 job to the relevent project repos? Not to my knowledge, no. Ubuntu inherits most of their packages from Debian, which has only just added a Python 3.11 pre-release, so it will take time to end up even in Ubuntu under development (Ubuntu Kinetic which is slated to become 22.10 still only has python3.10 packages for the moment). It's probable they'll backport a python3.11 package to Jammy once available, though there's no guarantee, and based on historical backports it probably won't be until upstream 3.11.1 is tagged at the very earliest. Keep in mind that what Debian has at the moment is a package of Python 3.11.0b4, since 3.11.0 isn't even scheduled for an upstream release until October (two days before we're planning to release OpenStack Zed). Further, it's not even in Debian bookworm yet, and it's hard to predict how soon it will be able to transition out of unstable either. Let's be clear, what's being asked here is that OpenStack not just test against the newest available Python release, but in fact to continually test against pre-releases of the next Python while it's still being developed. While I understand that this would be nice, I hardly think it's a reasonable thing to expect. We have a hard enough time just keeping up with actual releases of Python which are current at the time we start a development cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rdhasman at redhat.com Thu Jul 14 14:47:00 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 14 Jul 2022 20:17:00 +0530 Subject: [cinder] Festival of XS reviews Message-ID: Hello Argonauts, We will be having our monthly festival of XS reviews tomorrow i.e. 15th July (Friday) from 1400-1600 UTC. Following are some additional details: Date: 15th July, 2022 Time: 1400-1600 UTC Meeting link: https://meetpad.opendev.org/cinder-festival-of-reviews etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Thu Jul 14 03:19:11 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Thu, 14 Jul 2022 08:49:11 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Lokendra, The CN field is missing. Can you add that and generate the certificate again. CN=ipaddress Also add dns.1=ipaddress under alt_names for precaution. 
Vikarna On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, wrote: > HI Vikarna, > Thanks for the inputs. > I am note able to access any tabs in GUI. > [image: image.png] > > to re-state, we are failing at the time of deployment at step4 : > > > PLAY [External deployment step 4] > ********************************************** > 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | > TASK | External deployment step 4 > 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | > OK | External deployment step 4 | undercloud -> localhost | result={ > "changed": false, > "msg": "Use --start-at-task 'External deployment step 4' to resume > from this task" > } > [WARNING]: ('undercloud -> localhost', > '525400ae-089b-870a-fab6-0000000000d7') > missing from stats > 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | > TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s > 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | > INCLUDED | > /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml > | undercloud > 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | > TASK | Clean up legacy Cinder keystone catalog entries > 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | > OK | Clean up legacy Cinder keystone catalog entries | undercloud | > item={'service_name': 'cinderv2', 'service_type': 'volumev2'} > 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:24.204562 | 2.48s > 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | > OK | Clean up legacy Cinder keystone catalog entries | undercloud | > item={'service_name': 'cinderv3', 'service_type': 'volume'} > 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:26.122584 | 4.40s > 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:26.124296 | 4.40s > 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | > TASK | Manage Keystone resources for OpenStack services > 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | > TIMING | Manage Keystone resources for OpenStack services | undercloud | > 0:11:26.169842 | 0.03s > 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | > TASK | Gather variables for each operating system > 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | > TIMING | tripleo_keystone_resources : Gather variables for each operating > system | undercloud | 0:11:26.253383 | 0.04s > 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | > TASK | Create Keystone Admin resources > 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | > TIMING | tripleo_keystone_resources : Create Keystone Admin resources | > undercloud | 0:11:26.299608 | 0.03s > 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | > INCLUDED | > /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | > undercloud > 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | > TASK | Create default domain > 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | > OK | Create default domain | undercloud > 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | > TIMING | 
tripleo_keystone_resources : Create default domain | undercloud | > 0:11:28.437360 | 2.09s > 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | > TASK | Create admin and service projects > 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | > TIMING | tripleo_keystone_resources : Create admin and service projects | > undercloud | 0:11:28.483468 | 0.03s > 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | > INCLUDED | > /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | > undercloud > 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | > TASK | Async creation of Keystone project > 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | > CHANGED | Async creation of Keystone project | undercloud | item=admin > 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.238078 | 0.72s > 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | > CHANGED | Async creation of Keystone project | undercloud | item=service > 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.586587 | 1.06s > 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.587916 | 1.07s > 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | > TASK | Check Keystone project status > 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | > WAITING | Check Keystone project status | undercloud | 30 retries left > 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | > OK | Check Keystone project status | undercloud | item=admin > 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.260666 | 5.66s > 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | > OK | Check Keystone project status | undercloud | item=service > 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.494729 | 5.89s > 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.498771 | 5.89s > 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | > TASK | Create admin role > 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | > OK | Create admin role | undercloud > 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | > TIMING | tripleo_keystone_resources : Create admin role | undercloud | > 0:11:37.725949 | 2.20s > 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | > TASK | Create _member_ role > 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | > SKIPPED | Create _member_ role | undercloud > 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | > TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | > 0:11:37.783369 | 0.04s > 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | > TASK | Create admin user > 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | > CHANGED | Create admin user | undercloud > 
2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | > TIMING | tripleo_keystone_resources : Create admin user | undercloud | > 0:11:41.145472 | 3.34s > 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | > TASK | Assign admin role to admin project for admin user > 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | > OK | Assign admin role to admin project for admin user | undercloud > 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | > TIMING | tripleo_keystone_resources : Assign admin role to admin project > for admin user | undercloud | 0:11:44.288848 | 3.13s > 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | > TASK | Assign _member_ role to admin project for admin user > 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | > SKIPPED | Assign _member_ role to admin project for admin user | undercloud > 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | > TIMING | tripleo_keystone_resources : Assign _member_ role to admin project > for admin user | undercloud | 0:11:44.346479 | 0.04s > 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | > TASK | Create identity service > 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | > OK | Create identity service | undercloud > 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | > TIMING | tripleo_keystone_resources : Create identity service | undercloud > | 0:11:46.022362 | 1.66s > 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | > TASK | Create identity public endpoint > 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | > OK | Create identity public endpoint | undercloud > 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | > TIMING | tripleo_keystone_resources : Create identity public endpoint | > undercloud | 0:11:48.233349 | 2.19s > 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | > TASK | Create identity internal endpoint > 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | > FATAL | Create identity internal endpoint | undercloud | error={"changed": > false, "extra_data": {"data": null, "details": "The request you have made > requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list > services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, > The request you have made requires authentication."} > 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | > TIMING | tripleo_keystone_resources : Create identity internal endpoint | > undercloud | 0:11:50.660654 | 2.41s > > PLAY RECAP > ********************************************************************* > localhost : ok=1 changed=0 unreachable=0 > failed=0 skipped=2 rescued=0 ignored=0 > overcloud-controller-0 : ok=437 changed=103 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-1 : ok=435 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-2 : ok=432 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 > failed=0 skipped=198 rescued=0 ignored=0 > undercloud : ok=39 changed=7 unreachable=0 > failed=1 skipped=6 rescued=0 ignored=0 > > Also : > (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf > [req] > default_bits = 
2048 > prompt = no > default_md = sha256 > distinguished_name = dn > [dn] > C=IN > ST=UTTAR PRADESH > L=NOIDA > O=HSC > OU=HSC > emailAddress=demo at demo.com > > v3.ext: > (undercloud) [stack at undercloud oc-cert]$ cat v3.ext > authorityKeyIdentifier=keyid,issuer > basicConstraints=CA:FALSE > keyUsage = digitalSignature, nonRepudiation, keyEncipherment, > dataEncipherment > subjectAltName = @alt_names > [alt_names] > IP.1=fd00:fd00:fd00:9900::81 > > Using these files we create other certificates. > Please check and let me know in case we need anything else. > > > On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe > wrote: > >> Hi Lokendra, >> >> Are you able to access all the tabs in the OpenStack dashboard without >> any error? If not, please retry generating the certificate. Also, share the >> openssl.cnf or server.cnf. >> >> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour >> wrote: >> >>> Hi Team, >>> Any input on this case raised. >>> >>> Thanks, >>> Lokendra >>> >>> >>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Shephard/Swogat, >>>> I tried changing the setting as suggested and it looks like it has >>>> failed at step 4 with error: >>>> >>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>> 0:24:47.736198 | 2.21s >>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>> TASK | Create identity internal endpoint >>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>>> false, "extra_data": {"data": null, "details": "The request you have made >>>> requires authentication.", "response": >>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>> The request you have made requires authentication."} >>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>> >>>> >>>> Checking further the endpoint list: >>>> I see only one endpoint for keystone is gettin created. >>>> >>>> DeprecationWarning >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> | ID | Region | Service Name | Service >>>> Type | Enabled | Interface | URL | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>> identity | True | admin | http://30.30.30.173:35357 >>>> | >>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>> | >>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>> | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> >>>> >>>> it looks like something related to the SSL, we have also verified that >>>> the GUI login screen shows that Certificates are applied. >>>> exploring more in logs, meanwhile any suggestions or know observation >>>> would be of great help. 
>>>> thanks again for the support. >>>> >>>> Best Regards, >>>> Lokendra >>>> >>>> >>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>> swogatpradhan22 at gmail.com> wrote: >>>> >>>>> I had faced a similar kind of issue, for ip based setup you need to >>>>> specify the domain name as the ip that you are going to use, this error is >>>>> showing up because the ssl is ip based but the fqdns seems to be >>>>> undercloud.com or overcloud.example.com. >>>>> I think for undercloud you can change the undercloud.conf. >>>>> >>>>> And will it work if we specify clouddomain parameter to the IP address >>>>> for overcloud? because it seems he has not specified the clouddomain >>>>> parameter and overcloud.example.com is the default domain for >>>>> overcloud.example.com. >>>>> >>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>>>> wrote: >>>>> >>>>>> What is the domain name you have specified in the undercloud.conf >>>>>> file? >>>>>> And what is the fqdn name used for the generation of the SSL cert? >>>>>> >>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Team, >>>>>>> We were trying to install overcloud with SSL enabled for which the >>>>>>> UC is installed, but OC install is getting failed at step 4: >>>>>>> >>>>>>> ERROR >>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>> server_hostname)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>> send\n timeout=timeout\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>> authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, 
>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>> authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>> request\n resp = send(**kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>> in _send_request\n raise >>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>> run_globals)\n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 185, in \n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 181, in main\n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>> line 407, in __call__\n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 141, in run\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>> File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>> max_version='3.latest')\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>> get_endpoint\n return 
self.session.get_endpoint(auth or self.auth, >>>>>>> **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>> **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>> self.get_access(session).service_catalog\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>> self._do_create_plugin(session)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:01.271914 | 2.47s >>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:01.273659 | 2.47s >>>>>>> >>>>>>> PLAY RECAP >>>>>>> ********************************************************************* >>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> >>>>>>> >>>>>>> in the deploy.sh: >>>>>>> >>>>>>> openstack overcloud deploy --templates \ >>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>> --baremetal-deployment >>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>> --network-config \ >>>>>>> -e 
/home/stack/templates/environment.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
>>>>>>> -e /home/stack/templates/ironic-config.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>>>>>>> -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>>>>>>> -e /home/stack/containers-prepare-parameter.yaml
>>>>>>>
>>>>>>> The additional environment files (the three ssl/*.yaml entries above) were passed with modifications:
>>>>>>> tls-endpoints-public-ip.yaml:
>>>>>>> Passed as is, with the defaults.
>>>>>>> enable-tls.yaml:
>>>>>>>
>>>>>>> # *******************************************************************
>>>>>>> # This file was created automatically by the sample environment
>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>> # Users are recommended to make changes to a copy of the file instead
>>>>>>> # of the original, if any customizations are needed.
>>>>>>> # *******************************************************************
>>>>>>> # title: Enable SSL on OpenStack Public Endpoints
>>>>>>> # description: |
>>>>>>> #   Use this environment to pass in certificates for SSL deployments.
>>>>>>> #   For these values to take effect, one of the tls-endpoints-*.yaml
>>>>>>> #   environments must also be used.
>>>>>>> parameter_defaults:
>>>>>>>   # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>>>>>>   # Type: boolean
>>>>>>>   HorizonSecureCookies: True
>>>>>>>
>>>>>>>   # Specifies the default CA cert to use if TLS is used for services in the public network.
>>>>>>>   # Type: string
>>>>>>>   PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>>>>>>
>>>>>>>   # The content of the SSL certificate (without Key) in PEM format.
>>>>>>>   # Type: string
>>>>>>>   SSLRootCertificate: |
>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>     ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>     -----END CERTIFICATE-----
>>>>>>>
>>>>>>>   SSLCertificate: |
>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>     ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>     -----END CERTIFICATE-----
>>>>>>>
>>>>>>>   # The content of an SSL intermediate CA certificate in PEM format.
>>>>>>>   # Type: string
>>>>>>>   SSLIntermediateCertificate: ''
>>>>>>>
>>>>>>>   # The content of the SSL Key in PEM format.
>>>>>>>   # Type: string
>>>>>>>   SSLKey: |
>>>>>>>     -----BEGIN PRIVATE KEY-----
>>>>>>>     ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>     -----END PRIVATE KEY-----
>>>>>>>
>>>>>>>   # ******************************************************
>>>>>>>   # Static parameters - these are values that must be
>>>>>>>   # included in the environment but should not be changed.
>>>>>>>   # ******************************************************
>>>>>>>   # The filepath of the certificate as it will be stored in the controller.
>>>>>>>   # Type: string
>>>>>>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>>>>>>
>>>>>>>   # *********************
>>>>>>>   # End static parameters
>>>>>>>   # *********************
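As a side note on the certificate material trimmed above: the SSL failure earlier in this thread is a hostname/IP mismatch, so for an IP-only deployment the certificate needs to carry the endpoint IP itself in its subjectAltName. A minimal, illustrative sketch of generating such a self-signed certificate (this assumes OpenSSL >= 1.1.1 for -addext; the IPv6 address is the public VIP from the traceback, substitute your own):

# Self-signed certificate whose CN and subjectAltName both carry the
# public endpoint IP, so clients connecting by IP can validate it.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout overcloud.key -out overcloud.crt \
  -subj "/CN=fd00:fd00:fd00:9900::2ef" \
  -addext "subjectAltName = IP:fd00:fd00:fd00:9900::2ef"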
>>>>>>> inject-trust-anchor.yaml:
>>>>>>>
>>>>>>> # *******************************************************************
>>>>>>> # This file was created automatically by the sample environment
>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>> # Users are recommended to make changes to a copy of the file instead
>>>>>>> # of the original, if any customizations are needed.
>>>>>>> # *******************************************************************
>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>>>>> # description: |
>>>>>>> #   When using an SSL certificate signed by a CA that is not in the default
>>>>>>> #   list of CAs, this environment allows adding a custom CA certificate to
>>>>>>> #   the overcloud nodes.
>>>>>>> parameter_defaults:
>>>>>>>   # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
>>>>>>>   # Mandatory. This parameter must be set by the user.
>>>>>>>   # Type: string
>>>>>>>   SSLRootCertificate: |
>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>     ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>     -----END CERTIFICATE-----
>>>>>>>
>>>>>>> resource_registry:
>>>>>>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>>>
>>>>>>> The procedure to create these files followed "Deploying with SSL" in the TripleO 3.0.0 documentation (openstack.org).
>>>>>>>
>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a self-signed, IP-based certificate, without DNS.
>>>>>>>
>>>>>>> Any idea around this error would be of great help.
>>>>>>>
>>>>>>> --
>>>>>>> skype: lokendrarathour
>>>
>> 
>
> --
> ~ Lokendra
> skype: lokendrarathour
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image.png
Type: image/png
Size: 81010 bytes
Desc: not available
URL: 

From renliang at uniontech.com  Thu Jul 14 06:48:14 2022
From: renliang at uniontech.com (任亮)
Date: Thu, 14 Jul 2022 14:48:14 +0800
Subject: [skyline]A problem with skyline packaging RPM
Message-ID: 

Hello, a series of packages for Skyline is provided at https://pypi.org/user/99cloud/, but only skyline-apiserver has a published source package. Building skyline-apiserver requires other packages as dependencies, for example skyline-config, skyline-log and skyline-policy-manager. These packages do not provide corresponding source packages, and we want to package all of them into RPMs. Please provide source packages, or another, better way to build them.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From stephenfin at redhat.com Thu Jul 14 14:54:58 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jul 2022 15:54:58 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> Message-ID: <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> On Thu, 2022-07-14 at 15:51 +0200, Thomas Goirand wrote: > On 7/14/22 11:02, Stephen Finucane wrote: > > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > > > On 7/12/22 14:14, Stephen Finucane wrote: > > > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > > > Hi Stephen, > > > > > > > > > > I hope you don't mind I ping and up this thread. > > > > > > > > > > Thanks a lot for this work. Any more progress here? > > > > > > > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > > > > folks to remove their own cap now [2] so that we can raise the version in upper > > > > constraint. > > > > > > > > Stephen > > > > > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > > > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > > > > > Hi ! > > > > > > I see these 2 are now merged, so it's job (well) done, right? > > > > I'd assume so, yes. We just need to wait for the machinery to do its job and > > bump the upper constraint now. > > > > Stephen > > Hi Stephen, > > I uploaded a patched version of warlock to Unstable with the test fixed > for the new jsonschema. However, when looking at the python-jsonschema > pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other > OpenStack projects: > > https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema > > This includes: > - designate > - ironic > - nova The nova fix was trivial enough: https://review.opendev.org/c/openstack/nova/+/849867 I'll have to let someone else fix the other projects. We'll see these flagged as soon as a patch to bump the upper-constraint of jsonschema hits openstack/requirements (which will happen once the python-glanceclient upper constraint bump to same merges). Stephen > - sahara > > I'll try to see what I can do to fix these, maybe some of the failures > are unrelated (I haven't investigated yet). > > Cheers, > > Thomas Goirand (zigo) > From katonalala at gmail.com Thu Jul 14 15:05:36 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Jul 2022 17:05:36 +0200 Subject: [neutron] Drivers meeting - Friday 14.7.2022 - cancelled Message-ID: Hi Neutron Drivers! Due to the lack of agenda, let's cancel tomorrow's drivers meeting. See You on the meeting next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Thu Jul 14 16:09:09 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 14 Jul 2022 18:09:09 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> Message-ID: Ironic was not too bad either: https://review.opendev.org/c/openstack/ironic/+/849882 Similar for Nova: https://review.opendev.org/c/openstack/nova/+/849881 On Thu, Jul 14, 2022 at 5:08 PM Stephen Finucane wrote: > On Thu, 2022-07-14 at 15:51 +0200, Thomas Goirand wrote: > > On 7/14/22 11:02, Stephen Finucane wrote: > > > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > > > > On 7/12/22 14:14, Stephen Finucane wrote: > > > > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > > > > Hi Stephen, > > > > > > > > > > > > I hope you don't mind I ping and up this thread. > > > > > > > > > > > > Thanks a lot for this work. Any more progress here? > > > > > > > > > > We've uncapped warlock in openstack/requirements [1]. We just need > the glance > > > > > folks to remove their own cap now [2] so that we can raise the > version in upper > > > > > constraint. > > > > > > > > > > Stephen > > > > > > > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > > > > [2] > https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > > > > > > > Hi ! > > > > > > > > I see these 2 are now merged, so it's job (well) done, right? > > > > > > I'd assume so, yes. We just need to wait for the machinery to do its > job and > > > bump the upper constraint now. > > > > > > Stephen > > > > Hi Stephen, > > > > I uploaded a patched version of warlock to Unstable with the test fixed > > for the new jsonschema. However, when looking at the python-jsonschema > > pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other > > OpenStack projects: > > > > > https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema > > > > This includes: > > - designate > > - ironic > > - nova > > The nova fix was trivial enough: > > https://review.opendev.org/c/openstack/nova/+/849867 > > I'll have to let someone else fix the other projects. We'll see these > flagged as > soon as a patch to bump the upper-constraint of jsonschema hits > openstack/requirements (which will happen once the python-glanceclient > upper > constraint bump to same merges). > > Stephen > > > - sahara > > > > I'll try to see what I can do to fix these, maybe some of the failures > > are unrelated (I haven't investigated yet). > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From mdemaced at redhat.com  Thu Jul 14 16:31:54 2022
From: mdemaced at redhat.com (Maysa De Macedo Souza)
Date: Thu, 14 Jul 2022 18:31:54 +0200
Subject: [kuryr] Proposal to clean up Kuryr core reviewers
Message-ID: 

Hello,

We went through the list of current Kuryr core reviewers on the last PTG session and we noticed a couple of people that are not active Kuryr contributors anymore. I would like to propose removing the following contributors from the Kuryr core team:

- Irena Berezovsky
- Gal Sagie
- Liping Mao

I take this opportunity to thank all of them for their contributions to the Kuryr project.

I will wait one week for any feedback before proceeding with the removal.

Thank you,
Maysa Macedo
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From juliaashleykreger at gmail.com  Thu Jul 14 16:45:31 2022
From: juliaashleykreger at gmail.com (Julia Kreger)
Date: Thu, 14 Jul 2022 09:45:31 -0700
Subject: [IRONIC] - Various questions around network features.
In-Reply-To: 
References: 
Message-ID: 

On Wed, Jul 13, 2022 at 1:07 PM Gaël THEROND  wrote:
>
> Hi Julia!
>
> Thanks a lot for those explanations :-) Most of it confirms my understanding, and I now have a clearer view that will let me select our test users for the service.
>
> Regarding the Aruba switches, those are pretty cool, even if, as you pointed out, this feature can actually lead you to some weird if not dangerous situations x)
>
> Ok, noted about the Horizon issue. It can be a little bit tricky for our end users to understand, tbh, as they will expect the IP selected by neutron and displayed on the dashboard to be the one used by the node, even on a full flat network such as the provisioning network; for now we will deal with it by explaining it to them.

A challenging point here is there is no true way to hint that this is the case upfront. Nova acts as an abstraction layer in between and it really needs that networking information piece of the puzzle to generate metadata for an instance. I think, embracing it and also supporting an ML2 integrated configuration where individual switch ports are changed, is ultimately the most powerful configuration, but the challenge we hear from operators upstream is generally that network operations groups don't want software toggling switchport vlan assignments. I get why, as I've worked in NetOps in the past; it is largely a trust issue, and I've just not figured out concrete ways to build the trust needed there. :(

> Regarding my point 2, yeah, I knew the purpose of direct deploy, I just spelled it out, I don't know why. My point was rather:
>
> At first, when I configured our ironic deployment, I had that weird issue where, if I don't set the pxe_filter option to noop instead of dnsmasq, deploying anything fails, as the conductor doesn't correctly erase the "ignore" part of the string in the dhcp_host_filter file for dnsmasq. If I set this filter to noop, then obviously I don't need neutron to provide the ironic-provision-network anymore, as anyone plugged into ports with my VLAN101 set as native VLAN will be able to get an IP from the PXE dnsmasq. I was wondering how you were making it work!

This explains a lot, and is really not the intended pattern of use. But it is a pattern upstream generally sees in more "standalone" cases, or cases of direct interaction with Ironic's API.
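For reference, the knob being discussed here lives in ironic-inspector.conf; a minimal sketch of the two settings contrasted above (illustrative only, not taken from the original mails):

[pxe_filter]
# "dnsmasq" keeps the inspection dnsmasq host file in sync so that only
# expected MACs are answered; "noop" disables filtering entirely, so
# anything booting on the provisioning VLAN can reach the PXE dnsmasq.
driver = dnsmasq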
> I'm still having a hard time mapping how ironic needs both a PXE-dedicated dnsmasq for introspection and can then use the neutron dnsmasq DHCP once you want to provision a host. Is that because neutron (kinda) lacks DHCP options support on its managed subnets?

At this point, dnsmasq for introspection is *largely* for the purposes of discovering hardware you don't know about and supporting the oldest introspection workflow, where inspection is directly triggered with the introspection service. Depending on the version of Ironic, and if you have a mac address already known to Ironic, you can trigger the inspection workflow directly with the state machine, and it will populate network configuration in neutron to perform introspection on the node.

Neutron doesn't really lack DHCP options support on its subnets, although it is very dnsmasq focused. The challenge we tend to see here is that getting host configuration and networking properly aligned for PXE boot operations doesn't always work out perfectly, so it becomes just easier to get things to initially work as you did.

> All in all, the multi-tenancy networking requirements are much clearer to me now thanks to you!

Excellent to hear! If you feel like anything is missing in our documentation, we do welcome patches! I do suspect the whole bit about introspection dnsmasq might need to be further highlighted or delineated in the documentation.

-Julia

> On Tue, Jul 12, 2022 at 00:13, Julia Kreger  wrote:
>>
>> Greetings! Hopefully these answers help!
>>
>> On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND  wrote:
>> >
>> > Hi everyone, I'm currently working back again with Ironic and it's amazing!
>> >
>> > However, during our demo session to our users, a few questions arose.
>> >
>> > We're currently deploying nodes using a private vlan that can't be reached from outside of the Openstack network fabric (vlan 101 - 192.168.101.0/24) and everything is fine with this provisioning network, as our ToR switches all know about it and the other control plane VLANs, such as the internal APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact the internal IRONIC APIs.
>>
>> Nice, I've had my lab configured like this in the past.
>>
>> > (When you declare a port as a trunk allowing all VLANs on an Aruba switch, it seems it automatically analyses the CIDR your host tries to reach from your VLAN and routes everything to the corresponding VLAN that matches the destination IP.)
>>
>> Ugh, that... could be fun :\
>>
>> > So now, I still have a few tiny issues:
>> >
>> > 1°/ When I spawn a nova instance on an ironic host that is set to use a flat network (from horizon, as a user), why does the nova wizard still ask for a neutron network if it's not set on the provisioned host by the IPA ramdisk right after the whole disk image copy? Is that some missing development on horizon, or did I miss something?
>>
>> Horizon just is not aware... and you can actually have entirely
>> different DHCP pools on the same flat network, so that neutron network
>> is intended for the instance's addressing to utilize.
>>
>> Ironic does just ask for an allocation from a provisioning network,
>> which can and *should* be a different network than the tenant network.
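To illustrate that separation, a dedicated flat provisioning network is typically created along these lines and then referenced from the [neutron] provisioning_network/cleaning_network options in ironic.conf (names and ranges are examples only, reusing the VLAN 101 subnet mentioned earlier in the thread):

openstack network create --provider-network-type flat \
  --provider-physical-network physnet1 provisioning
openstack subnet create --network provisioning \
  --subnet-range 192.168.101.0/24 \
  --allocation-pool start=192.168.101.100,end=192.168.101.200 \
  provisioning-subnet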
>> > 2°/ In a flat network layout deployment using the direct deploy scenario for images, am I still supposed to create an ironic provisioning network in neutron?
>> >
>> > From my understanding (and actually my tests) we don't, as any host booting on the provisioning vlan will catch an IP and initiate the bootp sequence, since the dnsmasq is just set to do that and provide the IPA ramdisk, but it's a bit confusing, as a lot of documentation explicitly requires this network to exist in neutron.
>>
>> Yes. Direct is shorthand for "Copy it over the network and write it
>> directly to disk". It still needs an IP address on the provisioning
>> network (think subnet instead of distinct L2 broadcast domain).
>>
>> When you ask nova for an instance, it sends over what the machine
>> should use as a "VIF" (neutron port), however that is never actually
>> bound configuration wise into neutron until after the deployment
>> completes.
>>
>> It *could* be that your neutron config is such that it just works
>> anyway, but I suspect upstream contributors would be a bit confused if
>> you reported an issue and had no provisioning network defined.
>>
>> > 3°/ My whole Openstack network setup is using Openvswitch and vxlan tunnels on top of a spine/leaf architecture using Aruba CX8360 switches (for both spine and leafs); am I required to use either the networking-generic-switch driver or a vendor neutron driver? If so, how will this driver be able to instruct the switch to assign the host port the correct openvswitch vlan id and register the correct vxlan to openvswitch from this port? I mean, ok, neutron knows the vxlan and openvswitch the tunnel vlan id/interface, but what is the glue for all of that?
>>
>> If you're happy with flat networks, no.
>>
>> If you want tenant isolation networking wise, yes.
>>
>> NGS and baremetal-port-aware/enabled Neutron ML2 drivers take the port
>> level local link configuration (that is, Ironic includes the port
>> information (local link connection, physical network, and some other
>> details) to Neutron with the port binding request). Those ML2 drivers
>> then either request that the switch configuration be updated, or take
>> locally configured credentials to modify port configuration in Neutron,
>> and log into the switch to toggle the configuration of the access port
>> which the baremetal node is attached to.
>>
>> Generally, they are not vxlan network aware, and at least with
>> networking-generic-switch vlan ID numbers are expected and allocated
>> via neutron.
>>
>> Sort of like the software is logging into the switch and running
>> something along the lines of "conf t;int gi0/21;switchport mode
>> access;switchport access vlan 391 ; wri mem"
>>
>> > 4°/ I've successfully used openstack cloud oriented CentOS and Debian images or snapshots of VMs to provision my hosts; this is an awesome feature, but I'm wondering if there is a way to let those hosts' cloud-init instances request the neutron metadata endpoint?
>>
>> Generally yes, you *can* use network attached metadata with neutron
>> *as long as* your switches know to direct the traffic for the metadata
>> IP to the Neutron metadata service(s).
>>
>> We know of operators who have done it without issues, but often that
>> additional switch configured route is not always the best thing.
>> Generally we recommend enabling and using configuration drives, so the
>> metadata is able to be picked up by cloud-init.
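As a concrete illustration of that recommendation, a config drive can be requested per instance at boot time; a minimal sketch, where the image, flavor and network names are placeholders (nova.conf's force_config_drive=True can also enforce it cloud-wide):

openstack server create --image centos-8-cloud --flavor baremetal \
  --network tenant-net --config-drive true my-baremetal-node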
>> > I was a bit surprised about the ironic networking part, as I was expecting the IPA ramdisk to at least be able to set up the host OS with the appropriate network configuration file for whole disk images that do not use encryption, by injecting that information from the neutron API into the host disk while mounted (right after the image dd).
>>
>> IPA has no knowledge of how to modify the host OS in this regard.
>> Modifying the host OS has generally been something the ironic
>> community has avoided, since it is not exactly cloudy to have to do so.
>> Generally most clouds are running with DHCP, so as long as that is
>> enabled and configured, things should generally "just work".
>>
>> Hopefully that provides a little more context. Nothing prevents you
>> from writing your own hardware manager that does exactly this, for
>> what it is worth.
>>
>> > All in all I really like the ironic approach to the baremetal provisioning process, and I'm pretty sure that I'm just missing a bit of understanding of the networking part, but it's really the most confusing part to me, as I feel like there is a missing link between neutron and the host HW or the switches.
>>
>> Thanks! It is definitely one of the more complex parts given there are
>> many moving parts, and everyone wants (or needs) to have their
>> networking configured just a little differently.
>>
>> Hopefully I've kind of put some of the details out there; if you need
>> more information, please feel free to reach out, and also please feel
>> free to ask questions in #openstack-ironic on irc.oftc.net.
>>
>> > Thanks a lot to anyone who takes the time to explain this to me :-)
>>
>> :)

From arnaud.morin at gmail.com  Thu Jul 14 17:31:15 2022
From: arnaud.morin at gmail.com (Arnaud)
Date: Thu, 14 Jul 2022 19:31:15 +0200
Subject: Re: [nova][ops] seeking input about local/ephemeral disk encryption feature naming
In-Reply-To: 
References: 
Message-ID: <5DA30C81-328C-4F99-A2F0-9EB40457E68B@gmail.com>

Hi,

Very good point about the naming. My quick opinion is that "ephemeral" is not perfect, but it was used for years, so some users are used to it anyway. If we rename now, it should be to a clearly more understandable name.

Anyway, I'll forward this to some of my colleagues who could have a much stronger opinion on this.

Cheers,
Arnaud
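(For readers who have not followed the series: the flavor-side interface discussed in melanie's mail quoted below looks roughly like the sketch here. It is illustrative only, since the patches were still under review at the time.)

# Request encryption of all local disks for instances of this flavor;
# "hw:ephemeral_encryption" is the extra spec name being debated.
openstack flavor set --property hw:ephemeral_encryption=true my-flavor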
On July 14, 2022 01:16:58 GMT+02:00, melanie witt  wrote:
>Hi everyone,
>
>A potential issue regarding naming has come up during review of the
>ephemeral storage encryption feature [1][2] patch series [3] and we're
>looking for input before moving forward with any naming/terminology
>changes across the specs and the entire patch series.
>
>The concern that has been raised is around use of the term "ephemeral"
>for the name of this feature including traits, extra specs, and image
>properties [4].
>
>For context, the objective of this feature is to provide users with the
>ability to specify that all local disks for the instance be encrypted.
>This includes the root disk and any other local disks.
>
>The initial concern is around use of the word "ephemeral" for the root disk.
>
>My general interpretation of the word "ephemeral" for storage in nova
>has been that it means attached storage that only persists for the
>lifetime of the instance and is destroyed if and when the instance is
>destroyed. This is in contrast to attached cinder volumes which can
>persist after instance deletion.
>
>But should "ephemeral" ever be used to describe a root disk? Is it
>incorrect and/or ambiguous to refer to it as such?
>
>This is part of what is being discussed in [4].
>
>During discussion, I also realized there is a separate gap in the above
>interpretation of "ephemeral" in nova. When cinder volumes are attached
>to an instance, their persistence after the instance is deleted depends
>on whether the 'delete_on_termination' attribute is set to true in the
>request payload when the instance is created [5] or when attaching a
>volume to the instance [6] or updating a volume attached to the instance
>[7].
>
>This means that in the currently proposed patches, if a user specifies
>hw:ephemeral_encryption in the extra_specs, for example, and they also
>have a volume with delete_on_termination=True attached, only the root
>disk will be encrypted via the extra spec -- the volume would not be
>encrypted. Encryption of the volume has to be requested in cinder.
>
>Could this mislead a user into thinking both the root disk and cinder
>volume are encrypted when only the root disk is?
>
>Because of the above issues, we are considering whether we should change
>the terminology used in this feature at this stage. Some ideas include
>"local encryption", "local disk encryption", "disk encryption". IMHO
>"disk_encryption" is ambiguous in its own way because an attached cinder
>volume also has a disk.
>
>Changing the naming will be a non-trivial amount of work, so we wanted
>to get additional input before going ahead with such a change.
>
>Another thing noted in a comment on another patch in the series [8] is
>that the os-traits for this feature have already been merged [9]. If we
>decide to change the naming, should we go ahead and use these traits
>as-is and have them not match the naming in nova or should we deprecate
>them and add new traits that match the new name and use those?
>
>I hope this makes sense and your input would be much appreciated.
>
>Cheers,
>-melwitt
>
>[1] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-storage-encryption.html
>[2] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-encryption-libvirt.html
>[3] https://review.opendev.org/q/topic:specs%252Fyoga%252Fapproved%252Fephemeral-encryption-libvirt
>[4] https://review.opendev.org/c/openstack/nova/+/764486/10/nova/api/validation/extra_specs/hw.py#516
>[5] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server
>[6] https://docs.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail
>[7] https://docs.openstack.org/api-ref/compute/?expanded=update-a-volume-attachment-detail
>[8] https://review.opendev.org/c/openstack/nova/+/760456/10/nova/scheduler/request_filter.py#425
>[9] https://github.com/openstack/os-traits/blob/f64d50e4dd2f21558fb73dd4b59cd1d4b121b707/os_traits/compute/ephemeral.py
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From przemyslaw.basa at redge.com  Thu Jul 14 18:37:05 2022
From: przemyslaw.basa at redge.com (Przemyslaw Basa)
Date: Thu, 14 Jul 2022 20:37:05 +0200
Subject: [placement] running out of VCPU resource
In-Reply-To: 
References: 
Message-ID: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com>

Hi,

Well, I think I figured it out. Following the Xena deployment instructions, mariadb was installed in version 10.6.5, and there seems to be some kind of bug in this version. Upgrading to 10.6.8 fixed this particular issue for me.
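For anyone wanting to confirm whether their own deployment runs an affected server before trying the same upgrade, the server version is easy to check (host and credentials below are illustrative):

% mysql -h db-host -u placement -p -e "SELECT VERSION();"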
I've checked some older and newer versions (10.5.6, 10.8.3) and problematic query behaves there like in 10.6.8. Here's how I've done my tests if someone is interested: % docker run --rm --detach --name mariadb-10.6.5 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.5 % docker run --rm --detach --name mariadb-10.6.8 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.8 % docker exec -i mariadb-10.6.5 mysql -u root -ptest < tables_dump.sql % docker exec -i mariadb-10.6.8 mysql -u root -ptest < tables_dump.sql % docker exec -i mariadb-10.6.5 mysql -u root -ptest -t test < test.sql +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | used | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 0 | 128 | 0 | 2 | 13318 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 1 | 1031723 | 2048 | 1 | NULL | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 2 | 901965 | 2 | 1 | NULL | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ % docker exec -i mariadb-10.6.8 mysql -u root -ptest -t test < test.sql +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | used | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 0 | 128 | 0 | 2 | 5 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 1 | 1031723 | 2048 | 1 | 13312 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 2 | 901965 | 2 | 1 | 1 | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ % cat tables_dump.sql create database test; connect test; CREATE TABLE `allocations` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `resource_provider_id` int(11) NOT NULL, `consumer_id` varchar(36) NOT NULL, `resource_class_id` int(11) NOT NULL, `used` int(11) NOT NULL, PRIMARY KEY (`id`), KEY `allocations_resource_provider_class_used_idx` (`resource_provider_id`,`resource_class_id`,`used`), KEY `allocations_resource_class_id_idx` (`resource_class_id`), KEY `allocations_consumer_id_idx` (`consumer_id`) ) ENGINE=InnoDB AUTO_INCREMENT=547 DEFAULT CHARSET=utf8mb3; CREATE TABLE `inventories` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `resource_provider_id` int(11) NOT NULL, `resource_class_id` int(11) NOT NULL, `total` int(11) NOT NULL, `reserved` int(11) NOT NULL, `min_unit` int(11) NOT NULL, `max_unit` int(11) NOT NULL, `step_size` int(11) NOT NULL, `allocation_ratio` float NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uniq_inventories0resource_provider_resource_class` (`resource_provider_id`,`resource_class_id`), KEY `inventories_resource_class_id_idx` (`resource_class_id`), KEY `inventories_resource_provider_id_idx` (`resource_provider_id`), KEY `inventories_resource_provider_resource_class_idx` (`resource_provider_id`,`resource_class_id`) ) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8mb3; CREATE TABLE `resource_providers` ( `created_at` 
datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `uuid` varchar(36) NOT NULL, `name` varchar(200) DEFAULT NULL, `generation` int(11) DEFAULT NULL, `root_provider_id` int(11) DEFAULT NULL, `parent_provider_id` int(11) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uniq_resource_providers0uuid` (`uuid`), UNIQUE KEY `uniq_resource_providers0name` (`name`), KEY `resource_providers_name_idx` (`name`), KEY `resource_providers_parent_provider_id_idx` (`parent_provider_id`), KEY `resource_providers_root_provider_id_idx` (`root_provider_id`), KEY `resource_providers_uuid_idx` (`uuid`), CONSTRAINT `resource_providers_ibfk_1` FOREIGN KEY (`parent_provider_id`) REFERENCES `resource_providers` (`id`), CONSTRAINT `resource_providers_ibfk_2` FOREIGN KEY (`root_provider_id`) REFERENCES `resource_providers` (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb3; INSERT INTO `allocations` VALUES ('2022-07-07 23:08:10',NULL,329,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',1,12288),('2022-07-07 23:08:10',NULL,332,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',0,4),('2022-07-08 06:26:28',NULL,335,4,'aec7aaea-10df-451b-b2ce-847099ee0110',1,2048),('2022-07-08 06:26:28',NULL,338,4,'aec7aaea-10df-451b-b2ce-847099ee0110',0,2),('2022-07-12 08:53:21',NULL,400,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',1,16384),('2022-07-12 08:53:21',NULL,403,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',0,2),('2022-07-14 08:24:27',NULL,538,5,'9681447d-57ec-45c7-af48-63be3c7201da',2,1),('2022-07-14 08:24:27',NULL,541,5,'9681447d-57ec-45c7-af48-63be3c7201da',1,1024),('2022-07-14 08:24:27',NULL,544,5,'9681447d-57ec-45c7-af48-63be3c7201da',0,1); INSERT INTO `resource_providers` VALUES ('2022-07-04 11:59:49','2022-07-13 13:03:08',1,'6ac81bb4-50ef-4784-8a64-9031afeaaa9d','p-os-compute01.openstack.local',50,1,NULL),('2022-07-04 12:00:49','2022-07-13 13:03:07',4,'a324b3b9-f8c8-4279-bf63-a27163fcf792','g-os-compute01.openstack.local',42,4,NULL),('2022-07-04 12:03:57','2022-07-14 08:24:27',5,'16f620c0-8c6f-4984-8d58-e2c00d1b32da','t-os-compute01.openstack.local',50,5,NULL); INSERT INTO `inventories` VALUES ('2022-07-04 11:59:50','2022-07-11 09:24:04',1,1,0,128,0,1,128,1,2),('2022-07-04 11:59:50','2022-07-11 09:24:04',4,1,1,1031723,2048,1,1031723,1,1),('2022-07-04 11:59:50','2022-07-11 09:24:04',7,1,2,901965,2,1,901965,1,1),('2022-07-04 12:01:53','2022-07-04 14:59:53',10,4,0,128,0,1,128,1,2),('2022-07-04 12:01:53','2022-07-04 14:59:53',13,4,1,1031723,2048,1,1031723,1,1),('2022-07-04 12:01:53','2022-07-04 14:59:53',16,4,2,901965,2,1,901965,1,1),('2022-07-04 12:03:57','2022-07-14 07:16:08',17,5,0,128,0,1,128,1,2),('2022-07-04 12:03:57','2022-07-14 07:09:11',20,5,1,1031723,2048,1,1031723,1,1),('2022-07-04 12:03:57','2022-07-14 07:09:11',23,5,2,901965,2,1,901965,1,1); % cat test.sql SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total, inv.reserved, inv.allocation_ratio, allocs.used FROM resource_providers AS rp JOIN inventories AS inv ON rp.id = inv.resource_provider_id LEFT JOIN ( SELECT resource_provider_id, resource_class_id, SUM(used) AS used FROM allocations WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5) GROUP BY resource_provider_id, resource_class_id ) AS allocs ON inv.resource_provider_id = allocs.resource_provider_id AND inv.resource_class_id = allocs.resource_class_id WHERE rp.id IN (5) AND inv.resource_class_id IN (0,1,2) ; Regards, Przemyslaw Basa From cboylan at sapwetik.org Thu Jul 14 19:10:57 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 14 
Jul 2022 12:10:57 -0700 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <20220714143048.gxznifh7oeaaqldi@yuggoth.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> Message-ID: <9ce7dcbf-8ece-407c-ad6f-e3aea7db58f8@www.fastmail.com> On Thu, Jul 14, 2022, at 7:30 AM, Jeremy Stanley wrote: > On 2022-07-14 15:01:14 +0100 (+0100), Sean Mooney wrote: > [...] >> do we currently have 3.11 aviable in any of the ci images? i >> belive we have 22.04 image aviable is it installbale there or do >> we have debian bookworm images we can use to add a non voting tox >> py311 job to the relevent project repos? > > Not to my knowledge, no. Ubuntu inherits most of their packages from > Debian, which has only just added a Python 3.11 pre-release, so it > will take time to end up even in Ubuntu under development (Ubuntu > Kinetic which is slated to become 22.10 still only has python3.10 > packages for the moment). It's probable they'll backport a > python3.11 package to Jammy once available, though there's no > guarantee, and based on historical backports it probably won't be > until upstream 3.11.1 is tagged at the very earliest. > > Keep in mind that what Debian has at the moment is a package of > Python 3.11.0b4, since 3.11.0 isn't even scheduled for an upstream > release until October (two days before we're planning to release > OpenStack Zed). Further, it's not even in Debian bookworm yet, and > it's hard to predict how soon it will be able to transition out of > unstable either. > > Let's be clear, what's being asked here is that OpenStack not just > test against the newest available Python release, but in fact to > continually test against pre-releases of the next Python while it's > still being developed. While I understand that this would be nice, I > hardly think it's a reasonable thing to expect. We have a hard > enough time just keeping up with actual releases of Python which are > current at the time we start a development cycle. Big ++ on this last point. I think right now the most important thing you can do to ensure the transition to python3.11 goes as smoothly as possible is to shore up and improve the current python3.10 testing. > -- > Jeremy Stanley > > Attachments: > * signature.asc From ozzzo at yahoo.com Thu Jul 14 20:22:55 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Thu, 14 Jul 2022 20:22:55 +0000 (UTC) Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com> <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> Message-ID: <702797503.1744449.1657830175373@mail.yahoo.com> Fantastic! Thank you for stepping up Dale! On Tuesday, July 12, 2022, 05:30:59 PM EDT, Dale Smith wrote: Hi gmann and Albert, I'd like to put my hand up for PTL of Adjutant if you are unable, Albert. Catalyst Cloud continue to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of Adjutant codebase alongside Adrian Turjak in 2015/2016. 
cheers, Dale Smith On 12/07/22 08:10, Albert Braden wrote: Unfortunately I was not able to get permission to be Adjutant PTL. They didn't say no, but the decision makers are too busy to address the issue. As I settle into this new position, I am realizing that I don't have time to do it anyway, so I will have to regretfully agree to placing Adjutant on the "inactive" list. If circumstances change, I will ask about resurrecting the project. Albert On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote: ---- On Fri, 22 Apr 2022 15:53:37 -0500? Ghanshyam Mann wrote --- > Hi Braden, > > Please let us know about the status of your company's permission to maintain the project. > As we are in Zed cycle development and there is no one to maintain/lead this project we > need to start thinking about the next steps mentioned in the leaderless project etherpad > Hi Braden, We have not heard back from you if you can help in maintaining the Adjutant. As it has no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project list - https://review.opendev.org/c/openstack/governance/+/849153/1 -gmann > - https://etherpad.opendev.org/p/zed-leaderless > > -gmann > >? ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- >? >? ? ? ? ? ? ? ? I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. >? >? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:? ? ? ? ? ? ? ? ? ? ? ? ? ? ? ? >? >? ? ? ? ? ? ? ? >? >? ? ? ? ? ? ? ? Hi, >? > >? > After last PTL elections [1] Adjutant project don't have any PTL. It also didn't had PTL in the Yoga cycle already. >? > So this is call for maintainters for Adjutant. If You are using it or interested in it, and if You are willing to help maintaining this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can talk possibilities to make someone a PTL of the project or going with this project to the Distributed Project Leadership [2] model. >? > >? > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html >? > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html >? > >? > -- >? > Slawek Kaplonski >? > Principal Software Engineer >? > Red Hat? ? ? ? ? ? ? ? ? ? ? ? ? ? > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jul 14 21:45:48 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jul 2022 21:45:48 +0000 Subject: [tc] August 2022 OpenInfra Board Sync In-Reply-To: <20220630142207.rwtyc3apyhd2gyjv@yuggoth.org> References: <20220630142207.rwtyc3apyhd2gyjv@yuggoth.org> Message-ID: <20220714214548.ywqabxb2gg25qfb2@yuggoth.org> On 2022-06-30 14:22:08 +0000 (+0000), Jeremy Stanley wrote: [...] > I've also created a Framadate poll has in order to identify a few > preferred dates and times. Responses must be submitted by Friday, > 2022-07-15, in order to provide the board members with time to pick > from the available options. The poll can be found here: > https://framadate.org/atdFRM8YeUtauSgC [...] If you're interested in participating, please remember to mark your preferred/available times at the above URL. 
I plan on closing it tomorrow so I can try to pick a few convenient dates and times to suggest to the OpenInfra Board of Directors. Thanks again!
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From mdulko at redhat.com  Fri Jul 15 09:17:48 2022
From: mdulko at redhat.com (Michał Dulko)
Date: Fri, 15 Jul 2022 11:17:48 +0200
Subject: [kuryr] Proposal to clean up Kuryr core reviewers
In-Reply-To: 
References: 
Message-ID: 

On Thu, 2022-07-14 at 18:31 +0200, Maysa De Macedo Souza wrote:
> Hello,
>
> We went through the list of current Kuryr core reviewers on the last
> PTG session and we noticed a couple of people that are not active
> Kuryr contributors anymore. I would like to propose removing the
> following contributors from the Kuryr core team:
>
> - Irena Berezovsky
> - Gal Sagie
> - Liping Mao
>
> I take this opportunity to thank all of them for their contributions
> to the Kuryr project.
>
> I will wait one week for any feedback before proceeding with the
> removal.

Thanks for raising this, Maysa. I believe it makes sense to keep the list of core reviewers up to date.

> Thank you,
> Maysa Macedo
>

From zigo at debian.org  Fri Jul 15 09:56:25 2022
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 15 Jul 2022 11:56:25 +0200
Subject: [all] Debian unstable has Python 3.11: please help support it.
In-Reply-To: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com>
References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com>
Message-ID: 

On 7/14/22 16:01, Sean Mooney wrote:
> do we currently have 3.11 available in any of the ci images? i believe we have 22.04 images available, is it installable there
> or do we have debian bookworm images we can use to add a non voting tox py311 job to the relevant project repos?

Hi,

Currently, we only have Python 3.11 beta 4 (ie: 3.11.0~b4-1) available in Debian Unstable. It won't be available in Bookworm until the python 3.10 -> 3.11 transition is over in Debian Unstable. During this process, Python 3.11 will only be an available Python version, but not the default. It will then become the default Python 3, and then Python 3.10 will be removed from Unstable. THEN Python 3.11 will fully be the Bookworm version. This will probably take a few months.

FYI, I very much know the patches will be done on a best effort basis only. I'm fine with that, and I'm used to discussing it with the community, and doing backports of patches that land in master. My mail was just a call to the community so that we keep in mind that it's coming. I have no idea what the breakages will be (yet), but I'm looking forward to figuring it out. Over the years, I kind of have fun doing so, even if I still think breaking the world every few months is a terrible idea.

In a more general way, I am convinced that it's always best for all of us if we can find a way to test with the latest everything, including the interpreter. Waiting for Ubuntu to have the latest interpreter is IMO broken by design, because the Python version transition always happens in Debian Unstable first (and is made by the same person that maintains the Python interpreter in both Debian and Ubuntu: Matthias Klose, aka doko).

Not only for the interpreter: if we could find a way to test things in Debian Unstable, always, as non-voting jobs, we would see the failures early. I'd love it if we had such a non-voting job, one that would also use the latest packages from PyPI, just so that we could at least know what will happen in the near future.

Your thoughts, everyone?

Cheers,

Thomas Goirand (zigo)
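To make the idea concrete, such a job could look roughly like the sketch below. This is purely illustrative: the nodeset name is made up, since (as noted in this thread) no Debian Unstable images are built in the CI today.

# Hypothetical non-voting tox job on a (non-existent) Debian Unstable nodeset:
- job:
    name: openstack-tox-py311-debian-unstable
    parent: openstack-tox
    voting: false
    nodeset: debian-unstable
    vars:
      tox_envlist: py311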
From zigo at debian.org  Fri Jul 15 10:05:31 2022
From: zigo at debian.org (Thomas Goirand)
Date: Fri, 15 Jul 2022 12:05:31 +0200
Subject: [all] Debian unstable has Python 3.11: please help support it.
In-Reply-To: <20220714143048.gxznifh7oeaaqldi@yuggoth.org>
References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org>
Message-ID: <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org>

On 7/14/22 16:30, Jeremy Stanley wrote:
> Let's be clear, what's being asked here is that OpenStack not just
> test against the newest available Python release, but in fact to
> continually test against pre-releases of the next Python while it's
> still being developed.

I'm not asking for that. :)

All I'm asking is that, when Python RC releases are out and I report a bug, the community has the intention to fix it as early as possible, at least in master (and maybe help with backports if it's very tricky: I can manage trivial backporting by myself). That's enough for me, really (at least it has been enough in the past...).

> We have a hard
> enough time just keeping up with actual releases of Python which are
> current at the time we start a development cycle.

Yeah, though it'd be nice if we could have the latest interpreter in use in Unstable for a non-voting job, starting when the interpreter is released (or at least when the first RCs are out). We discussed this already, didn't we? I know it's a "would be nice" thing and that nobody has time to work on this... :/

Cheers,

Thomas Goirand (zigo)

From bence.romsics at gmail.com  Fri Jul 15 10:51:21 2022
From: bence.romsics at gmail.com (Bence Romsics)
Date: Fri, 15 Jul 2022 12:51:21 +0200
Subject: [neutron] change of API performance from Pike to Yoga
In-Reply-To: 
References: 
Message-ID: 

Hi,

Uploaded the same content to github for long term storage:
https://github.com/rubasov/neutron-rally

-- 
Bence

From fungi at yuggoth.org  Fri Jul 15 11:50:49 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 15 Jul 2022 11:50:49 +0000
Subject: [all] Debian unstable has Python 3.11: please help support it.
In-Reply-To: 
References: 
Message-ID: <20220715115049.3casmovqq3qgq2ag@yuggoth.org>

On 2022-07-15 11:56:25 +0200 (+0200), Thomas Goirand wrote:
[...]
> Not only for the interpreter: if we could find a way to test things in
> Debian Unstable, always, as non-voting jobs, we would see the failures
> early. I'd love it if we had such a non-voting job, one that would also
> use the latest packages from PyPI, just so that we could at least know
> what will happen in the near future.
>
> Your thoughts, everyone?

My thought is that there is some irony in the timing of your question, since just yesterday[*] the OpenStack Technical Committee seems to have reached a consensus that CentOS is no longer a stable enough platform for pre-merge testing. Not necessarily anything to do with running OpenStack on it specifically, but just that generally it's the place where minimally-tested things go to see if there are any problems with them before they wind up in RHEL. That's a pretty close parallel to Debian's unstable/testing distributions as the place where problems get worked out before they get into the next stable release.

Taking things from an OpenDev Collaboratory perspective, we've struggled to keep images for frequently-changing distros like Fedora, Gentoo, or OpenSUSE Tumbleweed working at all, because things frequently change which break our ability to build or boot those images. More static distributions like Debian stable, Ubuntu LTS, or CentOS (before it became Stream) have a much less frequent update cycle and so are far easier for us to plan for and stay on top of.
That's a pretty close parallel to Debian's unstable/testing distributions as the place where problems get worked out before they get into the next stable release. From an OpenDev Collaboratory perspective, we've struggled to keep images for frequently-changing distros like Fedora, Gentoo, or OpenSUSE Tumbleweed working at all, because things frequently change in ways that break our ability to build or boot those images. More static distributions like Debian stable, Ubuntu LTS, or CentOS (before it became Stream) have a much less frequent update cycle and so are far easier for us to plan for and stay on top of. [*] https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-14-15.00.log.html#l-31 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jul 15 12:01:39 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jul 2022 12:01:39 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> Message-ID: <20220715120139.5vi563hsbznjimbk@yuggoth.org> On 2022-07-15 12:05:31 +0200 (+0200), Thomas Goirand wrote: [...] > All I'm asking is that when Python RC releases are out, and > I report a bug, the community intends to fix it as early > as possible, at least in master (and maybe help with backports if > it's very tricky: I can manage trivial backporting by myself). > That's enough for me, really (at least it has been enough in the > past...). [...] I don't think fixes have been refused in the past simply because they address a problem observed with a newer Python interpreter than we test on. Just be aware that when it comes to pre-merge testing of proposed changes for OpenStack, the time to decide what platforms and interpreter versions we'll test with is at the end of the previous cycle. For Zed that's Python 3.8 and 3.9, though projects are encouraged to also try 3.10 if they can (we got the ability to test that partway into the development cycle). We have to set these expectations before we begin work on a new version of OpenStack, so that we don't change our testing goals for developers while they're in the middle of trying to write the software. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From swogatpradhan22 at gmail.com Thu Jul 14 20:47:33 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 15 Jul 2022 02:17:33 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: I was facing a similar kind of issue; the solution described at https://bugzilla.redhat.com/show_bug.cgi?id=2089442 helped me fix it. Also make sure the CN that you will use is reachable from the undercloud (though the script should perhaps take care of that). Also please follow Mr. Tathe's mail to add the CN first. With regards Swogat Pradhan On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe wrote: > Hi Lokendra, > > The CN field is missing. Can you add that and generate the certificate > again?
> > CN=ipaddress > > Also add dns.1=ipaddress under alt_names for precaution. > > Vikarna > > On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, > wrote: > >> HI Vikarna, >> Thanks for the inputs. >> I am note able to access any tabs in GUI. >> [image: image.png] >> >> to re-state, we are failing at the time of deployment at step4 : >> >> >> PLAY [External deployment step 4] >> ********************************************** >> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >> TASK | External deployment step 4 >> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >> OK | External deployment step 4 | undercloud -> localhost | result={ >> "changed": false, >> "msg": "Use --start-at-task 'External deployment step 4' to resume >> from this task" >> } >> [WARNING]: ('undercloud -> localhost', >> '525400ae-089b-870a-fab6-0000000000d7') >> missing from stats >> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >> INCLUDED | >> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >> | undercloud >> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >> TASK | Clean up legacy Cinder keystone catalog entries >> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:24.204562 | 2.48s >> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >> item={'service_name': 'cinderv3', 'service_type': 'volume'} >> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:26.122584 | 4.40s >> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:26.124296 | 4.40s >> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >> TASK | Manage Keystone resources for OpenStack services >> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >> TIMING | Manage Keystone resources for OpenStack services | undercloud | >> 0:11:26.169842 | 0.03s >> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >> TASK | Gather variables for each operating system >> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >> TIMING | tripleo_keystone_resources : Gather variables for each operating >> system | undercloud | 0:11:26.253383 | 0.04s >> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >> TASK | Create Keystone Admin resources >> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >> undercloud | 0:11:26.299608 | 0.03s >> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >> INCLUDED | >> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >> undercloud >> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >> TASK | Create default domain >> 2022-07-13 21:35:29.343399 | 
525400ae-089b-870a-fab6-0000000072ad | >> OK | Create default domain | undercloud >> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >> TIMING | tripleo_keystone_resources : Create default domain | undercloud | >> 0:11:28.437360 | 2.09s >> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >> TASK | Create admin and service projects >> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >> TIMING | tripleo_keystone_resources : Create admin and service projects | >> undercloud | 0:11:28.483468 | 0.03s >> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >> INCLUDED | >> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >> undercloud >> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >> TASK | Async creation of Keystone project >> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >> CHANGED | Async creation of Keystone project | undercloud | item=admin >> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.238078 | 0.72s >> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >> CHANGED | Async creation of Keystone project | undercloud | item=service >> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.586587 | 1.06s >> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.587916 | 1.07s >> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >> TASK | Check Keystone project status >> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >> WAITING | Check Keystone project status | undercloud | 30 retries left >> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >> OK | Check Keystone project status | undercloud | item=admin >> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.260666 | 5.66s >> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >> OK | Check Keystone project status | undercloud | item=service >> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.494729 | 5.89s >> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.498771 | 5.89s >> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >> TASK | Create admin role >> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >> OK | Create admin role | undercloud >> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >> 0:11:37.725949 | 2.20s >> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >> TASK | Create _member_ role >> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >> SKIPPED | Create _member_ role | undercloud >> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >> 0:11:37.783369 | 
0.04s >> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >> TASK | Create admin user >> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >> CHANGED | Create admin user | undercloud >> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >> 0:11:41.145472 | 3.34s >> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >> TASK | Assign admin role to admin project for admin user >> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >> OK | Assign admin role to admin project for admin user | undercloud >> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >> TIMING | tripleo_keystone_resources : Assign admin role to admin project >> for admin user | undercloud | 0:11:44.288848 | 3.13s >> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >> TASK | Assign _member_ role to admin project for admin user >> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >> SKIPPED | Assign _member_ role to admin project for admin user | undercloud >> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >> TIMING | tripleo_keystone_resources : Assign _member_ role to admin project >> for admin user | undercloud | 0:11:44.346479 | 0.04s >> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >> TASK | Create identity service >> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >> OK | Create identity service | undercloud >> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >> TIMING | tripleo_keystone_resources : Create identity service | undercloud >> | 0:11:46.022362 | 1.66s >> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >> TASK | Create identity public endpoint >> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >> OK | Create identity public endpoint | undercloud >> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >> TIMING | tripleo_keystone_resources : Create identity public endpoint | >> undercloud | 0:11:48.233349 | 2.19s >> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >> TASK | Create identity internal endpoint >> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >> FATAL | Create identity internal endpoint | undercloud | error={"changed": >> false, "extra_data": {"data": null, "details": "The request you have made >> requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >> The request you have made requires authentication."} >> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >> undercloud | 0:11:50.660654 | 2.41s >> >> PLAY RECAP >> ********************************************************************* >> localhost : ok=1 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> 
overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=39 changed=7 unreachable=0 >> failed=1 skipped=6 rescued=0 ignored=0 >> >> Also: >> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >> [req] >> default_bits = 2048 >> prompt = no >> default_md = sha256 >> distinguished_name = dn >> [dn] >> C=IN >> ST=UTTAR PRADESH >> L=NOIDA >> O=HSC >> OU=HSC >> emailAddress=demo at demo.com >> >> v3.ext: >> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >> authorityKeyIdentifier=keyid,issuer >> basicConstraints=CA:FALSE >> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >> dataEncipherment >> subjectAltName = @alt_names >> [alt_names] >> IP.1=fd00:fd00:fd00:9900::81 >> >> Using these files we create the other certificates. >> Please check and let me know in case we need anything else. >> >> >> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe >> wrote: >> >>> Hi Lokendra, >>> >>> Are you able to access all the tabs in the OpenStack dashboard without >>> any error? If not, please retry generating the certificate. Also, share the >>> openssl.cnf or server.cnf. >>> >>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Team, >>>> Any input on this case raised? >>>> >>>> Thanks, >>>> Lokendra >>>> >>>> >>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Shephard/Swogat, >>>>> I tried changing the setting as suggested and it looks like it has >>>>> failed at step 4 with error: >>>>> >>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>>> 0:24:47.736198 | 2.21s >>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>> TASK | Create identity internal endpoint >>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>> FATAL | Create identity internal endpoint | undercloud | >>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>> request you have made requires authentication.", "response": >>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>> The request you have made requires authentication."} >>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>> >>>>> Checking further the endpoint list: >>>>> I see only one endpoint for keystone is getting created.
>>>>> >>>>> DeprecationWarning >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> | ID | Region | Service Name | >>>>> Service Type | Enabled | Interface | URL >>>>> | >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>> identity | True | admin | http://30.30.30.173:35357 >>>>> | >>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>> | >>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>> | >>>>> >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> >>>>> >>>>> It looks like something related to SSL; we have also verified that >>>>> the GUI login screen shows that the certificates are applied. >>>>> Exploring more in the logs; meanwhile, any suggestions or known observations >>>>> would be of great help. >>>>> Thanks again for the support. >>>>> >>>>> Best Regards, >>>>> Lokendra >>>>> >>>>> >>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> I had faced a similar kind of issue. For an IP-based setup you need to >>>>>> specify the domain name as the IP that you are going to use; this error is >>>>>> showing up because the SSL is IP-based but the FQDN seems to be >>>>>> undercloud.com or overcloud.example.com. >>>>>> I think for the undercloud you can change the undercloud.conf. >>>>>> >>>>>> And will it work if we set the CloudDomain parameter to the IP >>>>>> address for the overcloud? Because it seems he has not specified the >>>>>> CloudDomain parameter, and overcloud.example.com is the default >>>>>> domain. >>>>>> >>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> What is the domain name you have specified in the undercloud.conf >>>>>>> file? >>>>>>> And what is the FQDN used for the generation of the SSL cert?
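Pulling Vikarna's earlier suggestion together with the server.csr.cnf and v3.ext shown above, a [dn] section that includes a CN, plus an alt_names block carrying both an IP and a DNS entry, might look roughly like the following. This is only a sketch reusing the thread's placeholder values (the IPv6 public VIP and the demo email), not a verified working configuration:

[dn]
C=IN
ST=UTTAR PRADESH
L=NOIDA
O=HSC
OU=HSC
CN=fd00:fd00:fd00:9900::81
emailAddress=demo at demo.com

[alt_names]
IP.1=fd00:fd00:fd00:9900::81
DNS.1=fd00:fd00:fd00:9900::81

The certificate would then need to be regenerated from these files and re-injected into the enable-tls.yaml and inject-trust-anchor.yaml environments before redeploying.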
>>>>>>> >>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Team, >>>>>>>> We were trying to install overcloud with SSL enabled for which the >>>>>>>> UC is installed, but OC install is getting failed at step 4: >>>>>>>> >>>>>>>> ERROR >>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>> server_hostname)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>> (most recent call last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>> send\n timeout=timeout\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>> last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>> in request\n resp = self.send(prep, 
**send_kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>> last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>> authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>> authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>> in _send_request\n raise >>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>> run_globals)\n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 185, in \n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 181, in main\n File >>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>> line 407, in __call__\n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 141, in run\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>> File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>> max_version='3.latest')\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>> **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>> **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>> self._do_create_plugin(session)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:01.271914 | 2.47s >>>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:01.273659 | 2.47s >>>>>>>> >>>>>>>> PLAY RECAP >>>>>>>> ********************************************************************* >>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> >>>>>>>> >>>>>>>> in the deploy.sh: >>>>>>>> >>>>>>>> openstack overcloud deploy --templates \ >>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>> --baremetal-deployment >>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>> --network-config \ >>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>> \ >>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>> 
>>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>> modifications: >>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>> Passed as is in the defaults. >>>>>>>> enable-tls.yaml: >>>>>>>> >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # This file was created automatically by the sample environment >>>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>> instead >>>>>>>> # of the original, if any customizations are needed. >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>> # description: | >>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>> deployments. >>>>>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>>>>> # environments must also be used. >>>>>>>> parameter_defaults: >>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>> # Type: boolean >>>>>>>> HorizonSecureCookies: True >>>>>>>> >>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>> services in the public network. >>>>>>>> # Type: string >>>>>>>> PublicTLSCAFile: >>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>> >>>>>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>>>>> # Type: string >>>>>>>> SSLRootCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> >>>>>>>> SSLCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>>>>> # Type: string >>>>>>>> SSLIntermediateCertificate: '' >>>>>>>> >>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>> # Type: string >>>>>>>> SSLKey: | >>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END PRIVATE KEY----- >>>>>>>> >>>>>>>> # ****************************************************** >>>>>>>> # Static parameters - these are values that must be >>>>>>>> # included in the environment but should not be changed. >>>>>>>> # ****************************************************** >>>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>>> controller. >>>>>>>> # Type: string >>>>>>>> DeployedSSLCertificatePath: >>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>> >>>>>>>> # ********************* >>>>>>>> # End static parameters >>>>>>>> # ********************* >>>>>>>> >>>>>>>> inject-trust-anchor.yaml >>>>>>>> >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # This file was created automatically by the sample environment >>>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>> instead >>>>>>>> # of the original, if any customizations are needed. >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>> # description: | >>>>>>>> # When using an SSL certificate signed by a CA that is not in the >>>>>>>> default >>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>> certificate to >>>>>>>> # the overcloud nodes. 
>>>>>>>> parameter_defaults: >>>>>>>> # The content of a CA's SSL certificate file in PEM format. This >>>>>>>> is evaluated on the client side. >>>>>>>> # Mandatory. This parameter must be set by the user. >>>>>>>> # Type: string >>>>>>>> SSLRootCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> >>>>>>>> resource_registry: >>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> The procedure to create such files was followed using: >>>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) >>>>>>>> >>>>>>>> >>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed, >>>>>>>> IP-based certificate, without DNS*. >>>>>>>> >>>>>>>> Any idea around this error would be of great help. >>>>>>>> >>>>>>>> -- >>>>>>>> skype: lokendrarathour >>>>>>>> >>>>>>>> >>>>>>>> >>>>> >>>>> >>>>> >>>> >>>> -- >>>> >>> >> >> -- >> ~ Lokendra >> skype: lokendrarathour >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Fri Jul 15 12:44:22 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 15 Jul 2022 14:44:22 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: > Not only for the interpreter: if we could find a way to test things in > Debian Unstable, always, as non-voting jobs, we would see the failures > early. I'd love it if we had such a non-voting job, one that would also use the > latest packages from PyPI, just so that we could at least know what will > happen in the near future. Well, we can have periodic and experimental, master-only jobs to test things on Debian unstable because it's always interesting to see the upcoming breakage (or better yet - be able to pinpoint it to a certain change happening in Debian unstable that caused it). The job would only utilise the interpreter and the helper binaries (like ovs) - all targets I can think of are capable only of using pip-installed deps and not Debian packages, so that part we cannot really cover reliably at all. If that makes sense to you, we can work towards that direction. The biggest issue will still be the bootability/usability of the infra image though. -yoctozepto From hberaud at redhat.com Fri Jul 15 13:52:15 2022 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jul 2022 15:52:15 +0200 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> References: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> Message-ID: Hello, Thank you to all the people who shared feedback. Because we haven't heard any objections for more than a week, I'll move forward to add Takashi to oslo core. Thank you, Takashi, again for your continuous great work! Cheers! On Tue, 12 Jul 2022 at 17:50, Stephen Finucane wrote: > On Thu, 2022-06-30 at 15:39 +0200, Herve Beraud wrote: > Hello everybody, > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member of the oslo core team.
> During the last months Takashi has been a significant contributor to the > oslo projects. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > +1 from me. It would be great to have tkajinam onboard. > > Stephen > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From hberaud at redhat.com Fri Jul 15 13:53:09 2022 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jul 2022 15:53:09 +0200 Subject: Propose to add Tobias Urdin as Tooz core reviewer In-Reply-To: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com> References: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com> Message-ID: Hello, Thank you to all the people who shared feedback. Because we haven't heard any objections for more than a week, I'll move forward to add Tobias to the tooz core members. Thank you, Tobias, again for your continuous great work! Cheers! On Tue, 12 Jul 2022 at 17:50, Stephen Finucane wrote: > On Thu, 2022-06-30 at 15:43 +0200, Herve Beraud wrote: > Hello everybody, > It is my pleasure to propose Tobias Urdin (tobias-urdin) as a new member > of the Tooz project core team. > > During the last months Tobias has been a significant contributor to the > Tooz project. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > +1 from me! > > Stephen > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jul 15 14:14:04 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 15 Jul 2022 15:14:04 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> On Fri, 2022-07-15 at 14:44 +0200, Radosław Piliszek wrote: > On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: > > Not only for the interpreter: if we could find a way to test things in > > Debian Unstable, always, as non-voting jobs, we would see the failures > > early. I'd love it if we had such a non-voting job, one that would also use the > > latest packages from PyPI, just so that we could at least know what will > > happen in the near future. > > Well, we can have periodic and experimental, master-only jobs to test > things on Debian unstable because it's always interesting to see the > upcoming breakage (or better yet - be able to pinpoint it to a certain > change happening in Debian unstable that caused it). > For the nova CI we have moved the CentOS 9 Stream jobs to the periodic-weekly pipeline and we are going to monitor them in our weekly meeting. I don't really have any objection to adding a Debian testing or Debian stable based job there too, be it in the form of a tempest job or a tox job. We don't really have the capacity, either in review bandwidth or CI resources, to have unstable distros in a voting or non-voting capacity on every patch, i.e. in the check and gate pipelines, but a weekly periodic job is probably doable. One thing we have to be careful of, however, is a concern I raised last cycle when extending 3.6 support.
Some of the features deprecated in 3.6 were dropped in 3.10, and more features that were deprecated in later releases are dropped in 3.11. While we can try to support the newer releases, 3.9 compatibility will be needed for a long time, so if 3.11 or 3.12 is fundamentally incompatible with 3.9 due to a library change etc., that will be problematic, since CentOS 9 derived distros will be on 3.9 for several years to come. I don't actually know the exact lifecycle rules for how often new CentOS/RHEL major releases happen, but it's usually around the .6 release of the current release that the .0 release of the next major version happens, so every 5 years or so. I really don't know the plans for RHEL 10, but for the RHEL 9 / CentOS 9 lifetime 3.9 will be the Python version used, so we need to balance fast-moving distros like Debian and slow-moving distros like CentOS and enable both. That might mean we will have to drop some deps, avoid some features, or utilise compatibility libs in some cases, similar to six, or the mock lib vs unittest.mock. So when fixing any 3.11 incompatibility we still need to maintain 3.8 compatibility for Zed and 3.9 compatibility in AA+. I would guess it will be the CC or DD release before we could consider dropping 3.9 support, but I think we could drop 3.8 in AA. > The job would > only utilise the interpreter and the helper binaries (like ovs) - all > targets I can think of are capable only of using pip-installed deps > and not Debian packages, so that part we cannot really cover reliably > at all. If that makes sense to you, we can work towards that > direction. The biggest issue will still be the bootability/usability > of the infra image though. > > -yoctozepto > From katonalala at gmail.com Fri Jul 15 14:59:54 2022 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 15 Jul 2022 16:59:54 +0200 Subject: [neutron] change of API performance from Pike to Yoga In-Reply-To: References: Message-ID: Thanks Bence, Really appreciated. Bence Romsics wrote (on Fri, 15 Jul 2022 at 13:02): > Hi, > > Uploaded the same content to GitHub for long-term storage: > > https://github.com/rubasov/neutron-rally > > -- > Bence > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri Jul 15 15:09:35 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 15 Jul 2022 17:09:35 +0200 Subject: [release] Release countdown for week R-11, Jul 18 - 22 Message-ID: <6129e077-53f0-37c0-0085-76d2a44a286d@est.tech> Development Focus ----------------- We are now past the Zed-2 milestone, and entering the last development phase of the cycle. Teams should be focused on implementing planned work for the cycle. Now is a good time to review those plans and reprioritize anything if needed based on what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to the end of the release cycle, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (August 26th, 2022). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (September 1st, 2022) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, September 1st, 2022.
Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before the RC1 deadline (September 12th, 2022) * Deliverables following the cycle-with-intermediary model can release as necessary, but in all cases before the Final RC deadline (September 29th, 2022) Upcoming Deadlines & Dates -------------------------- Non-client library freeze: August 26th, 2022 (R-6 week) Client library freeze: September 1st, 2022 (R-5 week) Zed-3 milestone: September 1st, 2022 (R-5 week) Előd Illés irc: elodilles From gmann at ghanshyammann.com Fri Jul 15 17:34:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Jul 2022 12:34:33 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 15 July 2022: Reading: 5 min Message-ID: <18202ed28dd.11ffe868e170056.2005547301817008828@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on 14 July. Most of the meeting discussions are summarized in this email. Full meeting logs are available at https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-14-15.00.log.html * The next TC weekly meeting will be on Thursday, 21 July at 15:00 UTC; feel free to add topics to the agenda[1] by 20 July. 2. What we completed this week: ========================= * We continued the CentOS Stream testing discussion and, considering all the points, agreed to make the CentOS Stream jobs periodic but keep CentOS Stream in the testing runtime. We will monitor and debug the jobs, and report failures to the CentOS Stream team (a sketch of a periodic job assignment follows at the end of this summary). 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * The Zed tracker etherpad includes the TC working items[2]; two are completed and the other items are in progress. Open Reviews ----------------- * Seven open reviews for ongoing activities[3]. Consistent and Secure Default RBAC ------------------------------------------- We have had a good amount of discussion and review on the goal document updates[4], and I have updated the patch to resolve the review comments. 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[5]. Feel free to check it and provide feedback. Zed cycle Leaderless projects ---------------------------------- Dale Smith volunteered to be PTL for the Adjutant project[6] Fixing Zuul config error ---------------------------- Requesting projects with Zuul config errors to look into those and fix them, which should not take much time[7][8]. Project updates ------------------- * Add charmed k8s operators to OpenStack Charms[9] * Adding Skyline as Emerging Technology[10] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to the TC, you can reach out to us in multiple ways: 1. Email: you can send email with the tag [tc] on the openstack-discuss ML[11]. 2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15:00 UTC[12] 3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
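As a rough illustration of the periodic approach mentioned in item 2 above (and of what nova already does with its periodic-weekly pipeline, per the discussion elsewhere in this digest), moving a job out of check/gate and into a periodic pipeline in a project's Zuul configuration looks roughly like this; the job name here is hypothetical:

# .zuul.yaml (sketch)
- project:
    periodic-weekly:
      jobs:
        - tempest-full-centos-9-stream

Jobs listed only under a periodic pipeline run on a schedule instead of on every proposed change, so breakage still surfaces without blocking development.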
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/847413 [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/847418 [5] https://review.opendev.org/c/openstack/governance/+/836888 [6] https://review.opendev.org/c/openstack/governance/+/849606 [7] https://etherpad.opendev.org/p/zuul-config-error-openstack [8] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html [9] https://review.opendev.org/c/openstack/governance/+/849997 [10] https://review.opendev.org/c/openstack/governance/+/849155 [11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From kdhall at binghamton.edu Fri Jul 15 21:10:53 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Fri, 15 Jul 2022 17:10:53 -0400 Subject: [OpenStack-Ansible][Nova] OSA install Yoga on Debian Bullseye Backports Message-ID: Hello, I've just worked most of the way through a fresh install of Yoga on 5 Debian Bullseye systems. All systems are updated to include the Bullseye backports. The documentation doesn't mention backports, but I always install Debian this way - almost without thinking. The specific config is based on openstack-user-config.yaml.prod.example with 3 infrastructure hosts, two compute hosts, and cinder/glance running on the infrastructure hosts as in the prod example file. First, one surprise: for the version of GlusterFS in Bullseye Backports, /usr/sbin/gluster has been moved to a separate package - glusterfs-cli. I installed this manually in the repo containers to get through setup-infrastructure.yml. In setup-openstack.yml, I'm stopped at "TASK [os_nova : Run nova-status upgrade check to validate a healthy configuration]". "nova-status upgrade check" is failing. "nova-manage cell_v2 list_hosts" is not showing any hosts. Oh, and there are warnings about eventlet monkey patching and urllib3. So I'm not quite sure how to dig into this. The nova-api container seems to be running on the infra hosts, and nova-compute.service is up on both compute hosts, although there are warnings about "Timed out waiting for nova-conductor". The nova-api containers are able to ping the compute hosts on br-mgmt. I do have to wonder if this has anything to do with being upgraded to backports. Any hints on how to analyse this (or how to fix it)? Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Jul 15 21:35:12 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 15 Jul 2022 14:35:12 -0700 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> Message-ID: <06bc311c-3b7d-4ee5-bb2b-abf0ba8b516c@www.fastmail.com> On Fri, Jul 15, 2022, at 7:14 AM, Sean Mooney wrote: > On Fri, 2022-07-15 at 14:44 +0200, Radosław Piliszek wrote: >> On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: >> > Not only for the interpreter: if we could find a way to test things in >> > Debian Unstable, always, as non-voting jobs, we would see the failures >> > early.
I'd love it if we had such a non-voting job, one that would also use the >> > latest packages from PyPI, just so that we could at least know what will >> > happen in the near future. >> >> Well, we can have periodic and experimental, master-only jobs to test >> things on Debian unstable because it's always interesting to see the >> upcoming breakage (or better yet - be able to pinpoint it to a certain >> change happening in Debian unstable that caused it). >> > For the nova CI we have moved the CentOS 9 Stream jobs to the > periodic-weekly pipeline and > we are going to monitor them in our weekly meeting. > I don't really have any objection to adding a Debian testing or Debian > stable based job there too, > be it in the form of a tempest job or a tox job. > > We don't really have the capacity, either in review bandwidth or CI resources, > to have unstable > distros in a voting or non-voting capacity on every patch, i.e. in the check and > gate pipelines, > but a weekly periodic job is probably doable. > > > One thing we have to be careful of, however, is a concern I raised last > cycle when extending 3.6 support. > > Some of the features deprecated in 3.6 were dropped in 3.10, and more features > that were deprecated in later releases > are dropped in 3.11. While we can try to support the newer releases, 3.9 > compatibility will be needed for a long > time, so if 3.11 or 3.12 is fundamentally incompatible with 3.9 due to > a library change etc., that will be problematic, > since CentOS 9 derived distros will be on 3.9 for several years to come. Removals for 3.10 are captured here: https://docs.python.org/3/whatsnew/3.10.html#removed and for 3.11 at https://docs.python.org/3.11/whatsnew/3.11.html#removed if anyone is curious. Considering the number of projects that appear to be currently running python3.8 and 3.10 jobs successfully, the major problem here is likely going to be if/when our dependencies decide to stop supporting older Pythons. Oftentimes we can get away with simply having different requirements for different versions of Python, but that may not always work for each dependency. > > I don't actually know the exact lifecycle rules for how often new > CentOS/RHEL major releases happen, but it's usually around the .6 release > of the current > release that the .0 release of the next major version happens, so every > 5 years or so. I really don't know the plans for RHEL 10, but > for the RHEL 9 / CentOS 9 lifetime 3.9 will be the Python version used, so we > need to balance fast-moving distros like Debian and slow-moving distros > like > CentOS and enable both. That might mean we will have to drop some deps, > avoid some features, or utilise compatibility libs in some cases, similar > to > six, or the mock lib vs unittest.mock. > > So when fixing any 3.11 incompatibility we still need to maintain 3.8 > compatibility for Zed and 3.9 compatibility in AA+. > > I would guess it will be the CC or DD release before we could consider > dropping 3.9 support, but I think we could drop 3.8 in AA. Python 3.9 EOL is October 2025. With 3.6 we saw many libraries maintain compatibility until the EOL for that version. I'm hopeful that trend will continue and this will largely be a non-issue for us. > > >> The job would >> only utilise the interpreter and the helper binaries (like ovs) - all >> targets I can think of are capable only of using pip-installed deps >> and not Debian packages, so that part we cannot really cover reliably >> at all. If that makes sense to you, we can work towards that >> direction.
The biggest issue will still be the bootability/usability >> of the infra image though. >> >> -yoctozepto >> From the.wade.albright at gmail.com Fri Jul 15 21:35:57 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Fri, 15 Jul 2022 14:35:57 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node Message-ID: Hi, I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. Or if the problem is unique to my setup. I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. Now I need to change the redfish password on the host. So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password. Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after a while it still didn't work. I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either. Only a conductor restart seems to make the new password work. We are running Xena, using an RPM installation on Oracle Linux 8.5. Thanks in advance for any help with this issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Jul 15 22:23:05 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 15 Jul 2022 22:23:05 +0000 Subject: [placement] running out of VCPU resource In-Reply-To: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com> References: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com> Message-ID: <20220715222305.Horde.rJpH9KqD93oQ0aAofrnqtPq@webmail.nde.ag> Hi, we were facing the same issue and my colleague tracked it down: https://serverfault.com/questions/1064579/openstack-only-building-one-vm-per-machine-in-cluster-then-runs-out-of-resource We have a customized fix for it which works nicely, but it would surely help to get this fixed in general as it seems to be a recurring issue. Quoting Przemyslaw Basa: > Hi, > > Well, I think I figured it out. Following the Xena deployment > instructions, MariaDB was installed in version 10.6.5, and it seems to > be some kind of bug in this version. Upgrading to 10.6.8 fixed this > particular issue for me. > > I've checked some older and newer versions (10.5.6, 10.8.3) and the > problematic query behaves there like in 10.6.8.
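For anyone who wants to confirm which MariaDB build their own placement database is running before digging further, a quick check along these lines should do; adjust the client invocation and credentials to your deployment:

% mysql -u root -p -e "SELECT VERSION();"

If that reports 10.6.5, the tests below suggest that upgrading past it is worth trying first.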
>
> Here's how I've done my tests if someone is interested:
>
> % docker run --rm --detach --name mariadb-10.6.5 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.5
> % docker run --rm --detach --name mariadb-10.6.8 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.8
>
> % docker exec -i mariadb-10.6.5 mysql -u root -ptest < tables_dump.sql
> % docker exec -i mariadb-10.6.8 mysql -u root -ptest < tables_dump.sql
>
> % docker exec -i mariadb-10.6.5 mysql -u root -ptest -t test < test.sql
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
> | id | uuid                                 | generation | resource_class_id | total   | reserved | allocation_ratio | used  |
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 0 |     128 |        0 |                2 | 13318 |
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 1 | 1031723 |     2048 |                1 |  NULL |
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 2 |  901965 |        2 |                1 |  NULL |
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
>
> % docker exec -i mariadb-10.6.8 mysql -u root -ptest -t test < test.sql
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
> | id | uuid                                 | generation | resource_class_id | total   | reserved | allocation_ratio | used  |
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 0 |     128 |        0 |                2 |     5 |
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 1 | 1031723 |     2048 |                1 | 13312 |
> |  5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da |         50 |                 2 |  901965 |        2 |                1 |     1 |
> +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+
>
> % cat tables_dump.sql
> create database test;
> connect test;
>
> CREATE TABLE `allocations` (
>   `created_at` datetime DEFAULT NULL,
>   `updated_at` datetime DEFAULT NULL,
>   `id` int(11) NOT NULL AUTO_INCREMENT,
>   `resource_provider_id` int(11) NOT NULL,
>   `consumer_id` varchar(36) NOT NULL,
>   `resource_class_id` int(11) NOT NULL,
>   `used` int(11) NOT NULL,
>   PRIMARY KEY (`id`),
>   KEY `allocations_resource_provider_class_used_idx` (`resource_provider_id`,`resource_class_id`,`used`),
>   KEY `allocations_resource_class_id_idx` (`resource_class_id`),
>   KEY `allocations_consumer_id_idx` (`consumer_id`)
> ) ENGINE=InnoDB AUTO_INCREMENT=547 DEFAULT CHARSET=utf8mb3;
>
> CREATE TABLE `inventories` (
>   `created_at` datetime DEFAULT NULL,
>   `updated_at` datetime DEFAULT NULL,
>   `id` int(11) NOT NULL AUTO_INCREMENT,
>   `resource_provider_id` int(11) NOT NULL,
>   `resource_class_id` int(11) NOT NULL,
>   `total` int(11) NOT NULL,
>   `reserved` int(11) NOT NULL,
>   `min_unit` int(11) NOT NULL,
>   `max_unit` int(11) NOT NULL,
>   `step_size` int(11) NOT NULL,
>   `allocation_ratio` float NOT NULL,
>   PRIMARY KEY (`id`),
>   UNIQUE KEY `uniq_inventories0resource_provider_resource_class` (`resource_provider_id`,`resource_class_id`),
>   KEY `inventories_resource_class_id_idx` (`resource_class_id`),
>   KEY `inventories_resource_provider_id_idx` (`resource_provider_id`),
>   KEY `inventories_resource_provider_resource_class_idx` (`resource_provider_id`,`resource_class_id`)
> ) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8mb3;
> CREATE TABLE `resource_providers` (
>   `created_at` datetime DEFAULT NULL,
>   `updated_at` datetime DEFAULT NULL,
>   `id` int(11) NOT NULL AUTO_INCREMENT,
>   `uuid` varchar(36) NOT NULL,
>   `name` varchar(200) DEFAULT NULL,
>   `generation` int(11) DEFAULT NULL,
>   `root_provider_id` int(11) DEFAULT NULL,
>   `parent_provider_id` int(11) DEFAULT NULL,
>   PRIMARY KEY (`id`),
>   UNIQUE KEY `uniq_resource_providers0uuid` (`uuid`),
>   UNIQUE KEY `uniq_resource_providers0name` (`name`),
>   KEY `resource_providers_name_idx` (`name`),
>   KEY `resource_providers_parent_provider_id_idx` (`parent_provider_id`),
>   KEY `resource_providers_root_provider_id_idx` (`root_provider_id`),
>   KEY `resource_providers_uuid_idx` (`uuid`),
>   CONSTRAINT `resource_providers_ibfk_1` FOREIGN KEY (`parent_provider_id`) REFERENCES `resource_providers` (`id`),
>   CONSTRAINT `resource_providers_ibfk_2` FOREIGN KEY (`root_provider_id`) REFERENCES `resource_providers` (`id`)
> ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb3;
>
> INSERT INTO `allocations` VALUES
>   ('2022-07-07 23:08:10',NULL,329,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',1,12288),
>   ('2022-07-07 23:08:10',NULL,332,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',0,4),
>   ('2022-07-08 06:26:28',NULL,335,4,'aec7aaea-10df-451b-b2ce-847099ee0110',1,2048),
>   ('2022-07-08 06:26:28',NULL,338,4,'aec7aaea-10df-451b-b2ce-847099ee0110',0,2),
>   ('2022-07-12 08:53:21',NULL,400,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',1,16384),
>   ('2022-07-12 08:53:21',NULL,403,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',0,2),
>   ('2022-07-14 08:24:27',NULL,538,5,'9681447d-57ec-45c7-af48-63be3c7201da',2,1),
>   ('2022-07-14 08:24:27',NULL,541,5,'9681447d-57ec-45c7-af48-63be3c7201da',1,1024),
>   ('2022-07-14 08:24:27',NULL,544,5,'9681447d-57ec-45c7-af48-63be3c7201da',0,1);
> INSERT INTO `resource_providers` VALUES
>   ('2022-07-04 11:59:49','2022-07-13 13:03:08',1,'6ac81bb4-50ef-4784-8a64-9031afeaaa9d','p-os-compute01.openstack.local',50,1,NULL),
>   ('2022-07-04 12:00:49','2022-07-13 13:03:07',4,'a324b3b9-f8c8-4279-bf63-a27163fcf792','g-os-compute01.openstack.local',42,4,NULL),
>   ('2022-07-04 12:03:57','2022-07-14 08:24:27',5,'16f620c0-8c6f-4984-8d58-e2c00d1b32da','t-os-compute01.openstack.local',50,5,NULL);
> INSERT INTO `inventories` VALUES
>   ('2022-07-04 11:59:50','2022-07-11 09:24:04',1,1,0,128,0,1,128,1,2),
>   ('2022-07-04 11:59:50','2022-07-11 09:24:04',4,1,1,1031723,2048,1,1031723,1,1),
>   ('2022-07-04 11:59:50','2022-07-11 09:24:04',7,1,2,901965,2,1,901965,1,1),
>   ('2022-07-04 12:01:53','2022-07-04 14:59:53',10,4,0,128,0,1,128,1,2),
>   ('2022-07-04 12:01:53','2022-07-04 14:59:53',13,4,1,1031723,2048,1,1031723,1,1),
>   ('2022-07-04 12:01:53','2022-07-04 14:59:53',16,4,2,901965,2,1,901965,1,1),
>   ('2022-07-04 12:03:57','2022-07-14 07:16:08',17,5,0,128,0,1,128,1,2),
>   ('2022-07-04 12:03:57','2022-07-14 07:09:11',20,5,1,1031723,2048,1,1031723,1,1),
>   ('2022-07-04 12:03:57','2022-07-14 07:09:11',23,5,2,901965,2,1,901965,1,1);
>
> % cat test.sql
> SELECT
>     rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total,
>     inv.reserved, inv.allocation_ratio, allocs.used
> FROM
>     resource_providers AS rp
>     JOIN inventories AS inv ON rp.id = inv.resource_provider_id
>     LEFT JOIN (
>         SELECT
>             resource_provider_id, resource_class_id, SUM(used) AS used
>         FROM
>             allocations
>         WHERE
>             resource_class_id IN (0, 1, 2)
>             AND resource_provider_id IN (5)
>         GROUP BY
>             resource_provider_id, resource_class_id
>     ) AS allocs ON
>         inv.resource_provider_id = allocs.resource_provider_id
>         AND inv.resource_class_id = allocs.resource_class_id
> WHERE
>     rp.id IN (5)
>     AND inv.resource_class_id IN (0, 1, 2);
>
> Regards,
> Przemyslaw Basa

From iurygregory at gmail.com  Fri Jul 15 21:24:36 2022
From: iurygregory at gmail.com (Iury Gregory)
Date: Fri, 15 Jul 2022 22:24:36 +0100
Subject: [ironic][xena] problems updating redfish_password for existing node
In-Reply-To:
References:
Message-ID:

Hi Wade,

If I understood correctly: you have a node already deployed, and you want
to change the redfish BMC password via Ironic. This is not possible.
Ironic uses the credentials to access the machine and execute the steps
necessary to provision it.
If you want to change the credentials used to access the BMC, you need to
access it directly and change them on the machine; after that you can
update the information in Ironic.

On Fri, Jul 15, 2022 at 10:52 PM Wade Albright <
the.wade.albright at gmail.com> wrote:

> Hi,
>
> I'm hitting a problem when trying to update the redfish_password for an
> existing node. I'm curious to know if anyone else has encountered this
> problem. I'm not sure if I'm just doing something wrong, if there is a
> bug, or if the problem is unique to my setup.
>
> I have a node already added into Ironic with all the driver details set,
> and things are working fine. I am able to run deployments.
>
> Now I need to change the redfish password on the host. So I update the
> password for redfish access on the host, then use an 'openstack baremetal
> node set --driver-info redfish_password=<new password>' command to set
> the new redfish_password.
>
> Once this has been done, deployment no longer works. I see redfish
> authentication errors in the logs and the operation fails. I waited a bit
> to see if there might just be a delay in updating the password, but after
> a while it still didn't work.
>
> I restarted the conductor, and after that things work fine again. So it
> seems like the password is cached somewhere. Is there a way to force the
> password to update? I even tried removing the redfish credentials and
> re-adding them, but that didn't work either. Only a conductor restart
> seems to make the new password work.
>
> We are running Xena, using an RPM installation on Oracle Linux 8.5.
>
> Thanks in advance for any help with this issue.
>

--
Att[]'s
Iury Gregory Melo Ferreira
MSc in Computer Science at UFCG
Ironic PTL
Senior Software Engineer at Red Hat Brazil
Social: https://www.linkedin.com/in/iurygregory
E-mail: iurygregory at gmail.com

From fungi at yuggoth.org  Sat Jul 16 01:52:10 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Sat, 16 Jul 2022 01:52:10 +0000
Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II
In-Reply-To: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org>
References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org>
Message-ID: <20220716015210.7pzcrwfyzcho6opc@yuggoth.org>

On 2022-07-09 13:26:36 +0000 (+0000), Jeremy Stanley wrote:
[...]
> Apparently, ansible-runner currently depends[3] on python-daemon,
> which still has a dependency on lockfile[4]. Our uses of
> ansible-runner seem to be pretty much limited to TripleO
> repositories (hence tagging them in the subject), so it's possible
> they could find an alternative to it and solve this dilemma.
> Optionally, we could try to help the ansible-runner or python-daemon
> maintainers with new implementations of the problem dependencies as
> a way out.
[...]
In the meantime, how does everyone feel about us going ahead and
removing the "openstackci" account from the maintainers list for
lockfile on PyPI? We haven't depended on it directly since 2015, and it
didn't come back into our indirect dependency set until 2018
(presumably that's when TripleO started using ansible-runner?). The
odds that we'll need to fix anything in it in the future are pretty
small at this point, and if we do, we're better off putting that effort
into helping the ansible-runner or python-daemon maintainers move off
of it instead.
--
Jeremy Stanley

From zigo at debian.org  Sat Jul 16 09:12:16 2022
From: zigo at debian.org (Thomas Goirand)
Date: Sat, 16 Jul 2022 11:12:16 +0200
Subject: Upgrading to a more recent version of jsonschema
In-Reply-To:
References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org>
 <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org>
 <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com>
 <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com>
 <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com>
 <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com>
 <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org>
 <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com>
 <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org>
 <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com>
Message-ID:

Hi there!

On 7/14/22 18:09, Dmitry Tantsur wrote:
> Ironic was not too bad either:
> https://review.opendev.org/c/openstack/ironic/+/849882
>
> Similar for Nova: https://review.opendev.org/c/openstack/nova/+/849881

Thanks, I was able to backport these fixes to the Yoga versions of
Ironic and Nova and uploaded them to Debian unstable (I'm currently
finishing the Ironic build; its unit tests already pass).

> > - sahara
> >
> I'll try to see what I can do to fix these; maybe some of the failures
> are unrelated (I haven't investigated yet).

Only Sahara is missing a fix now. I tracked it down to this:
https://github.com/openstack/sahara/blob/master/sahara/utils/api_validator.py#L177

the error being:

TypeError: __init__() got an unexpected keyword argument 'types'

It looks like "types" has gone away from the parent class. Does anyone
know what's going on, and what the replacement is? I first thought
"types" should simply become "type_checker", so I tried that, but it
didn't work...

Once Sahara is fixed, I'm done with all the OpenStack packages, and only
2 other Debian packages will need a fix (but I filed Debian bugs against
these packages, so we're good...).

Cheers,

Thomas Goirand (zigo)
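On the jsonschema question above: "type_checker" is indeed the
replacement, but it takes a TypeChecker instance rather than the dict of
Python types that the removed "types" keyword accepted (dropped in
jsonschema 4.0), which would explain why a plain rename didn't work. A
minimal sketch of the migration, assuming a validator extended from
Draft4Validator with a custom "array" type, roughly the way Sahara's
api_validator does it:

    import jsonschema
    from jsonschema import validators

    def is_array(checker, instance):
        # Accept tuples as well as lists for the JSON "array" type, as
        # the old types={"array": (list, tuple)} mapping did.
        return isinstance(instance, (list, tuple))

    # Pre-4.0 style, now rejected with the TypeError above:
    #   validators.extend(jsonschema.Draft4Validator,
    #                     types={"array": (list, tuple)})
    # Replacement: redefine the type on the draft's TYPE_CHECKER.
    type_checker = jsonschema.Draft4Validator.TYPE_CHECKER.redefine(
        "array", is_array)
    ApiValidator = validators.extend(jsonschema.Draft4Validator,
                                     type_checker=type_checker)

ApiValidator(schema).validate(data) then behaves as it did with the old
keyword. Whether this is exactly the fix Sahara merged is not
established here; it is just the documented migration path for the
removed argument.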
From the.wade.albright at gmail.com  Sat Jul 16 02:26:59 2022
From: the.wade.albright at gmail.com (Wade Albright)
Date: Fri, 15 Jul 2022 19:26:59 -0700
Subject: [ironic][xena] problems updating redfish_password for existing node
In-Reply-To:
References:
Message-ID:

Hi Iury,

Thanks for the reply. I am not trying to use Ironic to change the BMC
password. I am changing the password directly on the system, independently
of Ironic, and only after that do I change the password in Ironic. But the
change doesn't seem to take effect in Ironic, and any operations on the
node fail with a redfish authentication error. After restarting the
conductor, node operations work again.

On Fri, Jul 15, 2022 at 6:24 PM Iury Gregory <iurygregory at gmail.com>
wrote:

> Hi Wade,
>
> If I understood correctly: you have a node already deployed, and you
> want to change the redfish BMC password via Ironic. This is not
> possible. Ironic uses the credentials to access the machine and execute
> the steps necessary to provision it.
> If you want to change the credentia
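For anyone who lands on this thread with the same symptom, the sequence
Wade describes condenses to the sketch below. The node name is
hypothetical, and the conductor service name is an assumption (it
matches the usual name on RPM-based installs and may differ elsewhere):

    # Rotate the password on the BMC itself first, out of band, then
    # give Ironic the new value:
    openstack baremetal node set node-01 --driver-info redfish_password='<new password>'

    # Until the stale-credential caching described above is addressed,
    # restarting the conductor is what forces a fresh Redfish session:
    sudo systemctl restart openstack-ironic-conductor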