From massimo.sgaravatto at gmail.com Fri Jul 1 06:04:30 2022 From: massimo.sgaravatto at gmail.com (Massimo Sgaravatto) Date: Fri, 1 Jul 2022 08:04:30 +0200 Subject: [glance][ops] [nova] Disabling an image In-Reply-To: References: <259825a80b6cce7df2743acf6792ad4c598019ab.camel@redhat.com> Message-ID: Converting the image from public to private seems indeed a good idea. Thanks a lot for the hint! Cheers, Massimo On Thu, Jun 30, 2022 at 2:56 PM Sean Mooney wrote: > On Thu, 2022-06-30 at 14:37 +0200, Massimo Sgaravatto wrote: > > No: I really mean resize > i guess for resize we need to copy the backing file, which we presumably > do by redownloading the original image. it could technically be > copied from the source > host instead, but i think if you change the visibility rather than > blocking download, that would > hide it from people looking to create new vms with it in the image list > but allow it to continue > to be used by existing instances for rebuild and resize. > > > > On Thu, Jun 30, 2022 at 1:42 PM Sean Mooney wrote: > > > > > On Thu, 2022-06-30 at 10:09 +0200, Massimo Sgaravatto wrote: > > > > Dear all > > > > > > > > What is the blessed method to avoid using an image for new virtual > > > machines > > > > without causing problems for existing instances using that image? > > > > > > > > If I deactivate the image, I then have problems resizing instances > using > > > > that image [*]: it claims that image download is forbidden since the > > > image > > > > was deactivated > > > i think you mean rebuilding the instance, not resizing, right? > > > resize should not need the image, since it should use the image info we > > > embed in the nova db > > > in the instance_system_metadata table. > > > > > > im not sure if there is a blessed way, but i probably would have changed > the > > > visibility to private. 
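The two approaches discussed here can be sketched with the standard clients (a rough illustration only — the image ID is the one from the traceback below, and exact flags may vary between client releases):

```shell
# Suggested approach: make the image private so it disappears from
# the public image list but stays downloadable for resize/rebuild.
openstack image set --private dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b

# The approach that triggers the failure below: deactivating blocks
# all image data downloads, including the one nova needs when it
# re-fetches the backing file during resize.
glance image-deactivate dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b

# Deactivation is reversible:
glance image-reactivate dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b
```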
> > > > > > > > > > > > > > Thanks, Massimo > > > > > > > > [*] > > > > > > > > > > > > | fault | {'code': 500, 'created': > > > > '2022-06-30T07:57:30Z', 'message': 'Not authorized for image > > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.', 'details': 'Traceback (most > > > recent > > > > call last):\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, in > > > > download\n context, 2, \'data\', args=(image_id,))\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, in > > > > call\n result = getattr(controller, method)(*args, **kwargs)\n > File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", line > > > 670, > > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line > 255, > > > in > > > > data\n resp, body = self.http_client.get(url)\n File > > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line > 395, in > > > > get\n return self.request(url, \'GET\', **kwargs)\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line > 380, > > > > in request\n return self._handle_response(resp)\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line > 120, > > > > in _handle_response\n raise exc.from_response(resp, > > > > resp.content)\nglanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: > The > > > > requested image has been deactivated. 
Image data download is > > > > forbidden.\n\nDuring handling of the above exception, another > exception > > > > occurred:\n\nTraceback (most recent call last):\n File > > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line > 201, in > > > > decorated_function\n return function(self, context, *args, > **kwargs)\n > > > > File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", > line > > > > 5950, in finish_resize\n context, instance, migration)\n File > > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, > in > > > > __exit__\n self.force_reraise()\n File > > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, > in > > > > force_reraise\n raise self.value\n File > > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line > 5932, in > > > > finish_resize\n migration, request_spec)\n File > > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line > 5966, in > > > > _finish_resize_helper\n request_spec)\n File > > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line > 5902, in > > > > _finish_resize\n self._set_instance_info(instance, old_flavor)\n > File > > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 227, > in > > > > __exit__\n self.force_reraise()\n File > > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line 200, > in > > > > force_reraise\n raise self.value\n File > > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line > 5890, in > > > > _finish_resize\n block_device_info, power_on)\n File > > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line > > > 11343, > > > > in finish_migration\n > fallback_from_host=migration.source_compute)\n > > > > File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > > > line > > > > 4703, in _create_image\n injection_info, fallback_from_host)\n > File > > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line > 
> > 4831, > > > > in _create_and_inject_local_root\n instance, size, > > > fallback_from_host)\n > > > > File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", > > > line > > > > 10625, in _try_fetch_image_cache\n > > > > trusted_certs=instance.trusted_certs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", > > > line > > > > 275, in cache\n *args, **kwargs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", > > > line > > > > 638, in create_image\n prepare_template(target=base, *args, > > > **kwargs)\n > > > > File > "/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", > > > > line 391, in inner\n return f(*args, **kwargs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", > > > line > > > > 271, in fetch_func_sync\n fetch_func(target=target, *args, > **kwargs)\n > > > > File "/usr/lib/python3.6/site-packages/nova/virt/libvirt/utils.py", > line > > > > 395, in fetch_image\n images.fetch_to_raw(context, image_id, > target, > > > > trusted_certs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/virt/images.py", line 115, in > > > > fetch_to_raw\n fetch(context, image_href, path_tmp, > trusted_certs)\n > > > > File "/usr/lib/python3.6/site-packages/nova/virt/images.py", line > 106, > > > in > > > > fetch\n trusted_certs=trusted_certs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1300, > in > > > > download\n trusted_certs=trusted_certs)\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 379, in > > > > download\n _reraise_translated_image_exception(image_id)\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1031, > in > > > > _reraise_translated_image_exception\n raise > > > > new_exc.with_traceback(exc_trace)\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, in > > > > download\n context, 2, \'data\', 
args=(image_id,))\n File > > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, in > > > > call\n result = getattr(controller, method)(*args, **kwargs)\n > File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", line > > > 670, > > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line > 255, > > > in > > > > data\n resp, body = self.http_client.get(url)\n File > > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line > 395, in > > > > get\n return self.request(url, \'GET\', **kwargs)\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line > 380, > > > > in request\n return self._handle_response(resp)\n File > > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", line > 120, > > > > in _handle_response\n raise exc.from_response(resp, > > > > resp.content)\nnova.exception.ImageNotAuthorized: Not authorized for > > > image > > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.\n'} | > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mrunge at matthias-runge.de Fri Jul 1 06:46:23 2022 From: mrunge at matthias-runge.de (Matthias Runge) Date: Fri, 1 Jul 2022 08:46:23 +0200 Subject: [all][TC] Stats about rechecking patches without reason given In-Reply-To: References: <2224274.SIoALSJNh4@p1> <20220630130620.i5h47yddyxdypefq@yuggoth.org> <95c03dc437aef52231c2283c1d783ae8bbc99ff1.camel@redhat.com> <2709079.EGI8z0umEv@p1> Message-ID: <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de> On 30/06/2022 20:06, Dan Smith wrote: >>> Or vice versa, if there are 20 rechecks for 2 patches, even if neither >>> of them are bare, it's still weird and smth worth reconsidering from >>> project perspective. >> >> I think the idea is to create a culture of debugging and record >> keeping. 
Yes, I would expect after a few rechecks that maybe the root >> causes would be addressed in this case, but the first step in doing >> that is identifying the problem and making note of it. > > Right, that is the goal. Asking for a message at least sets the > expectation that people are looking at the reasons for the fails. Just > because they don't doesn't mean they aren't, or don't care, but I think > it helps reinforce the desired behavior. If nothing else, it also helps > observers realize "huh, I've seen a bunch of rechecks about $reason > lately, maybe we should look at that". So, what happens with the script when you add 2 comments: one saying "network error during package install, let's try again" and the next message just "recheck"? In my understanding, that would count (by the script) as a recheck without reason given. Maybe it's worth documenting how to give better proof that someone looked into the logs and tried to get to the root cause of a previous CI failure? The other issue I see here is that with CI being flaky, chances seem to get better when doing a recheck. An extreme example: https://review.opendev.org/c/openstack/tripleo-heat-templates/+/844519 required 8 rechecks, no changes in the patch itself, and no dependencies. The CI always failed in different checks. Matthias From christian.rohmann at inovex.de Fri Jul 1 07:10:46 2022 From: christian.rohmann at inovex.de (Christian Rohmann) Date: Fri, 1 Jul 2022 09:10:46 +0200 Subject: [designate] How to avoid NXDOMAIN or stale data during cold start of a (new) machine In-Reply-To: References: <69ab8e54-f419-4cd1-f289-a0b5efb7f723@inovex.de> Message-ID: <81a3d69e-f96b-7607-6625-06fb465cd8f9@inovex.de> On 07/06/2022 02:04, Michael Johnson wrote: > There are two ways zones can be resynced: > 1. Using the "designate-manage pool update" command. This will force > an update/recreate of all of the zones. > 2. 
When a zone is in ERROR or PENDING for too long, the > WorkerPeriodicRecovery task in producer will attempt to repair the > zone. > > I don't think there is a periodic task that checks the BIND instances > for missing content at the moment. Adding one would have to be done > very carefully as it would be easy to get into race conditions when > new zones are being created and deployed. Just as an update: we were playing with this issue of a cold start with no zones, where "designate-manage pool update" was not fixing it. We found that somebody had just run into the same issue (https://bugs.launchpad.net/designate/+bug/1958409/) and proposed a fix (rndc modzone -> rndc addzone). With this patch the "pool update" does cause all the missing zones to be created in a BIND instance that has either lost its zones or has just been added to the pool. Regards Christian From amonster369 at gmail.com Fri Jul 1 09:26:58 2022 From: amonster369 at gmail.com (A Monster) Date: Fri, 1 Jul 2022 10:26:58 +0100 Subject: Problem while launching an instance directly from an image "Volume did not finish being created even after we waited 203 seconds or 61 attempts" Message-ID: I've deployed OpenStack using Kolla. When I try to launch an instance directly from any image, after some time waiting for Block Device Mapping I get the following error: Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume > 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even > after we waited 203 seconds or 61 attempts. And its status is creating. I've tried increasing *block_device_allocate_retries=400* and *block_device_allocate_retries_interval=3*, however I keep getting the same error. But when I create a volume from an image, then launch an instance from that same volume, it works just fine. Any suggestions for this issue? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mmilan2006 at gmail.com Fri Jul 1 13:36:30 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Fri, 1 Jul 2022 19:06:30 +0530 Subject: Zun connector for persistent shared file system Manila Message-ID: Hi, I am using Zun for running and managing containers. I also deployed Cinder as persistent storage, and it is working fine. I want my Manila shares to be mounted on containers managed by Zun. I can see a Fuxi project and driver for this, but it is discontinued now. With Cinder, only one container can use a storage volume at a time. If I want a shared file system to be mounted on multiple containers simultaneously, it is not possible with Cinder. Is there any alternative to Fuxi? Is there any other mechanism to use the Docker volume support for NFS, as shown in the link below? https://docs.docker.com/storage/volumes/ Please advise and give a suggestion. Regards, Vaibhav -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Fri Jul 1 15:31:16 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Fri, 01 Jul 2022 16:31:16 +0100 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups Message-ID: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> tl;dr: Who should be able to modify project-specific stable-maint groups on Gerrit: members of the project-specific stable-maint group itself or members of stable-maint-core? A recent discussion on #openstack-sdks highlighted some discrepancies in the ownership of various project-specific "stable-maint" groups on Gerrit. As a reminder, any project that specifies "stable:follows-policy" is required to follow the stable branch policy (surprise!). This is documented rather well at [1]. We expect people who have the ability to merge patches to stable branches to understand and apply this policy. 
Initially, the only people that could do this were people added to a central Gerrit group called "stable-maint-core"; however, in recent years this responsibility has been devolved to the projects themselves. Each project with a stable branch now has a project-specific stable maintenance Gerrit group called PROJECTNAME-stable-maint (e.g. nova-stable-maint [2]). The issue here is who should *own* these groups. The owner of a Gerrit group is the only one that's able to modify it. In general, the owner of a Gerrit group is the group itself, so for example the owner of python-openstackclient-core is python-openstackclient-core [3]. This means that if you're a member of the group then you can add or remove members, rename the group, set a description, etc. However, _most_ PROJECTNAME-stable-maint groups are owned by the old 'stable-maint-core' group, meaning only members of this global group can modify the project-specific groups. I say _most_ because this isn't applied across the board. 
The following projects are owned by 'stable-maint-core':

* barbican-stable-maint
* ceilometer-stable-maint
* cinder-stable-maint
* designate-stable-maint
* glance-stable-maint
* heat-stable-maint
* horizon-stable-maint
* ironic-stable-maint
* keystone-stable-maint
* manila-stable-maint
* neutron-stable-maint
* nova-stable-maint
* oslo-stable-maint
* python-openstackclient-stable-maint
* sahara-stable-maint
* swift-stable-maint
* trove-stable-maint
* zaqar-stable-maint

However, the following stable groups "own themselves":

* ansible-collections-openstack-stable-maint
* aodh-stable-maint
* congress-stable-maint
* freezer-stable-maint
* karbor-stable-maint
* mistral-stable-maint
* neutron-dynamic-routing-stable-maint
* neutron-lib-stable-maint
* neutron-vpnaas-stable-maint
* octavia-stable-maint
* openstacksdk-stable-maint
* oslo-vmware-stable-maint
* ovn-octavia-provider-stable-maint
* panko-stable-maint
* placement-stable-maint
* sahara-dashboard-stable-maint
* senlin-dashboard-stable-maint
* senlin-stable-maint
* telemetry-stable-maint

This brings me to my question (finally!): do we want to resolve this discrepancy, and if so, how? Personally, I would lean towards delegating this entirely to the projects, but I don't know if this requires TC involvement. If we want to insist on the stable-maint-core group owning all PROJECT-stable-maint groups then we have a lot of cleanup to do! Cheers, Stephen PS: This might be a good moment to do a cleanup of members of various stable-maint groups who have since moved on... 
[1] https://docs.openstack.org/project-team-guide/stable-branches.html [2] https://review.opendev.org/admin/groups/21ce6c287ea33809980b2dec53915b07830cdb11 [3] https://review.opendev.org/admin/groups/aa0f197fcfbd4fcebeff921567ed3b48cd330a4c From smooney at redhat.com Fri Jul 1 16:15:36 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 01 Jul 2022 17:15:36 +0100 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> Message-ID: <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com> On Fri, 2022-07-01 at 16:31 +0100, Stephen Finucane wrote: > tl;dr: Who should be able project-specific stable-maint groups on Gerrit: > members of the projects-specific stable-maint group itself or members of stable- > maint-core? > > A recent discussion on #openstack-sdks highlighted some discrepancies in the > ownership of various project-specific "stable-maint" groups on Gerrit. As a > reminder, any project that specifies "stable:follows-policy" is required to > follow the stable branch policy (suprise!). This is documented rather well at > [1]. We expect people who have the ability to merge patches to stable branches > to understand and apply this policy. Initially the only people that could do > this were peopled added to a central Gerrit group called "stable-maint-core" > group, however, in recent years this responsibility has been devolved to the > projects themselves. Each project with a stable branch now has a project- > specific stable maintenance Gerrit group called PROJECTNAME-stable-maint (e.g. > nova-stable-maint [2]. > > The issue here is who should *own* these groups. The owner of a Gerrit group is > the only one that's able to modify it. In general, the owner of a Gerrit group > is the group itself so for example the owner of python-openstackclient-core is > python-openstackclient-core [3]. 
This means that if you're a member of the group > then you can add or remove members, rename the group, set a description etc. > However, _most_ PROJECTNAmE-stable-maint groups are owned by the old 'stable- > maint-core' group, meaning only members of this global group can modify the > project-specific groups. I say _most_ because this isn't applied across the > board. The following projects are owned by 'stable-maint-core': > > * barbican-stable-maint > * ceilometer-stable-maint > * cinder-stable-maint > * designate-stable-maint > * glance-stable-maint > * heat-stable-maint > * horizon-stable-maint > * ironic-stable-maint > * keystone-stable-maint > * manila-stable-maint > * neutron-stable-maint > * nova-stable-maint > * oslo-stable-maint > * python-openstackclient-stable-maint > * sahara-stable-maint > * swift-stable-maint > * trove-stable-maint > * zaqar-stable-maint > > However, the following stable groups "own themselves": > > * ansible-collections-openstack-stable-maint > * aodh-stable-maint > * congress-stable-maint > * freezer-stable-maint > * karbor-stable-maint > * mistral-stable-maint > * neutron-dynamic-routing-stable-maint > * neutron-lib-stable-maint > * neutron-vpnaas-stable-maint > * octavia-stable-maint > * openstacksdk-stable-maint > * oslo-vmware-stable-maint > * ovn-octavia-provider-stable-maint > * panko-stable-maint > * placement-stable-maint > * sahara-dashboard-stable-maint > * senlin-dashboard-stable-maint > * senlin-stable-maint > * telemetry-stable-maint > > This brings me to my question (finally!): do we want to resolve this > discrepancy, and if so, how? Personally, I would lean towards delegating this > entirely to the projects but I don't know if this requires TC involvement to do. > If we want to insist on the stable-maint group owning all PROJECT-stable-maint > branches then we have a lot of cleanup to do! i would probably delegate them to be self owned too. 
one thing that might be worth looking at too is the convention, from when stackforge was still a thing, of creating 2 groups, PROJECT-core and PROJECT-release, where the release group was the owner of the stable branch instead of PROJECT-stable-maint. the release group is mainly for pushing signed tags, but it was often the same group of people that did that as did stable backports. kuryr for example https://opendev.org/openstack/project-config/src/branch/master/gerrit/acls/openstack/kuryr.config still uses kuryr-release for stable branch merge rights, as does murano. i dont see it used widely currently, so maybe a non-issue, but i had thought at one point -release was encouraged instead of stable-maint when the neutron stadium etc. was being created. https://codesearch.opendev.org/?q=-release&i=nope&literal=nope&files=gerrit%2Facls%2Fopenstack&excludeFiles=&repos= in general i suspect that there are few uses of -release for stable branches for projects under official governance. 
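For illustration, the ACL pattern being referred to looks roughly like this in a project's Gerrit ACL file in project-config (a sketch based on the usual Gerrit access-section syntax; the exact lines for kuryr may differ):

```ini
[access "refs/heads/stable/*"]
  # The release group, rather than a -stable-maint group, holds the
  # stable-branch approval rights in this convention:
  abandon = group kuryr-release
  label-Code-Review = -2..+2 group kuryr-release
  label-Workflow = -1..+1 group kuryr-release
```

With the more common convention, the same section would name kuryr-stable-maint instead of kuryr-release.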
> > [1] https://docs.openstack.org/project-team-guide/stable-branches.html > [2] https://review.opendev.org/admin/groups/21ce6c287ea33809980b2dec53915b07830cdb11 > [3] https://review.opendev.org/admin/groups/aa0f197fcfbd4fcebeff921567ed3b48cd330a4c > > From gmann at ghanshyammann.com Fri Jul 1 16:17:26 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Jul 2022 11:17:26 -0500 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> Message-ID: <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com> ---- On Fri, 01 Jul 2022 10:31:16 -0500 Stephen Finucane wrote --- > tl;dr: Who should be able project-specific stable-maint groups on Gerrit: > members of the projects-specific stable-maint group itself or members of stable- > maint-core? > > A recent discussion on #openstack-sdks highlighted some discrepancies in the > ownership of various project-specific "stable-maint" groups on Gerrit. As a > reminder, any project that specifies "stable:follows-policy" is required to > follow the stable branch policy (suprise!). This is documented rather well at > [1]. We expect people who have the ability to merge patches to stable branches > to understand and apply this policy. Initially the only people that could do > this were peopled added to a central Gerrit group called "stable-maint-core" > group, however, in recent years this responsibility has been devolved to the > projects themselves. Each project with a stable branch now has a project- > specific stable maintenance Gerrit group called PROJECTNAME-stable-maint (e.g. > nova-stable-maint [2]. > > The issue here is who should *own* these groups. The owner of a Gerrit group is > the only one that's able to modify it. 
In general, the owner of a Gerrit group > is the group itself so for example the owner of python-openstackclient-core is > python-openstackclient-core [3]. This means that if you're a member of the group > then you can add or remove members, rename the group, set a description etc. > However, _most_ PROJECTNAmE-stable-maint groups are owned by the old 'stable- > maint-core' group, meaning only members of this global group can modify the > project-specific groups. I say _most_ because this isn't applied across the > board. The following projects are owned by 'stable-maint-core': > > * barbican-stable-maint > * ceilometer-stable-maint > * cinder-stable-maint > * designate-stable-maint > * glance-stable-maint > * heat-stable-maint > * horizon-stable-maint > * ironic-stable-maint > * keystone-stable-maint > * manila-stable-maint > * neutron-stable-maint > * nova-stable-maint > * oslo-stable-maint > * python-openstackclient-stable-maint > * sahara-stable-maint > * swift-stable-maint > * trove-stable-maint > * zaqar-stable-maint > > However, the following stable groups "own themselves": > > * ansible-collections-openstack-stable-maint > * aodh-stable-maint > * congress-stable-maint > * freezer-stable-maint > * karbor-stable-maint > * mistral-stable-maint > * neutron-dynamic-routing-stable-maint > * neutron-lib-stable-maint > * neutron-vpnaas-stable-maint > * octavia-stable-maint > * openstacksdk-stable-maint > * oslo-vmware-stable-maint > * ovn-octavia-provider-stable-maint > * panko-stable-maint > * placement-stable-maint > * sahara-dashboard-stable-maint > * senlin-dashboard-stable-maint > * senlin-stable-maint > * telemetry-stable-maint > > This brings me to my question (finally!): do we want to resolve this > discrepancy, and if so, how? Personally, I would lean towards delegating this > entirely to the projects but I don't know if this requires TC involvement to do. 
> If we want to insist on the stable-maint group owning all PROJECT-stable-maint > branches then we have a lot of cleanup to do! Your understanding is right that it is delegated to the projects entirely. In the Xena cycle, the TC passed the resolution about decentralizing the stable branch core team to projects - https://governance.openstack.org/tc/resolutions/20210923-stable-core-team.html and we also updated the project-team-guide to have projects manage the project-specific stable core group - https://review.opendev.org/c/openstack/project-team-guide/+/834794 It is owned by the project, and they can manage this group the same way they manage the master branch core group. -gmann > > Cheers, > Stephen > > PS: This might be a good moment to do a cleanup of members of various stable > branches that have since moved on... > > [1] https://docs.openstack.org/project-team-guide/stable-branches.html > [2] https://review.opendev.org/admin/groups/21ce6c287ea33809980b2dec53915b07830cdb11 > [3] https://review.opendev.org/admin/groups/aa0f197fcfbd4fcebeff921567ed3b48cd330a4c > > > From gmann at ghanshyammann.com Fri Jul 1 16:23:05 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Jul 2022 11:23:05 -0500 Subject: [all][TC] Stats about rechecking patches without reason given In-Reply-To: <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de> References: <2224274.SIoALSJNh4@p1> <20220630130620.i5h47yddyxdypefq@yuggoth.org> <95c03dc437aef52231c2283c1d783ae8bbc99ff1.camel@redhat.com> <2709079.EGI8z0umEv@p1> <82fadda5-4219-9314-3128-054b0bc215f0@matthias-runge.de> Message-ID: <181ba92b460.1112f627621801.1622041958884190146@ghanshyammann.com> ---- On Fri, 01 Jul 2022 01:46:23 -0500 Matthias Runge wrote --- > On 30/06/2022 20:06, Dan Smith wrote: > >>> Or vice versa, if there are 20 rechecks for 2 patches, even if neither > >>> of them are bare, it's still weird and smth worth reconsidering from > >>> project perspective. 
> >> > >> I think the idea is to create a culture of debugging and record > >> keeping. Yes, I would expect after a few rechecks that maybe the root > >> causes would be addressed in this case, but the first step in doing > >> that is identifying the problem and making note of it. > > > > Right, that is the goal. Asking for a message at least sets the > > expectation that people are looking at the reasons for the fails. Just > > because they don't doesn't mean they aren't, or don't care, but I think > > it helps reinforce the desired behavior. If nothing else, it also helps > > observers realize "huh, I've seen a bunch of rechecks about $reason > > lately, maybe we should look at that". > > So, what happens with the script, when you add 2 comments, one: "network > error during package install, let's try again" and the next message > "recheck". In this case, you can always write "recheck network error during package install, let's try again", or if you have added a lengthy text about the failure and then want to recheck, you can add a one-line summary during the recheck. The overall idea is not to literally count the bare rechecks but to build a habit among us of looking at the failure before we just recheck. > > In my understanding, that would count as recheck without reason given. > (by the script). Maybe it's worth to document how to give a better proof > that someone looked into the logs and tried to get to the root cause of > a previous CI failure? I think Dan has written a nice document about it, including how to debug the failure: - https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures We welcome everyone to extend it with more details or examples for any project-specific cases that are not covered. -gmann
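To make the counting question above concrete, here is a minimal sketch of the kind of per-comment classification such a stats script could do (a hypothetical illustration, not the actual tooling):

```python
def classify_recheck(comment: str) -> str:
    """Classify a Gerrit review comment the way a bare-recheck counter
    might: a comment that starts with "recheck" and contains nothing
    else is a bare recheck; any extra words count as a reason."""
    text = comment.strip()
    if not text.lower().startswith("recheck"):
        return "not-a-recheck"
    reason = text[len("recheck"):].strip()
    return "recheck-with-reason" if reason else "bare-recheck"


# The two-comment scenario from the question above:
print(classify_recheck("network error during package install, let's try again"))  # not-a-recheck
print(classify_recheck("recheck"))  # bare-recheck
# Combining both into a single comment would count as having a reason:
print(classify_recheck("recheck network error during package install"))  # recheck-with-reason
```

Under this reading, splitting the explanation and the "recheck" into two separate comments would indeed register as a bare recheck, which is why putting the one-line reason on the recheck comment itself is suggested above.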
> > An extreme example: > https://review.opendev.org/c/openstack/tripleo-heat-templates/+/844519 > required 8 rechecks, no changes in the patch itself, and no > dependencies. The CI failed always in different checks. > > Matthias > > From fungi at yuggoth.org Fri Jul 1 17:40:24 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Jul 2022 17:40:24 +0000 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <74160f8a845214b60b8f44615dd67c9c25757fdc.camel@redhat.com> Message-ID: <20220701174024.o7z7z73yslpju7ep@yuggoth.org> On 2022-07-01 17:15:36 +0100 (+0100), Sean Mooney wrote: [...] > i had tought at one point -release was encuraged instead of > stable-maint when the neutron stadium ectra was being created. [...] Perhaps lost to the annals of time, but these were the -drivers groups, most of which got renamed to -release. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jul 1 17:41:57 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 1 Jul 2022 17:41:57 +0000 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com> Message-ID: <20220701174157.dt655llarwntyuh7@yuggoth.org> On 2022-07-01 11:17:26 -0500 (-0500), Ghanshyam Mann wrote: [...] 
> In Xena cycle, TC passed the resolution about decentralize the > stable branch core team to projects > - https://governance.openstack.org/tc/resolutions/20210923-stable-core-team.html > > and We updated the project-team-guide also to have project manage > the project specific stable core group > - https://review.opendev.org/c/openstack/project-team-guide/+/834794 > > It is own by the project and they can manage this group the same > way they manage the master branch core group. [...] Except that they aren't, at least not in any practical sense, and that's what was missed. Sounds like I'm free to make the groups in Gerrit all be self-owned? -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri Jul 1 18:30:37 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 01 Jul 2022 13:30:37 -0500 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <20220701174157.dt655llarwntyuh7@yuggoth.org> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> <181ba8d8655.114566f1f21538.3219978544313100952@ghanshyammann.com> <20220701174157.dt655llarwntyuh7@yuggoth.org> Message-ID: <181bb077700.c47b047425771.15694433784435277@ghanshyammann.com> ---- On Fri, 01 Jul 2022 12:41:57 -0500 Jeremy Stanley wrote --- > On 2022-07-01 11:17:26 -0500 (-0500), Ghanshyam Mann wrote: > [...] 
 > > In Xena cycle, TC passed the resolution about decentralizing the
 > > stable branch core team to projects
 > > - https://governance.openstack.org/tc/resolutions/20210923-stable-core-team.html
 > >
 > > and we updated the project-team-guide also to have projects manage
 > > the project-specific stable core group
 > > - https://review.opendev.org/c/openstack/project-team-guide/+/834794
 > >
 > > It is owned by the project and they can manage this group the same
 > > way they manage the master branch core group.
 > [...]
 >
 > Except that they aren't, at least not in any practical sense, and
 > that's what was missed. Sounds like I'm free to make the groups in
 > Gerrit all be self-owned?

Yes, please. I think we thought there was no such restriction in those
groups until Stephen brought this here.
-gmann

 > --
 > Jeremy Stanley
 >

From geguileo at redhat.com Fri Jul 1 18:55:18 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Fri, 1 Jul 2022 20:55:18 +0200
Subject: [cinder] Spec Freeze Exception Request
Message-ID: <20220701185518.6cid4paqrsnxnq6a@localhost>

Hi,

I would like to request a spec freeze exception for the new Cinder Quota
System spec [1].

Analysis of the required spec changes needed to implement the second
quota driver, as agreed in the PTG/mid-cycle, was non-trivial.

In the latest spec update I just pushed there are considerable changes:

- General improvements to existing sections to increase readability.

- Description of the additional driver and reasons why we decided to
implement it.

- Spell out through the entire spec the similarities and differences of
both drivers.

- Change in tracking of the reservations to accommodate the new driver.

- Description on how switching from one driver to the other would work.

- Updated the driver interface to accommodate the particularities of the
new driver.

- Updated the performance section with a very brief summary of the
performance tests done with some code prototype.

- Updated the phases of the effort as well as the work items.

Cheers,
Gorka.


[1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693

From gmann at ghanshyammann.com Fri Jul 1 19:02:05 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 01 Jul 2022 14:02:05 -0500
Subject: [all][tc] What's happening in Technical Committee: summary 01 July 2022: Reading: 5 min
Message-ID: <181bb244781.c0271e6626303.1247651419629388523@ghanshyammann.com>

Hello Everyone,
Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* We had this week's meeting on 30 June. Most of the meeting discussions are
summarized in this email.
Meeting full logs are available
@https://meetings.opendev.org/meetings/tc/2022/tc.2022-06-30-15.00.log.html

* Next TC weekly meeting will be on 7 July Thursday at 15:00 UTC, feel free
to add topics to the agenda[1] by 6 July.

2. What we completed this week:
=========================
* Retired openstack-helm-deployments[2].
* Added Cinder Dell EMC PowerStore charm[3].

3. Activities In progress:
==================
TC Tracker for Zed cycle
------------------------------
* The Zed tracker etherpad includes the TC working items[4]; two are
completed and the other items are in progress.

Open Reviews
-----------------
* Three open reviews for ongoing activities[5].

Create the Environmental Sustainability SIG
---------------------------------------------------
We discussed it in the TC meeting but did not conclude anything, as Kendall
would like to wait for more feedback in review[6].

New ELK service dashboard: e-r service
-----------------------------------------------
There is good progress on bringing elastic-recheck back. From now on, we
can track the progress in the TacT SIG. Feel free to ping dpawlik on
#openstack-infra for any query or help.

Consistent and Secure Default RBAC
-------------------------------------------
We have a good amount of discussion and review in the goal document
updates[7] and I have updated the patch by resolving the review comments.
We will have a policy popup meeting on 5 July[8].

2021 User Survey TC Question Analysis
-----------------------------------------------
No update on this. The survey summary is up for review[9]. Feel free to
check and provide feedback.

Zed cycle Leaderless projects
----------------------------------
No updates on this. Only the Adjutant project is leaderless/maintainer-less.
We will check Adjutant's situation again on the ML and hope Braden will be
ready with their company-side permission[10].
Fixing Zuul config errors
----------------------------
Requesting projects with Zuul config errors to look into and fix them,
which should not take much time[11][12].

Project updates
-------------------
* None

4. How to contact the TC:
====================
If you would like to discuss or give feedback to the TC, you can reach out
to us in multiple ways:

1. Email: you can send an email with the tag [tc] on the openstack-discuss
ML[13].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every
Thursday at 15 UTC [14]
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.

[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/847413
[3] https://review.opendev.org/c/openstack/governance/+/846890
[4] https://etherpad.opendev.org/p/tc-zed-tracker
[5] https://review.opendev.org/q/projects:openstack/governance+status:open
[6] https://review.opendev.org/c/openstack/governance-sigs/+/845336
[7] https://review.opendev.org/c/openstack/governance/+/847418
[8] https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting
[9] https://review.opendev.org/c/openstack/governance/+/836888
[10] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html
[11] https://etherpad.opendev.org/p/zuul-config-error-openstack
[12] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html
[13] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[14] http://eavesdrop.openstack.org/#Technical_Committee_Meeting

-gmann

From franck.vedel at univ-grenoble-alpes.fr Fri Jul 1 19:35:50 2022
From: franck.vedel at univ-grenoble-alpes.fr (Franck VEDEL)
Date: Fri, 1 Jul 2022 21:35:50 +0200
Subject: [kolla-ansible][centos][yoga] Problems VpnaaS and containers
Message-ID:

Hello,
I hope to get some help on a point that was bothering me last year, and
which still does not work.
A year ago I was on Wallaby; after 2 updates, I'm currently on Yoga. I use
kolla-ansible and openstack-kolla/centos-source images.
Last year, I found the following bug:
https://bugs.launchpad.net/neutron/+bug/1938571
I tried with the Yoga update: exactly the same problem
"Command: ['ipsec', 'whack', '--status'] Exit code: 33 Stdout: Stderr:
whack: Pluto is not running (no "/run/pluto/pluto.ctl") ; Stderr: ?

Should I conclude that this bug will never be fixed and that it is
impossible to have vpnaas functionality with centos images?

Second question: let's imagine that I change the line
kolla_base_distro: "centos"
to
kolla_base_distro: "ubuntu"
Does this have a chance of working? I can't see all the possible problems.

Thanks for your help.

Franck
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rdhasman at redhat.com Sat Jul 2 05:05:54 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Sat, 2 Jul 2022 10:35:54 +0530
Subject: [cinder] Spec Freeze Exception Request
In-Reply-To: <20220701185518.6cid4paqrsnxnq6a@localhost>
References: <20220701185518.6cid4paqrsnxnq6a@localhost>
Message-ID:

Thanks Gorka for spelling out all the changes made to the spec since the
initial submission in Yoga; it makes the review experience much better.
The quota issues have indeed been a pain point for OpenStack operators for
a long time and it's really crucial to fix them.

I'm OK with granting this an FFE (+1)

On Sat, Jul 2, 2022 at 12:31 AM Gorka Eguileor wrote:

> Hi,
>
> I would like to request a spec freeze exception for the new Cinder Quota
> System spec [1].
>
> Analysis of the required spec changes needed to implement the second
> quota driver, as agreed in the PTG/mid-cycle, was non-trivial.
>
> In the latest spec update I just pushed there are considerable changes:
>
> - General improvements to existing sections to increase readability.
>
> - Description of the additional driver and reasons why we decided to
> implement it.
>
> - Spell out through the entire spec the similarities and differences of
> both drivers.
>
> - Change in tracking of the reservations to accommodate the new driver.
>
> - Description on how switching from one driver to the other would work.
>
> - Updated the driver interface to accommodate the particularities of the
> new driver.
>
> - Updated the performance section with a very brief summary of the
> performance tests done with some code prototype.
>
> - Updated the phases of the effort as well as the work items.
>
> Cheers,
> Gorka.
>
>
> [1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693
>
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rdhasman at redhat.com Mon Jul 4 04:46:56 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Mon, 4 Jul 2022 10:16:56 +0530
Subject: Problem while launching an instance directly from an image "Volume
 did not finish being created even after we waited 203 seconds or 61
 attempts"
In-Reply-To:
References:
Message-ID:

Hi,

On Fri, Jul 1, 2022 at 3:04 PM A Monster wrote:

> I've deployed openstack using kolla, when I try to launch an instance
> directly from any image, after some time waiting for Block Device Mapping
> I get the following error :
>

I'm confused here, when you say directly from image, do you mean ephemeral
volumes (nova) or persistent volumes (cinder)? I will assume cinder volumes
since we have BDM here and nova is triggering cinder to create bootable
volumes.

> Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume
>> 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even
>> after we waited 203 seconds or 61 attempts. And its status is creating.
>
> I've tried increasing *block_device_allocate_retries=400 *and
> *block_device_allocate_retries_interval=3 *, however I keep getting the
> same error.
>
> But when I create a volume from an image, then launch an instance from
> that same volume, it works just fine.
> Any suggestions for this issue? > Which OpenStack release you're working on? I think the operation is failing asynchronously on the cinder side (probably in c-vol) and nova times out waiting for a response. I suggest you check cinder logs (c-api, c-sch, c-vol) for a more specific error message. Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Mon Jul 4 07:36:30 2022 From: thierry at openstack.org (Thierry Carrez) Date: Mon, 4 Jul 2022 09:36:30 +0200 Subject: [largescale-sig] Next meeting: July 6th, 15utc Message-ID: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org> Hi everyone, The Large Scale SIG will be meeting this Wednesday in #openstack-operators on OFTC IRC, at 15UTC, before taking a break for July and most of August. You can check how that time translates locally at: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20220706T15 Feel free to add topics to the agenda: https://etherpad.openstack.org/p/large-scale-sig-meeting Regards, -- Thierry Carrez From ralonsoh at redhat.com Mon Jul 4 08:33:42 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Mon, 4 Jul 2022 10:33:42 +0200 Subject: [neutron] Bug deputy Jun 27 to Jul 3 Message-ID: Hello Neutrinos: This is the bug list of the past week: Critical: ** https://bugs.launchpad.net/neutron/+bug/1980055 : neutron rally jobs broken with gnocchiclient 7.0.7 update.* Assigned to Yatin. Patch: https://review.opendev.org/c/openstack/neutron/+/847989 High: ** https://bugs.launchpad.net/neutron/+bug/1979958 : [regression] Unable to schedule segment.* Assigned to Bence. 
** https://bugs.launchpad.net/neutron/+bug/1980346 : [CI] NetworkSegmentTests failing frequently in OSC CI* Assigned to Rodolfo Patch: https://review.opendev.org/c/openstack/neutron/+/848396 ** https://bugs.launchpad.net/neutron/+bug/1980421 : 'Socket /var/run/openvswitch/ovnnb_db.sock not found' during ovn_start* However, it seems to be a problem in the deployment script, solved with an extra timeout during the OVN startup. Patch: https://review.opendev.org/c/openstack/devstack/+/848548 Medium: ** https://bugs.launchpad.net/neutron/+bug/1980126 : [FT] Error in "test_dvr_router_lifecycle_ha_with_snat_with_fips"* Assigned to Rodolfo. Patch: https://review.opendev.org/c/openstack/neutron/+/848312 (WIP, test patch) ** https://bugs.launchpad.net/neutron/+bug/1980127 : [FT] Error in MySQL "test_walk_versions" with PostgreSQL* Assigned to Rodolfo. Patch: https://review.opendev.org/c/openstack/neutron/+/848146 ** https://bugs.launchpad.net/neutron/+bug/1980235 : StaticScheduler has not attribute schedule* Unassigned (frickler is commenting on the bug). ** https://bugs.launchpad.net/neutron/+bug/1980488 : [OVN] OVN fails to report to placement if other mech driver is configured.* Assigned to Rodolfo Patch: https://review.opendev.org/q/Ic9e8586991866ebca0b24bfe691e541c198d18d7 Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Mon Jul 4 08:33:51 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Mon, 4 Jul 2022 10:33:51 +0200 Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack? In-Reply-To: References: Message-ID: Just a quick follow up - I was permitted to share a pre-published version of the article I was citing in my email from June 4th. [1] Please enjoy responsibly. 
:-) [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf Cheers, Radek -yoctozepto On Mon, 27 Jun 2022 at 17:21, Rados?aw Piliszek wrote: > > On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote: > > > > Hi Rados?aw, > > Hi Andy, > > Sorry for the late reply, been busy vacationing and then dealing with COVID-19. > > > > First of all, wow, that looks very interesting and in fact very much > > > what I'm looking for. As I mentioned in the original message, the > > > things this solution lacks are not something blocking for me. > > > Regarding the approach to Guacamole, I know that it's preferable to > > > have guacamole extension (that provides the dynamic inventory) > > > developed rather than meddle with the internal database but I guess it > > > is a good start. > > > > An even better approach would be something like the Guacozy project > > (https://guacozy.readthedocs.io) > > I am not convinced. The project looks dead by now. [1] > It offers a different UI which may appeal to certain users but I think > sticking to vanilla Guacamole should do us right... For the time being > at least. ;-) > > > They were able to use the Guacmole JavaScript libraries directly to > > embed the HTML5 desktop within a React? app. I think this is a much > > better approach, and I'd love to be able to do something similar in > > the future. Would make the integration that much nicer. > > Well, as an example of embedding in the UI - sure. But it does not > invalidate the need to modify Guacamole's database or write an > extension to it so that it has the necessary creds. > > > > > > > Any "quickstart setting up" would be awesome to have at this stage. As > > > this is a Django app, I think I should be able to figure out the bits > > > and bolts to get it up and running in some shape but obviously it will > > > impede wider adoption. 
> > > > Yeah I agree. I'm in the process of documenting it, so I'll aim to get > > a quickstart guide together. > > > > I have a private repo with code to set up a development environment > > which uses Heat and Ansible - this might be the quickest way to get > > started. I'm happy to share this with you privately if you like. > > I'm interested. Please share it. > > > > On the note of adoption, if I find it usable, I can provide support > > > for it in Kolla [1] and help grow the project's adoption this way. > > > > Kolla could be useful. We're already using containers for this project > > now, and I have a helm chart for deploying to k8s. > > https://github.com/NeCTAR-RC/bumblebee-helm > > Nice! The catch is obviously that some orgs frown upon K8s because > they lack the necessary know-how. > Kolla by design avoids the use of K8s. OpenStack components are not > cloud-native anyway so benefits of using K8s are diminished (yet it > makes sense to use K8s if there is enough experience with it as it > makes certain ops more streamlined and simpler this way). > > > Also, an important part is making sure the images are set up correctly > > with XRDP, etc. Our images are built using Packer, and the config for > > them can be found at https://github.com/NeCTAR-RC/bumblebee-images > > Ack, thanks for sharing. > > > > Also, since this is OpenStack-centric, maybe you could consider > > > migrating to OpenDev at some point to collaborate with interested > > > parties using a common system? > > > Just food for thought at the moment. > > > > I think it would be more appropriate to start a new project. I think > > our codebase has too many assumptions about the underlying cloud. > > > > We inherited the code from another project too, so it's got twice the cruft. > > I see. Well, that's good to know at least. 
> > > Writing to let you know I have also found the following related paper: [1]
> > > and reached out to its authors in the hope to enable further
> > > collaboration to happen.
> > > The paper is not open access so I have only obtained it for myself and
> > > am unsure if licensing permits me to share, thus I also asked the
> > > authors to share their copy (that they have copyrights to).
> > > I have obviously let them know of the existence of this thread. ;-)
> > > Let's stay tuned.
> > >
> > > [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12
> >
> > This looks interesting. A collaboration would be good if there is
> > enough interest in the community.
>
> I am looking forward to the collaboration happening. This could really
> liven up the OpenStack VDI.
>
> [1] https://github.com/paidem/guacozy/
>
> -yoctozepto

From ekuvaja at redhat.com Mon Jul 4 11:05:21 2022
From: ekuvaja at redhat.com (Erno Kuvaja)
Date: Mon, 4 Jul 2022 12:05:21 +0100
Subject: [glance][ops] [nova] Disabling an image
In-Reply-To:
References: <259825a80b6cce7df2743acf6792ad4c598019ab.camel@redhat.com>
Message-ID:

On Fri, 1 Jul 2022 at 07:17, Massimo Sgaravatto <
massimo.sgaravatto at gmail.com> wrote:

> Converting the image from public to private seems indeed a good idea.
> Thanks a lot for the hint !
> Cheers, Massimo
>
>
Hi Massimo,

Turning it private will cause the very same issue for anyone who was
consuming the image outside of the project that owns it. The "hidden" [0]
flag was developed for this purpose. Even though it does not prevent
launching new instances from the said image, it strongly discourages it,
as the image is not listed in the normal image listings. So if you have a
new, up-to-date version of the image, but the old one is still widely in
use, turn the old image hidden, and unless someone is specifically
launching an instance with that old image ID, they will be directed
towards your new version.
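A minimal sketch of the semantics described above (illustrative Python
only — this models the behaviour, it is not the real Glance code or
client API; the image names and IDs are made up):

```python
# Toy model of Glance's os_hidden semantics, for illustration only.
images = [
    {"id": "img-new", "name": "ubuntu-22.04-v2", "os_hidden": False},
    {"id": "img-old", "name": "ubuntu-22.04-v1", "os_hidden": True},
]

def default_listing(images):
    # A plain image-list call filters hidden images out, steering
    # users towards the newer image.
    return [img["id"] for img in images if not img["os_hidden"]]

def get_by_id(images, image_id):
    # A direct GET by ID ignores the hidden flag, so Nova can still
    # download the image for rebuild/resize of existing instances.
    return next(img for img in images if img["id"] == image_id)

print(default_listing(images))               # ['img-new']
print(get_by_id(images, "img-old")["name"])  # ubuntu-22.04-v1
```

With the real clients, hiding an image means setting the image's
os_hidden property (the exact flag spelling varies by client and
release), which is what the operator-image-workflow spec linked below
describes.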
As we don't currently have any mechanism separating a user making a call to Glance with one of the clients vs. Nova making the call on behalf of the user, we also have no means to ensure that the image would be consumable for housekeeping purposes while new instances would be prevented. So this was the most user friendly solution we came up with at the time. [0] https://specs.openstack.org/openstack/glance-specs/specs/rocky/implemented/glance/operator-image-workflow.html - jokke On Thu, Jun 30, 2022 at 2:56 PM Sean Mooney wrote: > >> On Thu, 2022-06-30 at 14:37 +0200, Massimo Sgaravatto wrote: >> > No: I really mean resize >> i guess for resize we need to pcy the backing file which we preusmabel >> are doing by redownloading the orginal image. it could technically be >> copied form the souce >> host instead but i think if you change the visiableity rahter then >> blocking download that would >> hide it form peopel lookign to create new vms with it in the image list >> but allow it to consiute >> to be used by exsiting instace for rebuild and resize. >> > >> > On Thu, Jun 30, 2022 at 1:42 PM Sean Mooney wrote: >> > >> > > On Thu, 2022-06-30 at 10:09 +0200, Massimo Sgaravatto wrote: >> > > > Dear all >> > > > >> > > > What is the blessed method to avoid using an image for new virtual >> > > machines >> > > > without causing problems for existing instances using that image ? >> > > > >> > > > If I deactivate the image, I then have problems resizing instances >> using >> > > > that image [*]: it claims that image download is forbidden since the >> > > image >> > > > was deactivated >> > > i think you mean rebuilding the instance not resizeing right? >> > > resize should not need the image since it should use the image info we >> > > embed in the nova >> > > in the instance_system_metadata table. >> > > >> > > im not sure if there is a blessed way but i proably would have >> changed the >> > > visablity to private. 
>> > > >> > > >> > > > >> > > > Thanks, Massimo >> > > > >> > > > [*] >> > > > >> > > > >> > > > | fault | {'code': 500, 'created': >> > > > '2022-06-30T07:57:30Z', 'message': 'Not authorized for image >> > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.', 'details': 'Traceback (most >> > > recent >> > > > call last):\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, >> in >> > > > download\n context, 2, \'data\', args=(image_id,))\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, >> in >> > > > call\n result = getattr(controller, method)(*args, **kwargs)\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", >> line >> > > 670, >> > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line >> 255, >> > > in >> > > > data\n resp, body = self.http_client.get(url)\n File >> > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line >> 395, in >> > > > get\n return self.request(url, \'GET\', **kwargs)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 380, >> > > > in request\n return self._handle_response(resp)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 120, >> > > > in _handle_response\n raise exc.from_response(resp, >> > > > resp.content)\nglanceclient.exc.HTTPForbidden: HTTP 403 Forbidden: >> The >> > > > requested image has been deactivated. 
Image data download is >> > > > forbidden.\n\nDuring handling of the above exception, another >> exception >> > > > occurred:\n\nTraceback (most recent call last):\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 201, in >> > > > decorated_function\n return function(self, context, *args, >> **kwargs)\n >> > > > File "/usr/lib/python3.6/site-packages/nova/compute/manager.py", >> line >> > > > 5950, in finish_resize\n context, instance, migration)\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 227, in >> > > > __exit__\n self.force_reraise()\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 200, in >> > > > force_reraise\n raise self.value\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5932, in >> > > > finish_resize\n migration, request_spec)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5966, in >> > > > _finish_resize_helper\n request_spec)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5902, in >> > > > _finish_resize\n self._set_instance_info(instance, >> old_flavor)\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 227, in >> > > > __exit__\n self.force_reraise()\n File >> > > > "/usr/lib/python3.6/site-packages/oslo_utils/excutils.py", line >> 200, in >> > > > force_reraise\n raise self.value\n File >> > > > "/usr/lib/python3.6/site-packages/nova/compute/manager.py", line >> 5890, in >> > > > _finish_resize\n block_device_info, power_on)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line >> > > 11343, >> > > > in finish_migration\n >> fallback_from_host=migration.source_compute)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> > > line >> > > > 4703, in _create_image\n injection_info, fallback_from_host)\n >> File >> > > > 
"/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", line >> > > 4831, >> > > > in _create_and_inject_local_root\n instance, size, >> > > fallback_from_host)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/driver.py", >> > > line >> > > > 10625, in _try_fetch_image_cache\n >> > > > trusted_certs=instance.trusted_certs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 275, in cache\n *args, **kwargs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 638, in create_image\n prepare_template(target=base, *args, >> > > **kwargs)\n >> > > > File >> "/usr/lib/python3.6/site-packages/oslo_concurrency/lockutils.py", >> > > > line 391, in inner\n return f(*args, **kwargs)\n File >> > > > >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/imagebackend.py", >> > > line >> > > > 271, in fetch_func_sync\n fetch_func(target=target, *args, >> **kwargs)\n >> > > > File >> "/usr/lib/python3.6/site-packages/nova/virt/libvirt/utils.py", line >> > > > 395, in fetch_image\n images.fetch_to_raw(context, image_id, >> target, >> > > > trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/virt/images.py", line 115, in >> > > > fetch_to_raw\n fetch(context, image_href, path_tmp, >> trusted_certs)\n >> > > > File "/usr/lib/python3.6/site-packages/nova/virt/images.py", line >> 106, >> > > in >> > > > fetch\n trusted_certs=trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1300, >> in >> > > > download\n trusted_certs=trusted_certs)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 379, >> in >> > > > download\n _reraise_translated_image_exception(image_id)\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 1031, >> in >> > > > _reraise_translated_image_exception\n raise >> > > > new_exc.with_traceback(exc_trace)\n File >> > > 
> "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 377, >> in >> > > > download\n context, 2, \'data\', args=(image_id,))\n File >> > > > "/usr/lib/python3.6/site-packages/nova/image/glance.py", line 191, >> in >> > > > call\n result = getattr(controller, method)(*args, **kwargs)\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/utils.py", >> line >> > > 670, >> > > > in inner\n return RequestIdProxy(wrapped(*args, **kwargs))\n >> File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/v2/images.py", line >> 255, >> > > in >> > > > data\n resp, body = self.http_client.get(url)\n File >> > > > "/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py", line >> 395, in >> > > > get\n return self.request(url, \'GET\', **kwargs)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 380, >> > > > in request\n return self._handle_response(resp)\n File >> > > > "/usr/lib/python3.6/site-packages/glanceclient/common/http.py", >> line 120, >> > > > in _handle_response\n raise exc.from_response(resp, >> > > > resp.content)\nnova.exception.ImageNotAuthorized: Not authorized for >> > > image >> > > > dd1492d5-17a2-4dc2-a4e3-ec6c99255e4b.\n'} | >> > > >> > > >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Jul 4 13:18:54 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 4 Jul 2022 15:18:54 +0200 Subject: [nova][placement] Spec review day tomorrow Message-ID: Hey members, Just a reminder we'll have a nova/placement spec review day happening on tomorrow July 5th. Sharpen your pens and prepare your specs for review, it would be greatly appreciated. -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From bence.romsics at gmail.com Mon Jul 4 14:31:13 2022
From: bence.romsics at gmail.com (Bence Romsics)
Date: Mon, 4 Jul 2022 16:31:13 +0200
Subject: [neutron] change of API performance from Pike to Yoga
Message-ID:

Hi Neutrinos!

Inspired by Julia Kreger's presentation at the summit [1] I wanted to
gather some ideas about the change in Neutron API performance. For that I
used Rally with Neutron's usual Rally task definition [2]. I measured
against an all-in-one devstack - always running in a same-sized VM,
keeping its local.conf the same between versions as much as possible.
Neutron was configured with ml2/ovs. Measuring other backends would also
be interesting, but first I wanted to keep the config the same as I was
going back to earlier versions as long as possible. Without much pain I
managed to collect data starting from Yoga back to Pike.

You can download all Rally reports in this tarball (6 MiB):
https://drive.google.com/file/d/1TjFV7UWtX_sofjw3_njL6-6ezD7IPmsj/view?usp=sharing

The tarball also contains data about how to reproduce these tests. It is
currently available on my personal Google Drive. I will keep this around
at least until the end of July. I would be happy to upload it to somewhere
else better suited for long-term storage.

Let me also attach a single plot (I hope the mailing list configuration
allows this) that shows the load_duration (actually the average of 3 runs
each) for each Rally scenario by OpenStack release, which I hope is the
single-picture summary of these test runs. However, the Rally reports
contain much more data; feel free to download and browse them. If the
mailing list strips the attachment, the picture is included in the tarball
too.

Cheers,
Bence (rubasov)

[1] https://youtu.be/OqcnXxTbIxk
[2] https://opendev.org/openstack/neutron/src/commit/a9912caf3fa1e258621965ea8c6295a2eac9887c/rally-jobs/task-neutron.yaml
-------------- next part --------------
A non-text attachment was scrubbed...
Name: load_duration.png
Type: image/png
Size: 532330 bytes
Desc: not available
URL:

From vince.mulhollon at springcitysolutions.com Mon Jul 4 15:33:14 2022
From: vince.mulhollon at springcitysolutions.com (Vince Mulhollon)
Date: Mon, 4 Jul 2022 10:33:14 -0500
Subject: A centralized list of usable vs unusable projects?
Message-ID:

Hi,

Can anyone point me to a status board or other format identifying which
projects are uninstallable or unusable, perhaps organized by release?
Specifically for Yoga on Kolla-Ansible, although "in general" would also
be useful.

I have a test cluster, and I'm exercising Yoga using Ubuntu hosts and
Kolla-Ansible, and the online docs imply every project is up and
installable and usable. However, from my notes so far: "everyone knows"
Freezer hasn't been installable for many years and requires an
ElasticSearch version from last decade (every project that uses
ElasticSearch needs a different and incompatible version of ES; honestly,
in general, that's not just an OpenStack phenomenon). Murano has been
uninstallable for years AND hard-crashes all of Horizon if you try to
install it. Monasca has been uninstallable for around a year, as near as I
can tell, due to a crash loop in the log persister. Every Watcher
container crash-loops for no apparent reason after every Kolla-Ansible
installation, although I recently hand-installed Watcher on a separate
Yoga cluster and it "worked", or at least didn't crash-loop; I will
research this in more detail and add/update issues and storyboards. The
docs claim Magnum works great with Docker Swarm, and I have a specific
end-user application for Swarm, so I set up Magnum successfully, then
learned "everyone knows" that Magnum doesn't actually work for Docker
Swarm, so I was very annoyed at that. I haven't even tried
Ceilometer-and-related on Yoga, although if Monasca is dead maybe I should
add it to the testing plan. The vast majority of projects do "just work",
which is super awesome of course.
My point is, I can't find a status board type of page or any sort of centralized list. It seems like it would be incredibly useful, so I'm making my own list of what works and what does not work, for my own use, and I'd feel silly if there's already some service or page or dashboard implementing this effort that I couldn't find. I'd certainly contribute data toward such a status page; as I'm running tests on my test cluster anyway, I may as well share the results. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jul 4 16:27:21 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Jul 2022 11:27:21 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 7 July 2022 at 1500 UTC Message-ID: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com> Hello Everyone, The Technical Committee's next weekly meeting is scheduled for 7 July 2022, at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, 6 July at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From fungi at yuggoth.org Mon Jul 4 16:58:05 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 4 Jul 2022 16:58:05 +0000 Subject: A centralized list of usable vs unusable projects? In-Reply-To: References: Message-ID: <20220704165804.uprqmx3stmdopvrl@yuggoth.org> On 2022-07-04 10:33:14 -0500 (-0500), Vince Mulhollon wrote: > Can anyone point me to a status board or other format identifying > which projects are uninstallable or unusable perhaps organized by > release? [...] There is none, and attempting to create one has proven contentious in the past, since "uninstallable" and "unusable" are more subjective than you might think at first. 
The TC has recently approved a new process for handling "inactive" projects, which may be an indicator of these sorts of symptoms, though whether it's a leading or trailing indicator is hard to know without some initial data points: https://governance.openstack.org/tc/reference/emerging-technology-and-inactive-projects.html > My point is, I can't find a status board type of page, or any sort > of centralized list, it seems like it would be incredibly useful, > so I'm making my own list of what works, and what does not work, > for my own use, and I'd feel silly if there's already some service > or page or dashboard implementing this effort that I couldn't > find. > > I'd certainly contribute data toward such a status page, as I'm > running tests on my test cluster anyway, may as well share the > results. It's certainly not a new idea, and you might be interested in this thread from the first time we tried to do that 7 years ago, for similar reasons: https://lists.openstack.org/pipermail/openstack-operators/2015-June/007181.html The effort started out enthusiastically enough, but once it started to get into subjective criteria about project usability and maturity, it rapidly lost steam and the people involved went on to other things. I'm not trying to say it's a bad idea (far from it), just that it's a lot more complicated of a topic than it might seem on the surface, and there are definitely past experiences we can learn from in order to hopefully not fall into the same traps. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Tue Jul 5 03:00:30 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 04 Jul 2022 22:00:30 -0500 Subject: [all][tc][policy][operator] RBAC discussion policy pop-up next meeting: Tuesday 5 July (biweekly meeting) Message-ID: <181cc4d5a06.fc3bb539143564.3388369513350374560@ghanshyammann.com> Hello Everyone, The RBAC policy pop-up's next meeting is scheduled for tomorrow, 5th July, at 14:00 UTC Meeting details: https://wiki.openstack.org/wiki/Consistent_and_Secure_Default_Policies_Popup_Team#Meeting Feel free to add the topics you would like to discuss in this etherpad: https://etherpad.opendev.org/p/rbac-zed-ptg#L213 -gmann From swogatpradhan22 at gmail.com Tue Jul 5 03:58:07 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 09:28:07 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby Message-ID: Hi, I am trying to set up OpenStack Wallaby using the repo centos-release-openstack-wallaby on top of CentOS Stream 8. I have deployed the undercloud, but the service ironic_pxe_tftp is not starting up. Previously the undercloud deployment was failing; now the undercloud is deployed successfully, but the service is not coming up. 
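A missing-binary failure like the one logged below often comes down to the container image being built for a different OS release than the host. A minimal sketch of a first check, assuming podman, skopeo and jq are installed, and that the container and image names here match this deployment (they may differ):

```shell
# Base OS of the host (ID and VERSION_ID come from the standard os-release file):
. /etc/os-release && echo "host: ${ID} ${VERSION_ID}"

# Base-image label baked into the image ("ubi8" => EL8-based, "ubi9" => EL9-based).
# These need a deployed host and registry access, so they are shown commented out:
#   sudo podman inspect ironic_pxe_tftp --format '{{ index .Config.Labels "name" }}'
#   skopeo inspect docker://quay.io/tripleowallaby/openstack-ironic-pxe:current-tripleo \
#     | jq -r '.Labels.name'
```

If the two disagree (for example an EL9-based image on a CentOS Stream 8 host), the image selection in containers-prepare-parameter.yaml is the first place to look.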
Error log from ironic_pxe_tftp: [root at undercloud ~]# podman logs 23427d845098 /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory /bin/bash: /usr/sbin/in.tftpd: No such file or directory My undercloud config: [DEFAULT] undercloud_hostname = undercloud.taashee.com container_images_file = containers-prepare-parameter.yaml local_ip = 192.168.30.50/24 undercloud_public_host = 192.168.30.39 undercloud_admin_host = 192.168.30.41 undercloud_nameservers = 8.8.8.8 pxe_enabled = true #undercloud_ntp_servers = overcloud_domain_name = taashee.com subnets = ctlplane-subnet local_subnet = ctlplane-subnet #undercloud_service_certificate = generate_service_certificate = true certificate_generation_ca = local local_interface = eno3 inspection_extras = false undercloud_debug = false enable_tempest = false enable_ui = false [auth] [ctlplane-subnet] cidr = 
192.168.30.0/24 dhcp_start = 192.168.30.60 dhcp_end = 192.168.30.100 inspection_iprange = 192.168.30.110,192.168.30.150 gateway = 192.168.30.1 With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 04:41:22 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 13:41:22 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: The error indicates that you are running c9s containers on c8s containers. I'd suggest you check your ContainerImagePrepare parameters and ensure you are pulling the correct image (wallaby + centos 8 stream). On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan wrote: > Hi, > I am trying to setup openstack wallaby using the repo : > centos-release-openstack-wallaby on top of centos 8 stream. > > I have deployed undercloud but the service ironic_pxe_tftp is not starting > up. Previously the undercloud was failing but now the undercloud is > deployed successfully but the service is not coming up. 
> > Error log from ironic_pxe_tftp: > > [root at undercloud ~]# podman logs 23427d845098 > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > /bin/bash: /usr/sbin/in.tftpd: No such file or directory > > My undercloud config: > > [DEFAULT] > undercloud_hostname = undercloud.taashee.com > container_images_file = containers-prepare-parameter.yaml > local_ip = 192.168.30.50/24 > undercloud_public_host = 192.168.30.39 > undercloud_admin_host = 192.168.30.41 > undercloud_nameservers = 8.8.8.8 > pxe_enabled = true > #undercloud_ntp_servers = > overcloud_domain_name = taashee.com > subnets = ctlplane-subnet > local_subnet = ctlplane-subnet > #undercloud_service_certificate = > generate_service_certificate = true > certificate_generation_ca = local > local_interface = eno3 > inspection_extras = false > 
undercloud_debug = false > enable_tempest = false > enable_ui = false > > [auth] > > [ctlplane-subnet] > cidr = 192.168.30.0/24 > dhcp_start = 192.168.30.60 > dhcp_end = 192.168.30.100 > inspection_iprange = 192.168.30.110,192.168.30.150 > gateway = 192.168.30.1 > > With regards, > > Swogat Pradhan > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 04:41:48 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 13:41:48 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: > The error indicates that you are running c9s containers on c8s containers. I mean to say c9s containers on c8s *hosts*. On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami wrote: > The error indicates that you are running c9s containers on c8s containers. > I'd suggest you check your ContainParameterParameters and ensure you are > pulling > the correct image (wallaby + centos 8 stream). > > On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan > wrote: > >> Hi, >> I am trying to setup openstack wallaby using the repo : >> centos-release-openstack-wallaby on top of centos 8 stream. >> >> I have deployed undercloud but the service ironic_pxe_tftp is not >> starting up. Previously the undercloud was failing but now the undercloud >> is deployed successfully but the service is not coming up. 
>> >> Error log from ironic_pxe_tftp: >> >> [root at undercloud ~]# podman logs 23427d845098 >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >> >> My undercloud config: >> >> [DEFAULT] >> undercloud_hostname = undercloud.taashee.com >> container_images_file = containers-prepare-parameter.yaml >> local_ip = 192.168.30.50/24 >> undercloud_public_host = 192.168.30.39 >> undercloud_admin_host = 192.168.30.41 >> undercloud_nameservers = 8.8.8.8 >> pxe_enabled = true >> #undercloud_ntp_servers = >> overcloud_domain_name = taashee.com >> subnets = ctlplane-subnet >> local_subnet = ctlplane-subnet >> #undercloud_service_certificate = >> generate_service_certificate = true >> certificate_generation_ca = local >> 
local_interface = eno3 >> inspection_extras = false >> undercloud_debug = false >> enable_tempest = false >> enable_ui = false >> >> [auth] >> >> [ctlplane-subnet] >> cidr = 192.168.30.0/24 >> dhcp_start = 192.168.30.60 >> dhcp_end = 192.168.30.100 >> inspection_iprange = 192.168.30.110,192.168.30.150 >> gateway = 192.168.30.1 >> >> With regards, >> >> Swogat Pradhan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Jul 5 05:27:26 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 10:57:26 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: I believe that is the issue: the current container parameters file is trying to pull CentOS 9 images. I changed the namespace, but I was unable to find quay.io/tripleowallaby,iiuc on the web; honestly I don't know how to specify that: (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml # Generated with the following on 2022-07-04T13:53:39.943715 # # openstack tripleo container image prepare default --local-push-destination --output-env-file containers-prepare-parameter.yaml # parameter_defaults: ContainerImagePrepare: - push_destination: true set: ceph_alertmanager_image: alertmanager ceph_alertmanager_namespace: quay.ceph.io/prometheus ceph_alertmanager_tag: v0.16.2 ceph_grafana_image: grafana ceph_grafana_namespace: quay.ceph.io/app-sre ceph_grafana_tag: 6.7.4 ceph_image: daemon ceph_namespace: quay.io/ceph ceph_node_exporter_image: node-exporter ceph_node_exporter_namespace: quay.ceph.io/prometheus ceph_node_exporter_tag: v0.17.0 ceph_prometheus_image: prometheus ceph_prometheus_namespace: quay.ceph.io/prometheus ceph_prometheus_tag: v2.7.2 ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 name_prefix: openstack- name_suffix: '' namespace: quay.io/tripleowallaby,iiuc neutron_driver: ovn rhel_containers: false tag: current-tripleo tag_from_label: 
rdo_version Is this how to specify it? On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami wrote: > > The error indicates that you are running c9s containers on c8s > containers. > I mean to say > > c9s containers on c8s *hosts*. > > On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami > wrote: > >> The error indicates that you are running c9s containers on c8s containers. >> I'd suggest you check your ContainParameterParameters and ensure you are >> pulling >> the correct image (wallaby + centos 8 stream). >> >> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >> wrote: >> >>> Hi, >>> I am trying to setup openstack wallaby using the repo : >>> centos-release-openstack-wallaby on top of centos 8 stream. >>> >>> I have deployed undercloud but the service ironic_pxe_tftp is not >>> starting up. Previously the undercloud was failing but now the undercloud >>> is deployed successfully but the service is not coming up. >>> >>> Error log from ironic_pxe_tftp: >>> >>> [root at undercloud ~]# podman logs 23427d845098 >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> 
/bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>> >>> My undercloud config: >>> >>> [DEFAULT] >>> undercloud_hostname = undercloud.taashee.com >>> container_images_file = containers-prepare-parameter.yaml >>> local_ip = 192.168.30.50/24 >>> undercloud_public_host = 192.168.30.39 >>> undercloud_admin_host = 192.168.30.41 >>> undercloud_nameservers = 8.8.8.8 >>> pxe_enabled = true >>> #undercloud_ntp_servers = >>> overcloud_domain_name = taashee.com >>> subnets = ctlplane-subnet >>> local_subnet = ctlplane-subnet >>> #undercloud_service_certificate = >>> generate_service_certificate = true >>> certificate_generation_ca = local >>> local_interface = eno3 >>> inspection_extras = false >>> undercloud_debug = false >>> enable_tempest = false >>> enable_ui = false >>> >>> [auth] >>> >>> [ctlplane-subnet] >>> cidr = 192.168.30.0/24 >>> dhcp_start = 192.168.30.60 >>> dhcp_end = 192.168.30.100 >>> inspection_iprange = 192.168.30.110,192.168.30.150 >>> gateway = 192.168.30.1 >>> >>> With regards, >>> >>> Swogat Pradhan >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 05:35:59 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 14:35:59 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Can you try namespace: quay.io/tripleowallaby instead ? On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan wrote: > i believe that is the issue, the current continer parameters file is > trying to pull centos9 images. 
> i changed the namespace, but i was unable to find the > quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify > that: > (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml > # Generated with the following on 2022-07-04T13:53:39.943715 > # > # openstack tripleo container image prepare default > --local-push-destination --output-env-file containers-prepare-parameter.yaml > # > > parameter_defaults: > ContainerImagePrepare: > - push_destination: true > set: > ceph_alertmanager_image: alertmanager > ceph_alertmanager_namespace: quay.ceph.io/prometheus > ceph_alertmanager_tag: v0.16.2 > ceph_grafana_image: grafana > ceph_grafana_namespace: quay.ceph.io/app-sre > ceph_grafana_tag: 6.7.4 > ceph_image: daemon > ceph_namespace: quay.io/ceph > ceph_node_exporter_image: node-exporter > ceph_node_exporter_namespace: quay.ceph.io/prometheus > ceph_node_exporter_tag: v0.17.0 > ceph_prometheus_image: prometheus > ceph_prometheus_namespace: quay.ceph.io/prometheus > ceph_prometheus_tag: v2.7.2 > ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 > name_prefix: openstack- > name_suffix: '' > namespace: quay.io/tripleowallaby,iiuc > neutron_driver: ovn > rhel_containers: false > tag: current-tripleo > tag_from_label: rdo_version > > Is this how to specify it? > > On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami > wrote: > >> > The error indicates that you are running c9s containers on c8s >> containers. >> I mean to say >> >> c9s containers on c8s *hosts*. >> >> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >> wrote: >> >>> The error indicates that you are running c9s containers on c8s >>> containers. >>> I'd suggest you check your ContainParameterParameters and ensure you are >>> pulling >>> the correct image (wallaby + centos 8 stream). 
>>> >>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >>> wrote: >>> >>>> Hi, >>>> I am trying to setup openstack wallaby using the repo : >>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>> >>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>> starting up. Previously the undercloud was failing but now the undercloud >>>> is deployed successfully but the service is not coming up. >>>> >>>> Error log from ironic_pxe_tftp: >>>> >>>> [root at undercloud ~]# podman logs 23427d845098 >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> >>>> My undercloud config: >>>> >>>> [DEFAULT] >>>> undercloud_hostname = undercloud.taashee.com >>>> 
container_images_file = containers-prepare-parameter.yaml >>>> local_ip = 192.168.30.50/24 >>>> undercloud_public_host = 192.168.30.39 >>>> undercloud_admin_host = 192.168.30.41 >>>> undercloud_nameservers = 8.8.8.8 >>>> pxe_enabled = true >>>> #undercloud_ntp_servers = >>>> overcloud_domain_name = taashee.com >>>> subnets = ctlplane-subnet >>>> local_subnet = ctlplane-subnet >>>> #undercloud_service_certificate = >>>> generate_service_certificate = true >>>> certificate_generation_ca = local >>>> local_interface = eno3 >>>> inspection_extras = false >>>> undercloud_debug = false >>>> enable_tempest = false >>>> enable_ui = false >>>> >>>> [auth] >>>> >>>> [ctlplane-subnet] >>>> cidr = 192.168.30.0/24 >>>> dhcp_start = 192.168.30.60 >>>> dhcp_end = 192.168.30.100 >>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>> gateway = 192.168.30.1 >>>> >>>> With regards, >>>> >>>> Swogat Pradhan >>>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Tue Jul 5 05:38:52 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Tue, 5 Jul 2022 00:38:52 -0500 Subject: [event] OpenInfradays Mexico 2022 (Virtual) In-Reply-To: References: Message-ID: Hello Community, the CFP will close in 6 days; don't forget to submit your proposals. We only need the title and abstracts; the video talk needs to be submitted later on. Remember that this is a virtual event and a great opportunity to share and spread knowledge across the LATAM region. https://events.linuxfoundation.org/about/community/?_sft_lfevent-country=mx https://openinfradays.mx/ Cheers! --- Alvaro Soto Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you. ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. 
On Thu, Jun 23, 2022, 6:13 PM Alvaro Soto wrote: > You're all invited to participate in the CFP for OID-MX22 > https://openinfradays.mx > > Let me know if you have any questions. > > --- > Alvaro Soto Escobar > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Jul 5 05:39:10 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 11:09:10 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: I had used the namespace quay.io/tripleowallaby when I faced this issue, which is why I started this thread. On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami wrote: > Can you try > > namespace: quay.io/tripleowallaby > > instead ? > > On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan > wrote: > >> i believe that is the issue, the current continer parameters file is >> trying to pull centos9 images. 
>> i changed the namespace, but i was unable to find the >> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify >> that: >> (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml >> # Generated with the following on 2022-07-04T13:53:39.943715 >> # >> # openstack tripleo container image prepare default >> --local-push-destination --output-env-file containers-prepare-parameter.yaml >> # >> >> parameter_defaults: >> ContainerImagePrepare: >> - push_destination: true >> set: >> ceph_alertmanager_image: alertmanager >> ceph_alertmanager_namespace: quay.ceph.io/prometheus >> ceph_alertmanager_tag: v0.16.2 >> ceph_grafana_image: grafana >> ceph_grafana_namespace: quay.ceph.io/app-sre >> ceph_grafana_tag: 6.7.4 >> ceph_image: daemon >> ceph_namespace: quay.io/ceph >> ceph_node_exporter_image: node-exporter >> ceph_node_exporter_namespace: quay.ceph.io/prometheus >> ceph_node_exporter_tag: v0.17.0 >> ceph_prometheus_image: prometheus >> ceph_prometheus_namespace: quay.ceph.io/prometheus >> ceph_prometheus_tag: v2.7.2 >> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >> name_prefix: openstack- >> name_suffix: '' >> namespace: quay.io/tripleowallaby,iiuc >> neutron_driver: ovn >> rhel_containers: false >> tag: current-tripleo >> tag_from_label: rdo_version >> >> Is this how to specify it? >> >> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >> wrote: >> >>> > The error indicates that you are running c9s containers on c8s >>> containers. >>> I mean to say >>> >>> c9s containers on c8s *hosts*. >>> >>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>> wrote: >>> >>>> The error indicates that you are running c9s containers on c8s >>>> containers. >>>> I'd suggest you check your ContainParameterParameters and ensure you >>>> are pulling >>>> the correct image (wallaby + centos 8 stream). 
>>>> >>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>> swogatpradhan22 at gmail.com> wrote: >>>> >>>>> Hi, >>>>> I am trying to setup openstack wallaby using the repo : >>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>> >>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>> is deployed successfully but the service is not coming up. >>>>> >>>>> Error log from ironic_pxe_tftp: >>>>> >>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>> >>>>> My undercloud config: >>>>> 
>>>>> [DEFAULT] >>>>> undercloud_hostname = undercloud.taashee.com >>>>> container_images_file = containers-prepare-parameter.yaml >>>>> local_ip = 192.168.30.50/24 >>>>> undercloud_public_host = 192.168.30.39 >>>>> undercloud_admin_host = 192.168.30.41 >>>>> undercloud_nameservers = 8.8.8.8 >>>>> pxe_enabled = true >>>>> #undercloud_ntp_servers = >>>>> overcloud_domain_name = taashee.com >>>>> subnets = ctlplane-subnet >>>>> local_subnet = ctlplane-subnet >>>>> #undercloud_service_certificate = >>>>> generate_service_certificate = true >>>>> certificate_generation_ca = local >>>>> local_interface = eno3 >>>>> inspection_extras = false >>>>> undercloud_debug = false >>>>> enable_tempest = false >>>>> enable_ui = false >>>>> >>>>> [auth] >>>>> >>>>> [ctlplane-subnet] >>>>> cidr = 192.168.30.0/24 >>>>> dhcp_start = 192.168.30.60 >>>>> dhcp_end = 192.168.30.100 >>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>> gateway = 192.168.30.1 >>>>> >>>>> With regards, >>>>> >>>>> Swogat Pradhan >>>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Tue Jul 5 06:29:00 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Tue, 5 Jul 2022 16:29:00 +1000 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Hey, The tripleowallaby containers are all built on ubi8 at the moment: $ skopeo inspect docker://quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq .Labels.name "ubi8" The container image should be ok. If it isn't an environmental issue, we should be seeing the same problem in our CI environments. Wallaby in particular is getting a lot of attention in our CI environments at the moment. Are you able to inspect the container and share the output? 
sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels Brendan Shephard Software Engineer Red Hat APAC 193 N Quay Brisbane City QLD 4000 On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan wrote: > I had used the namespace: quay.io/tripleowallaby where I faced this issue. > Which is why i started this thread. > > On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami > wrote: > >> Can you try >> >> namespace: quay.io/tripleowallaby >> >> instead ? >> >> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >> wrote: >> >>> i believe that is the issue, the current continer parameters file is >>> trying to pull centos9 images. >>> i changed the namespace, but i was unable to find the >>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify >>> that: >>> (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml >>> # Generated with the following on 2022-07-04T13:53:39.943715 >>> # >>> # openstack tripleo container image prepare default >>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>> # >>> >>> parameter_defaults: >>> ContainerImagePrepare: >>> - push_destination: true >>> set: >>> ceph_alertmanager_image: alertmanager >>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>> ceph_alertmanager_tag: v0.16.2 >>> ceph_grafana_image: grafana >>> ceph_grafana_namespace: quay.ceph.io/app-sre >>> ceph_grafana_tag: 6.7.4 >>> ceph_image: daemon >>> ceph_namespace: quay.io/ceph >>> ceph_node_exporter_image: node-exporter >>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>> ceph_node_exporter_tag: v0.17.0 >>> ceph_prometheus_image: prometheus >>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>> ceph_prometheus_tag: v2.7.2 >>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>> name_prefix: openstack- >>> name_suffix: '' >>> namespace: quay.io/tripleowallaby,iiuc >>> neutron_driver: ovn >>> rhel_containers: false >>> tag: current-tripleo >>> tag_from_label: rdo_version >>> >>> Is 
this how to specify it? >>> >>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>> wrote: >>> >>>> > The error indicates that you are running c9s containers on c8s >>>> containers. >>>> I mean to say >>>> >>>> c9s containers on c8s *hosts*. >>>> >>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>> wrote: >>>> >>>>> The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>> are pulling >>>>> the correct image (wallaby + centos 8 stream). >>>>> >>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> Hi, >>>>>> I am trying to setup openstack wallaby using the repo : >>>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>>> >>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>>> is deployed successfully but the service is not coming up. 
>>>>>> >>>>>> Error log from ironic_pxe_tftp: >>>>>> >>>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>> >>>>>> My undercloud config: >>>>>> >>>>>> [DEFAULT] >>>>>> undercloud_hostname = undercloud.taashee.com >>>>>> container_images_file = containers-prepare-parameter.yaml >>>>>> local_ip = 192.168.30.50/24 >>>>>> undercloud_public_host = 192.168.30.39 >>>>>> undercloud_admin_host = 192.168.30.41 >>>>>> undercloud_nameservers = 8.8.8.8 >>>>>> pxe_enabled = true >>>>>> #undercloud_ntp_servers = >>>>>> overcloud_domain_name = taashee.com >>>>>> subnets = ctlplane-subnet 
>>>>>> local_subnet = ctlplane-subnet >>>>>> #undercloud_service_certificate = >>>>>> generate_service_certificate = true >>>>>> certificate_generation_ca = local >>>>>> local_interface = eno3 >>>>>> inspection_extras = false >>>>>> undercloud_debug = false >>>>>> enable_tempest = false >>>>>> enable_ui = false >>>>>> >>>>>> [auth] >>>>>> >>>>>> [ctlplane-subnet] >>>>>> cidr = 192.168.30.0/24 >>>>>> dhcp_start = 192.168.30.60 >>>>>> dhcp_end = 192.168.30.100 >>>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>>> gateway = 192.168.30.1 >>>>>> >>>>>> With regards, >>>>>> >>>>>> Swogat Pradhan >>>>>> >>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From tkajinam at redhat.com Tue Jul 5 06:32:36 2022 From: tkajinam at redhat.com (Takashi Kajinami) Date: Tue, 5 Jul 2022 15:32:36 +0900 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Looking at the latest wallaby code, it seems we use dnsmasq instead of tftpd server[1] even for CentOS 8 and I guess you are still using the old version. Please check whether the following patch is included. [1] https://review.opendev.org/c/openstack/tripleo-heat-templates/+/809213 . We've removed tftp-server from ironic-conductor image by[2] So the latest container does not include the binary even if you are using the correct one. [2] https://review.opendev.org/c/openstack/tripleo-common/+/812690 On Tue, Jul 5, 2022 at 3:29 PM Brendan Shephard wrote: > Hey, > > The tripleowallaby containers are all built on ubi8 at the moment: > ? skopeo inspect docker:// > quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq > .Labels.name > "ubi8" > > The container image should be ok. If it isn't an environmental issue, we > should be seeing the same problem in our CI environments. Wallaby in > particular is getting a lot of attention in our CI environments at the > moment. > > Are you able to inspect the container and share the output? 
> sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels > > > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan > wrote: > >> I had used the namespace: quay.io/tripleowallaby where I faced this >> issue. >> Which is why i started this thread. >> >> On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami >> wrote: >> >>> Can you try >>> >>> namespace: quay.io/tripleowallaby >>> >>> instead ? >>> >>> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >>> wrote: >>> >>>> i believe that is the issue, the current continer parameters file is >>>> trying to pull centos9 images. >>>> i changed the namespace, but i was unable to find the >>>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to >>>> specify that: >>>> (undercloud) [stack at undercloud ~]$ cat >>>> containers-prepare-parameter.yaml >>>> # Generated with the following on 2022-07-04T13:53:39.943715 >>>> # >>>> # openstack tripleo container image prepare default >>>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>>> # >>>> >>>> parameter_defaults: >>>> ContainerImagePrepare: >>>> - push_destination: true >>>> set: >>>> ceph_alertmanager_image: alertmanager >>>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>>> ceph_alertmanager_tag: v0.16.2 >>>> ceph_grafana_image: grafana >>>> ceph_grafana_namespace: quay.ceph.io/app-sre >>>> ceph_grafana_tag: 6.7.4 >>>> ceph_image: daemon >>>> ceph_namespace: quay.io/ceph >>>> ceph_node_exporter_image: node-exporter >>>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>>> ceph_node_exporter_tag: v0.17.0 >>>> ceph_prometheus_image: prometheus >>>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>>> ceph_prometheus_tag: v2.7.2 >>>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>>> name_prefix: openstack- >>>> name_suffix: '' >>>> namespace: quay.io/tripleowallaby,iiuc >>>> 
neutron_driver: ovn >>>> rhel_containers: false >>>> tag: current-tripleo >>>> tag_from_label: rdo_version >>>> >>>> Is this how to specify it? >>>> >>>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>>> wrote: >>>> >>>>> > The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I mean to say >>>>> >>>>> c9s containers on c8s *hosts*. >>>>> >>>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> The error indicates that you are running c9s containers on c8s >>>>>> containers. >>>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>>> are pulling >>>>>> the correct image (wallaby + centos 8 stream). >>>>>> >>>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> I am trying to setup openstack wallaby using the repo : >>>>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>>>> >>>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>>>> is deployed successfully but the service is not coming up. 
>>>>>>> >>>>>>> Error log from ironic_pxe_tftp: >>>>>>> >>>>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> >>>>>>> My undercloud config: >>>>>>> >>>>>>> [DEFAULT] >>>>>>> undercloud_hostname = undercloud.taashee.com >>>>>>> container_images_file = containers-prepare-parameter.yaml >>>>>>> local_ip = 192.168.30.50/24 >>>>>>> undercloud_public_host = 192.168.30.39 >>>>>>> undercloud_admin_host = 192.168.30.41 >>>>>>> undercloud_nameservers = 8.8.8.8 >>>>>>> pxe_enabled = true >>>>>>> #undercloud_ntp_servers = >>>>>>> overcloud_domain_name = 
taashee.com >>>>>>> subnets = ctlplane-subnet >>>>>>> local_subnet = ctlplane-subnet >>>>>>> #undercloud_service_certificate = >>>>>>> generate_service_certificate = true >>>>>>> certificate_generation_ca = local >>>>>>> local_interface = eno3 >>>>>>> inspection_extras = false >>>>>>> undercloud_debug = false >>>>>>> enable_tempest = false >>>>>>> enable_ui = false >>>>>>> >>>>>>> [auth] >>>>>>> >>>>>>> [ctlplane-subnet] >>>>>>> cidr = 192.168.30.0/24 >>>>>>> dhcp_start = 192.168.30.60 >>>>>>> dhcp_end = 192.168.30.100 >>>>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>>>> gateway = 192.168.30.1 >>>>>>> >>>>>>> With regards, >>>>>>> >>>>>>> Swogat Pradhan >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Tue Jul 5 06:37:47 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Tue, 5 Jul 2022 12:07:47 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Hi, Here is the output as requested: [root at undercloud ~]# sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels { "architecture": "x86_64", "build-date": "2020-09-01T19:43:46.041620", "com.redhat.build-host": "cpt-1008.osbs.prod.upshift.rdu2.redhat.com", "com.redhat.component": "ubi8-container", "com.redhat.license_terms": " https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI", "config_data": "{'command': ['/bin/bash', '-c', 'BIND_HOST=$(hiera ironic::pxe::tftp_bind_host -c /etc/puppet/hiera.yaml); /usr/sbin/in.tftpd --foreground --user root --address $BIND_HOST:69 --map-file /var/lib/ironic/tftpboot/map-file /var/lib/ironic/tftpboot'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'TRIPLEO_CONFIG_HASH': '669301f635becb3ecffd248a4ac56f35'}, 'healthcheck': {'test': '/openstack/healthcheck'}, 'image': ' undercloud.ctlplane.taashee.com:8787/tripleowallaby/openstack-ironic-pxe:current-tripleo', 'net': 'host', 'privileged': False, 
'restart': 'always', 'start_order': 90, 'volumes': ['/etc/hosts:/etc/hosts:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/extracted:/etc/pki/ca-trust/extracted:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/etc/pki/tls/certs/ca-bundle.crt:/etc/pki/tls/certs/ca-bundle.crt:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/pki/tls/cert.pem:/etc/pki/tls/cert.pem:ro', '/dev/log:/dev/log', '/etc/puppet:/etc/puppet:ro', '/var/lib/kolla/config_files/ironic_pxe_tftp.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/config-data/puppet-generated/ironic:/var/lib/kolla/config_files/src:ro', '/var/lib/ironic:/var/lib/ironic/:shared,z', '/var/log/containers/ironic:/var/log/ironic:z', '/var/log/containers/httpd/ironic-pxe:/var/log/httpd:z']}", "config_id": "tripleo_step4", "container_name": "ironic_pxe_tftp", "description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.", "distribution-scope": "public", "io.buildah.version": "1.19.9", "io.k8s.description": "The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly.", "io.k8s.display-name": "Red Hat Universal Base Image 8", "io.openshift.expose-services": "", "io.openshift.tags": "base rhel8", "maintainer": "OpenStack TripleO team", "managed_by": "tripleo_ansible", "name": "ubi8", "release": "347", "summary": "Provides the latest release of Red Hat Universal Base Image 8.", "tcib_managed": "true", "url": " https://access.redhat.com/containers/#/registry.access.redhat.com/ubi8/images/8.2-347 ", "vcs-ref": "663db861f0ff7a9c526c1c169a62c14c01a32dcc", "vcs-type": "git", "vendor": "Red Hat, Inc.", "version": "8.2" } On Tue, Jul 5, 2022 at 11:59 AM Brendan Shephard wrote: > Hey, > > The tripleowallaby containers are all built on ubi8 at the moment: > ? skopeo inspect docker:// > quay.io/tripleowallaby/openstack-ironic-base:current-tripleo | jq > .Labels.name > "ubi8" > > The container image should be ok. If it isn't an environmental issue, we > should be seeing the same problem in our CI environments. Wallaby in > particular is getting a lot of attention in our CI environments at the > moment. > > Are you able to inspect the container and share the output? > sudo podman inspect ironic_pxe_tftp | jq .[].Config.Labels > > > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Tue, Jul 5, 2022 at 3:39 PM Swogat Pradhan > wrote: > >> I had used the namespace: quay.io/tripleowallaby where I faced this >> issue. >> Which is why i started this thread. >> >> On Tue, Jul 5, 2022 at 11:06 AM Takashi Kajinami >> wrote: >> >>> Can you try >>> >>> namespace: quay.io/tripleowallaby >>> >>> instead ? >>> >>> On Tue, Jul 5, 2022 at 2:27 PM Swogat Pradhan >>> wrote: >>> >>>> i believe that is the issue, the current continer parameters file is >>>> trying to pull centos9 images. 
>>>> i changed the namespace, but i was unable to find the >>>> quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to >>>> specify that: >>>> (undercloud) [stack at undercloud ~]$ cat >>>> containers-prepare-parameter.yaml >>>> # Generated with the following on 2022-07-04T13:53:39.943715 >>>> # >>>> # openstack tripleo container image prepare default >>>> --local-push-destination --output-env-file containers-prepare-parameter.yaml >>>> # >>>> >>>> parameter_defaults: >>>> ContainerImagePrepare: >>>> - push_destination: true >>>> set: >>>> ceph_alertmanager_image: alertmanager >>>> ceph_alertmanager_namespace: quay.ceph.io/prometheus >>>> ceph_alertmanager_tag: v0.16.2 >>>> ceph_grafana_image: grafana >>>> ceph_grafana_namespace: quay.ceph.io/app-sre >>>> ceph_grafana_tag: 6.7.4 >>>> ceph_image: daemon >>>> ceph_namespace: quay.io/ceph >>>> ceph_node_exporter_image: node-exporter >>>> ceph_node_exporter_namespace: quay.ceph.io/prometheus >>>> ceph_node_exporter_tag: v0.17.0 >>>> ceph_prometheus_image: prometheus >>>> ceph_prometheus_namespace: quay.ceph.io/prometheus >>>> ceph_prometheus_tag: v2.7.2 >>>> ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 >>>> name_prefix: openstack- >>>> name_suffix: '' >>>> namespace: quay.io/tripleowallaby,iiuc >>>> neutron_driver: ovn >>>> rhel_containers: false >>>> tag: current-tripleo >>>> tag_from_label: rdo_version >>>> >>>> Is this how to specify it? >>>> >>>> On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami >>>> wrote: >>>> >>>>> > The error indicates that you are running c9s containers on c8s >>>>> containers. >>>>> I mean to say >>>>> >>>>> c9s containers on c8s *hosts*. >>>>> >>>>> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >>>>> wrote: >>>>> >>>>>> The error indicates that you are running c9s containers on c8s >>>>>> containers. >>>>>> I'd suggest you check your ContainParameterParameters and ensure you >>>>>> are pulling >>>>>> the correct image (wallaby + centos 8 stream). 
>>>>>> >>>>>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> Hi, >>>>>>> I am trying to setup openstack wallaby using the repo : >>>>>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>>>>> >>>>>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>>>>> starting up. Previously the undercloud was failing but now the undercloud >>>>>>> is deployed successfully but the service is not coming up. >>>>>>> >>>>>>> Error log from ironic_pxe_tftp: >>>>>>> >>>>>>> [root at undercloud ~]# podman logs 23427d845098 >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>>>>> /bin/bash: 
/usr/sbin/in.tftpd: No such file or directory >>>>>>> >>>>>>> My undercloud config: >>>>>>> >>>>>>> [DEFAULT] >>>>>>> undercloud_hostname = undercloud.taashee.com >>>>>>> container_images_file = containers-prepare-parameter.yaml >>>>>>> local_ip = 192.168.30.50/24 >>>>>>> undercloud_public_host = 192.168.30.39 >>>>>>> undercloud_admin_host = 192.168.30.41 >>>>>>> undercloud_nameservers = 8.8.8.8 >>>>>>> pxe_enabled = true >>>>>>> #undercloud_ntp_servers = >>>>>>> overcloud_domain_name = taashee.com >>>>>>> subnets = ctlplane-subnet >>>>>>> local_subnet = ctlplane-subnet >>>>>>> #undercloud_service_certificate = >>>>>>> generate_service_certificate = true >>>>>>> certificate_generation_ca = local >>>>>>> local_interface = eno3 >>>>>>> inspection_extras = false >>>>>>> undercloud_debug = false >>>>>>> enable_tempest = false >>>>>>> enable_ui = false >>>>>>> >>>>>>> [auth] >>>>>>> >>>>>>> [ctlplane-subnet] >>>>>>> cidr = 192.168.30.0/24 >>>>>>> dhcp_start = 192.168.30.60 >>>>>>> dhcp_end = 192.168.30.100 >>>>>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>>>>> gateway = 192.168.30.1 >>>>>>> >>>>>>> With regards, >>>>>>> >>>>>>> Swogat Pradhan >>>>>>> >>>>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Tue Jul 5 06:55:31 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Tue, 5 Jul 2022 11:25:31 +0430 Subject: import external compute Message-ID: hello is there any way that i can import an already existing kolla-ansible compute to my openstack ? migrating instances manually take a lot of time from me -------------- next part -------------- An HTML attachment was scrubbed... URL: From hongbin034 at gmail.com Tue Jul 5 07:24:08 2022 From: hongbin034 at gmail.com (Hongbin Lu) Date: Tue, 5 Jul 2022 15:24:08 +0800 Subject: Zun connector for persistent shared files system Manila In-Reply-To: References: Message-ID: Hi Vaibhav, In current state, only Cinder is supported. 
In theory, Manila can be added as another storage backend. I will check if anyone is interested in contributing this feature. Best regards, Hongbin On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote: > Hi, > > I am using zun for running containers and managing them. > I also deployed cinder as persistent storage, and it is working fine. > > I want my Manila shares to be mounted on containers managed by > Zun. > > I can see a Fuxi project and driver for this but it is discontinued now. > > With Cinder only one container can use the storage volume at a time. If I > want to have a shared file system to be mounted on multiple containers > simultaneously, it is not possible with cinder. > > Is there any alternative to Fuxi? Is there any other mechanism to use > docker Volume support for NFS as shown in the link below? > https://docs.docker.com/storage/volumes/ > > Please advise and give a suggestion. > > Regards, > Vaibhav > > > From skaplons at redhat.com Tue Jul 5 07:38:02 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 05 Jul 2022 09:38:02 +0200 Subject: [neutron] change of API performance from Pike to Yoga In-Reply-To: References: Message-ID: <23344295.OIdGMMxvHE@p1> Hi, On Monday, 4 July 2022 16:31:13 CEST, Bence Romsics wrote: > Hi Neutrinos! > > Inspired by Julia Kreger's presentation on the summit [1] I wanted to > gather some ideas about the change in Neutron API performance. > > For that I used Rally with Neutron's usual Rally task definition [2]. > I measured against an all-in-one devstack - running always in a same > sized VM, keeping its local.conf the same between versions as much as > possible. Neutron was configured with ml2/ovs. Measuring other > backends would also be interesting, but first I wanted to keep the > config the same as I was going back to earlier versions as long as > possible.
> > Without much pain I managed to collect data starting from Yoga back to Pike. > > You can download all Rally reports in this tarball (6 MiB): > https://drive.google.com/file/d/1TjFV7UWtX_sofjw3_njL6-6ezD7IPmsj/view?usp=sharing > > The tarball also contains data about how to reproduce these tests. It > is currently available at my personal Google Drive. I will keep this > around at least to the end of July. I would be happy to upload it to > somewhere else better suited for long term storage. > > Let me also attach a single plot (I hope the mailing list > configuration allows this) that shows the load_duration (actually the > average of 3 runs each) for each Rally scenario by OpenStack release. > Which I hope is the single picture summary of these test runs. However > the Rally reports contain much more data, feel free to download and > browse them. If the mailing list strips the attachment, the picture is > included in the tarball too. > > Cheers, > Bence (rubasov) > > [1] https://youtu.be/OqcnXxTbIxk > [2] https://opendev.org/openstack/neutron/src/commit/a9912caf3fa1e258621965ea8c6295a2eac9887c/rally-jobs/task-neutron.yaml > Thx Bence for that. So from just a brief look at the load_duration.png file it seems that we are improving API performance in the last cycles :) I was also thinking about doing something similar to what Julia described in Berlin (but I still haven't had time for it). But I was thinking that instead of using rally, maybe we can do something similar to what Ironic is doing and have some simple script which will populate the neutron db with many resources, like e.g. 2-3k ports/networks/trunks etc., and then measure the time of e.g. doing a "list" of those resources. > That way we will IMHO measure only neutron API performance and Neutron - DB interactions, without relying on the backends and other components, like e.g. Nova to spawn an actual VM. Wdyt about it? Is it worth doing, or would it be better to rely on rally only?
-- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From lokendrarathour at gmail.com Tue Jul 5 08:28:29 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 5 Jul 2022 13:58:29 +0530 Subject: Ironic pxe tftp service failed | Tripleo wallaby In-Reply-To: References: Message-ID: Also Swogat, in our case, we made these container images offline, it made me stay with one stable release for my research. maybe later images got some changes comparing older one. Best Regards, Lokendra On Tue, Jul 5, 2022 at 10:57 AM Swogat Pradhan wrote: > i believe that is the issue, the current continer parameters file is > trying to pull centos9 images. > i changed the namespace, but i was unable to find the > quay.io/tripleowallaby,iiuc in web, honestly i don't know ow to specify > that: > (undercloud) [stack at undercloud ~]$ cat containers-prepare-parameter.yaml > # Generated with the following on 2022-07-04T13:53:39.943715 > # > # openstack tripleo container image prepare default > --local-push-destination --output-env-file containers-prepare-parameter.yaml > # > > parameter_defaults: > ContainerImagePrepare: > - push_destination: true > set: > ceph_alertmanager_image: alertmanager > ceph_alertmanager_namespace: quay.ceph.io/prometheus > ceph_alertmanager_tag: v0.16.2 > ceph_grafana_image: grafana > ceph_grafana_namespace: quay.ceph.io/app-sre > ceph_grafana_tag: 6.7.4 > ceph_image: daemon > ceph_namespace: quay.io/ceph > ceph_node_exporter_image: node-exporter > ceph_node_exporter_namespace: quay.ceph.io/prometheus > ceph_node_exporter_tag: v0.17.0 > ceph_prometheus_image: prometheus > ceph_prometheus_namespace: quay.ceph.io/prometheus > ceph_prometheus_tag: v2.7.2 > ceph_tag: v6.0.4-stable-6.0-pacific-centos-8-x86_64 > name_prefix: openstack- > name_suffix: '' > 
namespace: quay.io/tripleowallaby,iiuc > neutron_driver: ovn > rhel_containers: false > tag: current-tripleo > tag_from_label: rdo_version > > Is this how to specify it? > > On Tue, Jul 5, 2022 at 10:12 AM Takashi Kajinami > wrote: > >> > The error indicates that you are running c9s containers on c8s >> containers. >> I mean to say >> >> c9s containers on c8s *hosts*. >> >> On Tue, Jul 5, 2022 at 1:41 PM Takashi Kajinami >> wrote: >> >>> The error indicates that you are running c9s containers on c8s >>> containers. >>> I'd suggest you check your ContainParameterParameters and ensure you are >>> pulling >>> the correct image (wallaby + centos 8 stream). >>> >>> On Tue, Jul 5, 2022 at 1:12 PM Swogat Pradhan >>> wrote: >>> >>>> Hi, >>>> I am trying to setup openstack wallaby using the repo : >>>> centos-release-openstack-wallaby on top of centos 8 stream. >>>> >>>> I have deployed undercloud but the service ironic_pxe_tftp is not >>>> starting up. Previously the undercloud was failing but now the undercloud >>>> is deployed successfully but the service is not coming up. 
>>>> >>>> Error log from ironic_pxe_tftp: >>>> >>>> [root at undercloud ~]# podman logs 23427d845098 >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> /bin/bash: /usr/sbin/in.tftpd: No such file or directory >>>> >>>> My undercloud config: >>>> >>>> [DEFAULT] >>>> undercloud_hostname = undercloud.taashee.com >>>> container_images_file = containers-prepare-parameter.yaml >>>> local_ip = 192.168.30.50/24 >>>> undercloud_public_host = 192.168.30.39 >>>> undercloud_admin_host = 192.168.30.41 >>>> undercloud_nameservers = 8.8.8.8 >>>> pxe_enabled = true >>>> #undercloud_ntp_servers = >>>> overcloud_domain_name = taashee.com >>>> subnets = ctlplane-subnet >>>> local_subnet = ctlplane-subnet >>>> #undercloud_service_certificate = >>>> 
generate_service_certificate = true >>>> certificate_generation_ca = local >>>> local_interface = eno3 >>>> inspection_extras = false >>>> undercloud_debug = false >>>> enable_tempest = false >>>> enable_ui = false >>>> >>>> [auth] >>>> >>>> [ctlplane-subnet] >>>> cidr = 192.168.30.0/24 >>>> dhcp_start = 192.168.30.60 >>>> dhcp_end = 192.168.30.100 >>>> inspection_iprange = 192.168.30.110,192.168.30.150 >>>> gateway = 192.168.30.1 >>>> >>>> With regards, >>>> >>>> Swogat Pradhan >>>> >>> -- From sergey.drozdov.dev at gmail.com Tue Jul 5 08:35:50 2022 From: sergey.drozdov.dev at gmail.com (Sergey Drozdov) Date: Tue, 5 Jul 2022 09:35:50 +0100 Subject: QoS Cinder, Zed Release Message-ID: To whom it may concern, I am helping a colleague of mine with the following pieces of work: 820027 (https://review.opendev.org/c/openstack/cinder/+/820027 ), 820030 (https://review.opendev.org/c/openstack/cinder-specs/+/820030 ). I was wondering whether it is not too late to include the aforementioned within the Zed release? Is there anyone who can advise on this matter? Best Regards, Sergey From smooney at redhat.com Tue Jul 5 10:30:54 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 05 Jul 2022 11:30:54 +0100 Subject: import external compute In-Reply-To: References: Message-ID: <54f7dd504e71c190dabe765d186ee52438acd0ce.camel@redhat.com> On Tue, 2022-07-05 at 11:25 +0430, Parsa Aminian wrote: > hello > is there any way that i can import an already existing kolla-ansible > compute to my openstack ? migrating instances manually take a lot of time > from me Not really, no. Nova does not have a way to import an existing compute and, more importantly, the instances from a different deployment.
We can't assume the flavor, for example, would be the same, so an import of a compute is a non-trivial thing: you would be importing a compute and all the instances on it, which belong to multiple tenants and consume resources from a different Glance/Neutron/Cinder etc. than the new cloud.

So there is no simple way to import the compute, but you could perhaps look at os-migrate to "import" or migrate the instances. os-migrate is intended to automate moving instances between clouds, but it's not an entirely lossless process, as IPs will change in the process.

https://github.com/os-migrate/os-migrate

From geguileo at redhat.com Tue Jul 5 11:27:13 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Tue, 5 Jul 2022 13:27:13 +0200
Subject: [cinder] Re: QoS Cinder, Zed Release
In-Reply-To: 
References: 
Message-ID: <20220705112713.savzcsfir2oppzs7@localhost>

On 05/07, Sergey Drozdov wrote:
> To whom it may concern,
>
> I am helping a colleague of mine with the following pieces of work: 820027 (https://review.opendev.org/c/openstack/cinder/+/820027), 820030 (https://review.opendev.org/c/openstack/cinder-specs/+/820030). I was wondering whether it is not too late to include the aforementioned within the Zed release? Is there anyone who can advise on this matter?
>
> Best Regards,
> Sergey

Hi Sergey,

Cinder is in spec freeze, and the spec freeze exception request period ended last Friday. That said, I personally don't see much of a problem in merging this particular spec later; I don't even think a spec is really necessary here, since it's just implementing a standard feature, but that's just my opinion.

I have reviewed both the spec and the patch and added some comments.

Cheers,
Gorka.
From bence.romsics at gmail.com Tue Jul 5 12:19:43 2022
From: bence.romsics at gmail.com (Bence Romsics)
Date: Tue, 5 Jul 2022 14:19:43 +0200
Subject: [neutron] change of API performance from Pike to Yoga
In-Reply-To: <23344295.OIdGMMxvHE@p1>
References: <23344295.OIdGMMxvHE@p1>
Message-ID: 

Hi,

> So from just a brief look at the load_duration.png file it seems that we have been improving API performance in the last cycles :)

I believe the same. :-)

> I was also thinking about doing something similar to what Julia described in Berlin (but I still haven't had time for it). But I was thinking that instead of using rally, maybe we can do something similar to what Ironic is doing and have some simple script which will populate the neutron db with many resources, e.g. 2-3k ports/networks/trunks etc., and then measure the time of e.g. doing a "list" of those resources.
> That way we will IMHO measure only neutron API performance and Neutron - DB interactions, without relying on the backends and other components, like e.g. Nova spawning an actual VM. Wdyt about it? Is it worth doing, or would it be better to rely on rally only?

I think both approaches have their uses. These rally reports are hopefully useful for users of the Neutron API, users planning an upgrade, or as feedback for maintainers who worked on performance-related issues in the last few cycles. But rally reports do not give much information on where to look when we want to make further improvements. I believe Julia's approach can be used to narrow down or even identify where to make further code changes. She also targeted testing _at scale_, which our current rally tests don't do.

In short, both approaches have their uses, and rally tests probably cannot (easily) replace what the Ironic team did with the tests Julia described in her presentation.
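The "populate the DB, then time the list call" idea could start as small as a timing helper around the list call; a minimal sketch (the `client.list_ports()` call in the comment is a stand-in, not a real neutron client):

```python
import time


def time_list(list_fn, repeats=3):
    """Call list_fn several times and return the fastest wall-clock run.

    Taking the best of a few runs smooths out warm-up and caching noise.
    list_fn is any zero-argument callable, e.g. a hypothetical
    lambda: client.list_ports() against a DB pre-populated with ~3k ports.
    """
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        list_fn()
        best = min(best, time.perf_counter() - start)
    return best


# Stand-in workload instead of a real API call:
print("best run: %.6fs" % time_list(lambda: sorted(range(100_000))))
```

Comparing the same measurement across releases would then isolate API/DB time without the backend or Nova in the loop.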
Cheers,
Bence

From adivya1.singh at gmail.com Tue Jul 5 12:52:52 2022
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Tue, 5 Jul 2022 18:22:52 +0530
Subject: Regarding Policy.json entries for glance image update not working for a user
In-Reply-To: 
References: 
Message-ID: 

Hi Brian,

Regarding the policy.json: it works fine for the 3 controllers that each have an individual Glance container. But I have another scenario, where only one controller holds the Glance image; when I follow the same steps there, it fails with error code 403.

Regards
Adivya Singh

On Wed, Jun 15, 2022 at 2:04 AM Brian Rosmaita wrote:
> On 6/14/22 2:18 PM, Adivya Singh wrote:
> > Hi Takashi,
> >
> > when a user uploads an image as a member, the image will be set to
> > private.
> >
> > This is what he is asking for: access to make it public. The above rule
> > applies only to public images
>
> Alan and Takashi have both given you good advice:
>
> - By default, Glance assumes that your custom policy file is named
> "policy.yaml". If it doesn't have that name, Glance will assume it does
> not exist and will use the defaults defined in code. You can change the
> filename glance will look for in your glance-api.conf -- look for
> [oslo_policy]/policy_file
>
> - We recommend that you use YAML instead of JSON to write your policy
> file because YAML allows comments, which you will find useful in
> documenting any changes you make to the file
>
> - You want to keep the permissions on modify_image at their default
> value, because otherwise users won't be able to do simple things like
> add image properties to their own images
>
> - Some image properties can affect the system or other users. Glance
> will not allow *any* user to modify some system properties (for example,
> 'id', 'status'), and it requires additional permission along with
> modify_image to set 'public' or 'community' for image visibility.
> > - It's also possible to configure property protections to require > additional permission to CRUD specific properties (the default setting > is *not* to do this). > > For your particular use case, where you want a specific user to be able > to publicize_image, I would encourage you to think more carefully about > what exactly you want to accomplish. Traditionally, images with > 'public' visibility are provided by the cloud operator, and this gives > image consumers some confidence that there's nothing malicious on the > image. Public images are accessible to all users, and they will show up > in the default image-list call for all users, so if a public image > contains something nasty, it can spread very quickly. > > Glance provides four levels of image visibility: > > - private: only visible to users in the project that owns the image > > - shared: visible to users in the project that owns the image *plus* any > projects that are added to the image as "members". (A shared image with > no members is effectively a private image.) See [0] for info about how > image sharing is designed and what API calls are associated with it. > There are a bunch of policies around this; the defaults are basically > what you'd expect, with the image owner being able to add and delete > members, and image members being able to 'accept' or 'reject' shared > images. > > - community: accessible to everyone, but only visible if you look for > them. See [1] for an explanation of what that means. The ability to > set 'community' visibility on an image is controlled by the > "communitize_image" policy (default is admin-or-owner). > > - public: accessible to everyone, and easily visible to all users. > Controlled by the "publicize_image" policy (default is admin-only). 
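If, after weighing the visibility options above, you still decide to widen publicize_image for a specific role, a hypothetical policy.yaml override could look like the following (note the "role:admin or ..." pattern mentioned later in the thread, so admins keep the permission; 'user' is the custom role from this thread):

```yaml
# /etc/glance/policy.yaml -- hypothetical override: leave modify_image
# at its default and only widen the publicize/communitize rules.
"publicize_image": "role:admin or role:user"
"communitize_image": "role:admin or role:user"
```

As advised above, test such an override carefully so that no one beyond the intended role gains the permission.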
> > You're running your own cloud, so you can configure things however you > like, but I encourage you to think carefully before handing out > publicize_image permission, and consider whether one of the other > visibilities can accomplish what you want. > > For more info, the introductory section on "Images" in the api-ref [2] > has a useful discussion of image properties and image visibility. > > The final thing I want to stress is that you should be sure to test > carefully any policies you define in a custom policy file. You are > actually having a good problem, that is, someone can't do something you > would like them to. The way worse problem happens when in addition to > that someone being able to do what you want them to, a whole bunch of > other users can also do that same thing. > > OK, so to get to your particular issue: > > - you don't want to change the "modify_image" policy in the way you > proposed in your email, because no one (other than the person having the > 'user role) will be able to do any kind of image updates. > > - if you decide to give that user publicize_image permissions, be > careful how you do it. For example, > "publicize_image": "role:user" > won't allow an admin to make images public (unless you also give each > admin the 'user' role). If you look at most of the policies in the Xena > policy.yaml.sample, they begin "role:admin or ...". > > - the reason you were seeing the 403 when you tried to do > openstack image set --public > as the user with the 'user' property is that you were allowed to > modify_image but when you tried to change the visibility, you did not > have permission (because the default for that is role:admin) > > Hope this helps! Once you get this figured out, you may want to put up > a patch to update the Glance documentation around policies. I think > everything said above is in there somewhere, but it may not be in the > most obvious places. > > Actually, there is one more thing. 
The above all applies to Xena, but > there's been some work around policies in Yoga and more happening in > Zed, so be sure to read the Glance release notes when you eventually > upgrade. > > > [0] > > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html > [1] > > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html#sharing-images-with-all-users > [2] https://docs.openstack.org/api-ref/image/v2/index.html#images > > > > > regards > > Adivya Singh > > > > On Tue, Jun 14, 2022 at 10:54 AM Takashi Kajinami > > wrote: > > > > Glance has a separate policy rule (publicize_image) for > > creating/updating public images., > > and you should define that policy rule instead of modify_image. > > > > https://docs.openstack.org/glance/xena/admin/policies.html > > > > ~~~ > > |publicize_image| - Create or update public images > > ~~~ > > > > AFAIK The modify_image policy defaults to rule:default and is > > allowed for any users > > as long as the target image is owned by that user. > > > > > > On Tue, Jun 14, 2022 at 2:01 PM Adivya Singh > > > wrote: > > > > Hi Brian, > > > > Please find the response > > > > 1> i am using Xena release version 24.0.1 > > > > Now the scenario is line below, my customer wants to have > > their login access on setting up the properties of an image > > to the public. now what i did is > > > > 1> i created a role in openstack using the admin credential > > name as "user" > > 2> i assigned that user to a role user. 
> > 3> i assigned those user to those project id, which they > > want to access as a user role > > > > Then i went to Glance container which is controller by lxc > > and made a policy.yaml file as below > > > > root at aio1-glance-container-724aa778:/etc/glance# cat > policy.yaml > > > > "modify_image": "role:user" > > > > then i went to utility container and try to set the > > properties of a image using openstack command > > > > openstack image set --public > > > > and then i got this error > > > > HTTP 403 Forbidden: You are not authorized to complete > > publicize_image action. > > > > Even when i am trying the upload image with this user , i > > get the above error only > > > > export OS_ENDPOINT_TYPE=internalURL > > export OS_INTERFACE=internalURL > > export OS_USERNAME=adsingh > > export OS_PASSWORD='adsingh' > > export OS_PROJECT_NAME=adsingh > > export OS_TENANT_NAME=adsingh > > export OS_AUTH_TYPE=password > > export OS_AUTH_URL=https://:5000/v3 > > export OS_NO_CACHE=1 > > export OS_USER_DOMAIN_NAME=Default > > export OS_PROJECT_DOMAIN_NAME=Default > > export OS_REGION_NAME=RegionOne > > > > Regards > > Adivya Singh > > > > > > > > On Mon, Jun 13, 2022 at 6:41 PM Alan Bishop > > > wrote: > > > > > > > > On Mon, Jun 13, 2022 at 6:00 AM Brian Rosmaita > > > > wrote: > > > > On 6/13/22 8:29 AM, Adivya Singh wrote: > > > hi Team, > > > > > > Any thoughts on this > > > > H Adivya, > > > > Please supply some more information, for example: > > > > - which openstack release you are using > > - the full API request you are making to modify the > > image > > - the full API response you receive > > - whether the user with "role:user" is in the same > > project that owns the > > image > > - debug level log extract for this call if you have > it > > - anything else that could be relevant, for example, > > have you modified > > any other policies, and if so, what values are you > > using now? 
> >
> > Also bear in mind that the default policy_file name is
> > "policy.yaml" (not .json). You either
> > need to provide a policy.yaml file, or override the
> > policy_file setting if you really want to
> > use policy.json.
> >
> > Alan
> >
> > cheers,
> > brian
> >
> > > Regards
> > > Adivya Singh
> > >
> > > On Sat, Jun 11, 2022 at 12:40 AM Adivya Singh wrote:
> > >
> > > Hi Team,
> > >
> > > I have a use case where I have to give a user restriction on
> > > updating the image properties as a member.
> > >
> > > I have created a policy JSON file and gave the modify_image rule to
> > > the particular role, but still it is not working
> > >
> > > "modify_image": "role:user", This role is created in OpenStack.
> > >
> > > but still it is failing while updating properties with a
> > > particular user assigned to a role, as "access denied" and
> > > unauthorized access
> > >
> > > Regards
> > > Adivya Singh

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From hiwkby at yahoo.com Tue Jul 5 13:48:34 2022
From: hiwkby at yahoo.com (Hirotaka Wakabayashi)
Date: Tue, 5 Jul 2022 13:48:34 +0000 (UTC)
Subject: Issues With trove guest agent
References: <657723071.1550527.1657028914038.ref@mail.yahoo.com>
Message-ID: <657723071.1550527.1657028914038@mail.yahoo.com>

Hello, Hernando!

I think Trove calls nova to inject files here:
https://opendev.org/openstack/trove/src/branch/master/trove/taskmanager/models.py#L999

When cloud-init fails to inject Trove's configuration files, I think you should check the following parameters.

1. The `use_nova_server_config_drive` configuration in trove.conf. This value should be True if you use config_drive to inject files.
2. The `DIB_CLOUD_INIT_DATASOURCES` environment variable when building your own image. This value should contain `OpenStack` if you set `use_nova_server_config_drive` to False.

And I think you should also check the cloud-init log using journalctl.
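To make the two checks above concrete, the config-drive path corresponds to a one-line setting (a sketch, not a complete config):

```ini
# trove.conf (control plane) -- hypothetical snippet: have Nova attach a
# config drive so Trove's guest agent config files can be injected:
[DEFAULT]
use_nova_server_config_drive = True
```

If that option is left False, the guest image itself must be built with the metadata datasource enabled, e.g. exporting DIB_CLOUD_INIT_DATASOURCES="OpenStack" in the diskimage-builder environment before the build.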
Thanks,
Hirotaka

On Tuesday, June 28, 2022, 09:55:24 PM GMT+9, wrote:

Send openstack-discuss mailing list submissions to
	openstack-discuss at lists.openstack.org

To subscribe or unsubscribe via the World Wide Web, visit
	http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
or, via email, send a message with subject or body 'help' to
	openstack-discuss-request at lists.openstack.org

You can reach the person managing the list at
	openstack-discuss-owner at lists.openstack.org

When replying, please edit your Subject line so it is more specific
than "Re: Contents of openstack-discuss digest..."

Today's Topics:

   1. Re: Regarding Floating IP is existing Setup (Adivya Singh)
   2. Issues With trove guest agent (Hernando Ariza Perez)
   3. Openstack keystone LDAP integration | openstack user list
      --domain domain.com | Internal server error (HTTP 500)
      (Swogat Pradhan)

----------------------------------------------------------------------

Message: 1
Date: Tue, 28 Jun 2022 18:12:51 +0530
From: Adivya Singh
To: Slawek Kaplonski, OpenStack Discuss
Subject: Re: Regarding Floating IP is existing Setup
Content-Type: text/plain; charset="utf-8"

hi Slawek,

It happens within a given router namespace at a time.

Regards
Adivya Singh

On Fri, Jun 24, 2022 at 10:13 PM Adivya Singh wrote:
> Hi,
>
> Thanks for the advice and the link.
>
> What I saw when testing with tcpdump was that ARP was not working: it
> was not able to associate the floating IP with the MAC address of the
> interface in the VM. When I disassociate and re-associate the floating
> IP, it works fine.
>
> But the router namespace got changed.
>
> Regards
> Adivya Singh
>
> On Thu, Jun 23, 2022 at 1:22 PM Slawek Kaplonski wrote:
>
>> Hi,
>>
>> On Tuesday, 21 June 2022 at 13:55:51 CEST, Adivya Singh wrote:
>> > hi Eugen,
>> >
>> > The current setup is 3 controller nodes. The load is not much
on each >> > controller and the number of DHCP agent is always set to 2 as per the >> > standard in the neutron.conf, The L3 agent seems to be stables as other >> > router namespace works fine under it, Only few router? Namespace get >> > affected under the agent. >> >> Is it that problem happens for new floating IPs or for the FIPs which >> were working fine and then suddenly stopped working? If the latter, was >> there any action which triggered the issue to happen? >> Is there e.g. only one FIP broken in the router or maybe when it happens, >> then all FIPs which uses same router are broken? >> >> Can You also try to analyze with e.g. tcpdump where traffic is dropped >> exactly? You can check >> http://kaplonski.pl/blog/neutron-where-is-my-packet-2/ for some more >> detailed description how traffic should go from the external network to >> Your instance. >> >> > >> > Most of the template having issue , Have all instance having FLoating >> IP, a >> > Stack with a single floating IP have chance of issue very less >> > >> > Regards >> > Adivya Singh >> > >> > On Tue, Jun 21, 2022 at 1:18 PM Eugen Block wrote: >> > >> > > Hi, >> > > >> > > this sounds very familiar to me, I had to deal with something similar >> > > a couple of times in a heavily used cluster with 2 control nodes. What >> > > does your setup look like, is it a HA setup? I would start checking >> > > the DHCP and L3 agents. After increasing dhcp_agents_per_network to 2 >> > > in neutron.conf and restarting the services this didn't occur again >> > > (yet). This would impact floating IPs as well, sometimes I had to >> > > disable and enable the affected router(s). If you only have one >> > > control node a different approach is necessary. Do you see a high load >> > > on the control node? 
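The dhcp_agents_per_network change described above is a single neutron.conf setting on the control nodes (a sketch of the change, not a full config):

```ini
# neutron.conf -- schedule every network to two DHCP agents so that one
# failing agent does not leave the network without DHCP:
[DEFAULT]
dhcp_agents_per_network = 2
```

followed by a restart of the neutron services, as noted above.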
>> > >
>> > > Quoting Adivya Singh:
>> > >
>> > > > hi Team,
>> > > >
>> > > > We got an issue in the Xena release, where we set up the environment on the
>> > > > Ubuntu platform, but later we got some issues with floating IPs not being reachable.
>> > > >
>> > > > On a network node, not all router namespaces are impacted and only a few of
>> > > > them get affected, so we cannot say the network node has an issue.
>> > > >
>> > > > The L3 agent where the router is tied up worked just fine, as other
>> > > > routers work fine.
>> > > >
>> > > > And for the one having the floating IP issue, if I unassign and reassign it,
>> > > > it starts working most of the time.
>> > > >
>> > > > Any thoughts on this
>> > > >
>> > > > Regards
>> > > > Adivya Singh
>>
>> --
>> Slawek Kaplonski
>> Principal Software Engineer
>> Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 2
Date: Mon, 27 Jun 2022 13:54:59 -0500
From: Hernando Ariza Perez
To: openstack-discuss at lists.openstack.org
Subject: Issues With trove guest agent
Message-ID: 
Content-Type: text/plain; charset="utf-8"

Dear Trove community,

My name is Hernando. I'm writing this email because I have spent a lot of time trying to make the Trove Yoga version work. The service itself seems to be working, but the guest agent is not. I built an image following the build process in the documentation (https://docs.openstack.org/trove/ussuri/admin/building_guest_images.html), and I also used the image that you have at http://tarballs.openstack.org/trove/images/

So right now, when I create a datastore instance, the instance becomes active normally and I can reach it. I went inside it and saw that the guest agent service doesn't pull the datastore docker container image. I put the guest agent config file into the image manually, because the Trove task manager never put it there via cloud-init; it only put the guest info conf.
So this case is weird because I didn't get any log errors; that's why I'm here. Could you please provide me some guidance about this process? Maybe an example of the new Trove config files; I'd really appreciate the help.

Thanks for reading,

Regards
Hernando Clareth
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

------------------------------

Message: 3
Date: Tue, 28 Jun 2022 14:09:57 +0530
From: Swogat Pradhan
To: OpenStack Discuss
Subject: Openstack keystone LDAP integration | openstack user list
	--domain domain.com | Internal server error (HTTP 500)
Content-Type: text/plain; charset="utf-8"

Description of problem: I am trying to integrate an AD server in keystone and am facing an 'Internal server error'.

domain configuration:

[stack at hkg2director ~]$ cat workplace/keystone_domain_specific_ldap_backend.yaml
# This is an example template on how to configure keystone domain specific LDAP
# backends. This will configure a domain called tripleoldap with the attributes
# specified.
parameter_defaults:
  KeystoneLDAPDomainEnable: true
  KeystoneLDAPBackendConfigs:
    domain.com:
      url: ldap://172.25.161.211
      user: cn=Openstack,ou=Admins,dc=domain,dc=com
      password: password
      suffix: dc=domain,dc=com
      user_tree_dn: ou=APAC,dc=domain,dc=com
      user_filter: "(|(memberOf=cn=openstackadmin,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackeditor,ou=Groups,dc=domain,dc=com)(memberOf=cn=openstackviewer,ou=Groups,dc=domain,dc=com)"
      user_objectclass: person
      user_id_attribute: cn
      group_tree_dn: ou=Groups,dc=domain,dc=com
      group_objectclass: Groups
group_id_attribute: cn When i issue the command: $ openstack user list --domain domain.com Output: Internal server error (HTTP 500) Keystone_wsgi_error.log: [Tue Jun 28 06:46:49.112848 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] mod_wsgi (pid=45): Exception occurred processing WSGI script '/var/www/cgi-bin/keystone/keystone'. [Tue Jun 28 06:46:49.121797 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] Traceback (most recent call last): [Tue Jun 28 06:46:49.122202 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2464, in __call__ [Tue Jun 28 06:46:49.122218 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.wsgi_app(environ, start_response) [Tue Jun 28 06:46:49.122231 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/werkzeug/middleware/proxy_fix.py", line 187, in __call__ [Tue Jun 28 06:46:49.122238 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.app(environ, start_response) [Tue Jun 28 06:46:49.122248 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122254 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122264 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122270 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122284 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/base.py", line 124, in __call__ [Tue Jun 28 06:46:49.122294 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122304 2022] [wsgi:error] [pid 
45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122310 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122320 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122326 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122337 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 143, in __call__ [Tue Jun 28 06:46:49.122344 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return resp(environ, start_response) [Tue Jun 28 06:46:49.122354 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122364 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122374 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122382 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122392 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/base.py", line 124, in __call__ [Tue Jun 28 06:46:49.122400 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122413 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122421 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) 
[Tue Jun 28 06:46:49.122432 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122439 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122463 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122470 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122481 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122490 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122500 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/osprofiler/web.py", line 112, in __call__ [Tue Jun 28 06:46:49.122507 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return request.get_response(self.application) [Tue Jun 28 06:46:49.122517 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122525 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122535 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122542 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122552 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122562 2022] [wsgi:error] 
[pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122572 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122579 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122589 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/oslo_middleware/request_id.py", line 58, in __call__ [Tue Jun 28 06:46:49.122596 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self.application) [Tue Jun 28 06:46:49.122605 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122612 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122622 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122630 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122670 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/server/flask/request_processing/middleware/url_normalize.py", line 38, in __call__ [Tue Jun 28 06:46:49.122696 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.app(environ, start_response) [Tue Jun 28 06:46:49.122729 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/dec.py", line 129, in __call__ [Tue Jun 28 06:46:49.122743 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = self.call_func(req, *args, **kw) [Tue Jun 28 06:46:49.122753 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File 
"/usr/lib/python3.6/site-packages/webob/dec.py", line 193, in call_func [Tue Jun 28 06:46:49.122761 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.func(req, *args, **kwargs) [Tue Jun 28 06:46:49.122772 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystonemiddleware/auth_token/__init__.py", line 341, in __call__ [Tue Jun 28 06:46:49.122786 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = req.get_response(self._app) [Tue Jun 28 06:46:49.122800 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1314, in send [Tue Jun 28 06:46:49.122807 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] application, catch_exc_info=False) [Tue Jun 28 06:46:49.122817 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/webob/request.py", line 1278, in call_application [Tue Jun 28 06:46:49.122824 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] app_iter = application(self.environ, start_response) [Tue Jun 28 06:46:49.122835 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/werkzeug/middleware/dispatcher.py", line 78, in __call__ [Tue Jun 28 06:46:49.122845 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return app(environ, start_response) [Tue Jun 28 06:46:49.122856 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2450, in wsgi_app [Tue Jun 28 06:46:49.122863 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = self.handle_exception(e) [Tue Jun 28 06:46:49.122874 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122883 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue 
Jun 28 06:46:49.122893 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122900 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.122910 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.122921 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.122932 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] [Previous line repeated 27 more times] [Tue Jun 28 06:46:49.122943 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1867, in handle_exception [Tue Jun 28 06:46:49.122952 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] reraise(exc_type, exc_value, tb) [Tue Jun 28 06:46:49.122964 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 38, in reraise [Tue Jun 28 06:46:49.122971 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] raise value.with_traceback(tb) [Tue Jun 28 06:46:49.122981 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 2447, in wsgi_app [Tue Jun 28 06:46:49.122988 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] response = self.full_dispatch_request() [Tue Jun 28 06:46:49.122998 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1952, in full_dispatch_request [Tue Jun 28 06:46:49.123007 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] rv = self.handle_user_exception(e) [Tue Jun 28 06:46:49.123018 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File 
"/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123025 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123035 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123044 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123059 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 272, in error_router [Tue Jun 28 06:46:49.123066 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return original_handler(e) [Tue Jun 28 06:46:49.123077 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] [Previous line repeated 27 more times] [Tue Jun 28 06:46:49.123089 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1821, in handle_user_exception [Tue Jun 28 06:46:49.123097 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] reraise(exc_type, exc_value, tb) [Tue Jun 28 06:46:49.123107 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/_compat.py", line 38, in reraise [Tue Jun 28 06:46:49.123118 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] raise value.with_traceback(tb) [Tue Jun 28 06:46:49.123129 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1950, in full_dispatch_request [Tue Jun 28 06:46:49.123137 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] rv = self.dispatch_request() [Tue Jun 28 06:46:49.123147 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/app.py", line 1936, in dispatch_request [Tue Jun 28 06:46:49.123154 2022] [wsgi:error] [pid 
45] [remote 172.25.201.201:58080] return self.view_functions[rule.endpoint](**req.view_args) [Tue Jun 28 06:46:49.123165 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 468, in wrapper [Tue Jun 28 06:46:49.123175 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = resource(*args, **kwargs) [Tue Jun 28 06:46:49.123186 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask/views.py", line 89, in view [Tue Jun 28 06:46:49.123193 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.dispatch_request(*args, **kwargs) [Tue Jun 28 06:46:49.123204 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/flask_restful/__init__.py", line 583, in dispatch_request [Tue Jun 28 06:46:49.123211 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] resp = meth(*args, **kwargs) [Tue Jun 28 06:46:49.123222 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/api/users.py", line 183, in get [Tue Jun 28 06:46:49.123232 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self._list_users() [Tue Jun 28 06:46:49.123245 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/api/users.py", line 215, in _list_users [Tue Jun 28 06:46:49.123252 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] domain_scope=domain, hints=hints) [Tue Jun 28 06:46:49.123263 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/common/manager.py", line 115, in wrapped [Tue Jun 28 06:46:49.123273 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] __ret_val = __f(*args, **kwargs) [Tue Jun 28 06:46:49.123282 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 
414, in wrapper [Tue Jun 28 06:46:49.123289 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, *args, **kwargs) [Tue Jun 28 06:46:49.123299 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 424, in wrapper [Tue Jun 28 06:46:49.123308 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, *args, **kwargs) [Tue Jun 28 06:46:49.123327 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1108, in list_users [Tue Jun 28 06:46:49.123337 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] ref_list = self._handle_shadow_and_local_users(driver, hints) [Tue Jun 28 06:46:49.123351 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/core.py", line 1091, in _handle_shadow_and_local_users [Tue Jun 28 06:46:49.123358 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return driver.list_users(hints) + fed_res [Tue Jun 28 06:46:49.123368 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 85, in list_users [Tue Jun 28 06:46:49.123376 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.user.get_all_filtered(hints) [Tue Jun 28 06:46:49.123387 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 328, in get_all_filtered [Tue Jun 28 06:46:49.123394 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] for user in self.get_all(query, hints)] [Tue Jun 28 06:46:49.123406 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/core.py", line 320, in get_all [Tue Jun 28 06:46:49.123413 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] 
hints=hints) [Tue Jun 28 06:46:49.123425 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1949, in get_all [Tue Jun 28 06:46:49.123432 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return super(EnabledEmuMixIn, self).get_all(ldap_filter, hints) [Tue Jun 28 06:46:49.123443 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1637, in get_all [Tue Jun 28 06:46:49.123453 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] for x in self._ldap_get_all(hints, ldap_filter)] [Tue Jun 28 06:46:49.123464 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/common/driver_hints.py", line 42, in wrapper [Tue Jun 28 06:46:49.123472 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return f(self, hints, *args, **kwargs) [Tue Jun 28 06:46:49.123482 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 1590, in _ldap_get_all [Tue Jun 28 06:46:49.123489 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrs) [Tue Jun 28 06:46:49.123500 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 986, in search_s [Tue Jun 28 06:46:49.123507 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrlist, attrsonly) [Tue Jun 28 06:46:49.123517 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 679, in wrapper [Tue Jun 28 06:46:49.123524 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return func(self, conn, *args, **kwargs) [Tue Jun 28 06:46:49.123535 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File 
"/usr/lib/python3.6/site-packages/keystone/identity/backends/ldap/common.py", line 814, in search_s [Tue Jun 28 06:46:49.123542 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] attrsonly) [Tue Jun 28 06:46:49.123552 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 870, in search_s [Tue Jun 28 06:46:49.123559 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self.search_ext_s(base,scope,filterstr,attrlist,attrsonly,None,None,timeout=self.timeout) [Tue Jun 28 06:46:49.123578 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 1286, in search_ext_s [Tue Jun 28 06:46:49.123586 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return self._apply_method_s(SimpleLDAPObject.search_ext_s,*args,**kwargs) [Tue Jun 28 06:46:49.123596 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 1224, in _apply_method_s [Tue Jun 28 06:46:49.123603 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] return func(self,*args,**kwargs) [Tue Jun 28 06:46:49.123613 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 863, in search_ext_s [Tue Jun 28 06:46:49.123621 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] msgid = self.search_ext(base,scope,filterstr,attrlist,attrsonly,serverctrls,clientctrls,timeout,sizelimit) [Tue Jun 28 06:46:49.123631 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 859, in search_ext [Tue Jun 28 06:46:49.123650 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] timeout,sizelimit, [Tue Jun 28 06:46:49.123664 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 340, in _ldap_call 
[Tue Jun 28 06:46:49.123672 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] reraise(exc_type, exc_value, exc_traceback) [Tue Jun 28 06:46:49.123690 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/compat.py", line 46, in reraise [Tue Jun 28 06:46:49.123701 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] raise exc_value [Tue Jun 28 06:46:49.123713 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] File "/usr/lib64/python3.6/site-packages/ldap/ldapobject.py", line 324, in _ldap_call [Tue Jun 28 06:46:49.123720 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] result = func(*args,**kwargs) [Tue Jun 28 06:46:49.123754 2022] [wsgi:error] [pid 45] [remote 172.25.201.201:58080] ldap.FILTER_ERROR: {'result': -7, 'desc': 'Bad search filter', 'ctrls': []} Version-Release number of selected component (if applicable): How reproducible: Configure domain in keystone. Steps to Reproduce: 1. setup 3 groups in ldap 2. create a user 3. configure ldap in keystone Actual results: When i issue the command: $ openstack user list --domain domain.com Output: Internal server error (HTTP 500) Expected results: When i issue the command: $ openstack user list --domain domain.com Output: should display users in the groups -------------- next part -------------- An HTML attachment was scrubbed... 
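A note on the failure itself: ldap.FILTER_ERROR (-7, 'Bad search filter') from python-ldap means the filter string handed to the server was syntactically invalid; a common cause is a directory value containing unescaped special characters being interpolated into the filter. A minimal, hypothetical sketch of RFC 4515 escaping (illustrative only — this is not keystone's actual code, and all names here are invented):

```python
# Minimal RFC 4515 filter-text escaping: the kind of sanitisation that
# prevents ldap.FILTER_ERROR ("Bad search filter") when values containing
# (, ), *, \ or NUL end up inside a search filter string.
_ESCAPES = {'\\': r'\5c', '(': r'\28', ')': r'\29', '*': r'\2a', '\0': r'\00'}

def escape_filter_value(value: str) -> str:
    # Each character is looked at exactly once, so backslashes are never
    # double-escaped.
    return ''.join(_ESCAPES.get(ch, ch) for ch in value)

def user_filter(cn: str) -> str:
    # Hypothetical helper: build a user-search filter from an untrusted CN.
    return '(&(objectClass=person)(cn=%s))' % escape_filter_value(cn)

print(user_filter('smith(admin)'))
```

With an entry such as cn=smith(admin), the unescaped filter would be rejected by the server exactly as in the traceback above, while the escaped form is accepted.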
URL: 

------------------------------

Subject: Digest Footer

_______________________________________________
openstack-discuss mailing list
openstack-discuss at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss

------------------------------

End of openstack-discuss Digest, Vol 44, Issue 167
**************************************************

From derekokeeffe85 at yahoo.ie  Tue Jul 5 14:38:16 2022
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Tue, 5 Jul 2022 14:38:16 +0000 (UTC)
Subject: Listing instances that use a security group
References: <987935218.3775030.1657031896834.ref@mail.yahoo.com>
Message-ID: <987935218.3775030.1657031896834@mail.yahoo.com>

Hi all,

Is there a CLI command to list all the VMs that have a specific security group attached? I need to delete some groups as a tidy-up, but I only get a warning that each group is in use by an instance (of which there are 200), so I'd rather not go through them one by one in Horizon or show each one on the CLI separately. An SQL query would be acceptable also, but in the nova db, select * from security_groups; select * from instances; & select * from security_group_instance_association; don't give me results that I can refine to search deeper.

Thanks in advance for any info.

Regards,
Derek

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com  Tue Jul 5 15:37:35 2022
From: smooney at redhat.com (Sean Mooney)
Date: Tue, 05 Jul 2022 16:37:35 +0100
Subject: Listing instances that use a security group
In-Reply-To: <987935218.3775030.1657031896834@mail.yahoo.com>
References: <987935218.3775030.1657031896834.ref@mail.yahoo.com> <987935218.3775030.1657031896834@mail.yahoo.com>
Message-ID: 

So security groups are a Neutron concept with some legacy support in Nova.
The way I would approach this is to list all ports, via the Neutron API/CLI, that have the security group associated with them, then extract the device_id from each port, which is the Nova server UUID.

Looking at https://docs.openstack.org/api-ref/network/v2/index.html?expanded=list-ports-detail#list-ports

security group does not appear to be one of the request parameters of the port-list API; however, security_groups is supported by OSC, so the API doc may simply be out of date.

So you should be able to do this:

openstack port list --security-group <security-group>

You should technically be able to use -c device_id to get the list of VM UUIDs from that set of ports, but I am not sure the OpenStack client will correctly include the device_id field in the API request in that case.

"""openstack port list --security-group <security-group> -c device_id -f value | sort | uniq"""

should print the list of unique server UUIDs using that security group, provided the OpenStack client correctly asks for the device_id field to be returned as part of the request (it is part of the port-list API response by default).

So you might need to use --debug to get the API request URL and then use curl to call the API directly if the client does not support this properly.

On Tue, 2022-07-05 at 14:38 +0000, Derek O keeffe wrote:
> Hi all,
> Is there a CLI command to list all the VMs that have a specific security group attached? I need to delete some groups as a tidy-up, but I only get a warning that each group is in use by an instance (of which there are 200), so I'd rather not go through them one by one in Horizon or show each one on the CLI separately. An SQL query would be acceptable also, but in the nova db, select * from security_groups; select * from instances; & select * from security_group_instance_association; don't give me the required results that I can refine to search deeper.
> Thanks in advance for any info.
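If the client does drop device_id from the request, an alternative to calling the API with curl is to post-process the client's JSON output. A sketch under assumptions: the sample records below are invented, and the exact device-id column name emitted by `openstack port list -f json` may differ between OSC versions, so check it on your deployment first:

```python
import json

# Hypothetical JSON from: openstack port list --security-group <sg> -f json
# (invented sample data; the "Device ID" column name is an assumption)
ports_json = """
[
  {"ID": "port-1", "Device ID": "aaaa-1111", "Status": "ACTIVE"},
  {"ID": "port-2", "Device ID": "aaaa-1111", "Status": "ACTIVE"},
  {"ID": "port-3", "Device ID": "bbbb-2222", "Status": "ACTIVE"},
  {"ID": "port-4", "Device ID": "", "Status": "DOWN"}
]
"""

# Equivalent of "| sort | uniq": unique, non-empty device IDs are the
# UUIDs of the servers still using the security group (unbound ports
# have an empty device ID and are skipped).
servers = sorted({p["Device ID"] for p in json.loads(ports_json) if p["Device ID"]})
print(servers)
```

Two ports attached to the same server collapse into one entry, which is exactly the de-duplication the sort/uniq pipeline performs.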
> Regards,
> Derek

From rdhasman at redhat.com  Tue Jul 5 16:06:14 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Tue, 5 Jul 2022 21:36:14 +0530
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To:
References:
Message-ID:

Hi Arnaud,

We discussed this in last week's cinder meeting and unfortunately we haven't tested it thoroughly, so we don't have any performance numbers to share. What we can tell you is why RBD requires a higher number of native threads: RBD calls C code, which could block green threads and hence the main operation, so all of the RBD calls that execute operations are wrapped to use native threads. Depending on the operations we want to perform concurrently, we can set the value of backend_native_threads_pool_size for RBD accordingly.

Thanks and regards
Rajat Dhasmana

On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:

> Hey all,
>
> Is there any recommendation on the number of threads to use when using
> RBD backend (option backend_native_threads_pool_size)?
> The doc is saying that 20 is the default but it should be increased,
> specially for the RBD driver, but up to which value?
>
> Is there anyone tuning this parameter in their openstack deployments?
>
> If yes, maybe we can add some recommendations on openstack large-scale
> doc about it?
>
> Cheers,
>
> Arnaud.
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From derekokeeffe85 at yahoo.ie  Tue Jul 5 16:40:39 2022
From: derekokeeffe85 at yahoo.ie (Derek O keeffe)
Date: Tue, 5 Jul 2022 17:40:39 +0100
Subject: Listing instances that use a security group
In-Reply-To:
References:
Message-ID:

Hi Sean,

Thanks for that. I will try tomorrow and let you know how it went.

Regards,
Derek

> On 5 Jul 2022, at 16:42, Sean Mooney wrote:
> 
> So security groups are a Neutron concept with some legacy support in Nova.
> 
> The way I would approach this is to list all ports, via the Neutron API/CLI, that have the security group associated with them,
> then extract the device_id from each port, which is the Nova server UUID.
> 
> Looking at https://docs.openstack.org/api-ref/network/v2/index.html?expanded=list-ports-detail#list-ports
> 
> security group does not appear to be one of the request parameters of the port-list API;
> however, security_groups is supported by OSC, so the API doc may simply be out of date.
> 
> So you should be able to do this:
> 
> openstack port list --security-group <security-group>
> 
> You should technically be able to use -c device_id to get the list of VM UUIDs from that set of ports, but I am not sure the
> OpenStack client will correctly include the device_id field in the API request in that case.
> 
> """openstack port list --security-group <security-group> -c device_id -f value | sort | uniq"""
> 
> should print the list of unique server UUIDs using that security group, provided the OpenStack client correctly asks for the device_id field
> to be returned as part of the request (it is part of the port-list API response by default).
> 
> So you might need to use --debug to get the API request URL and then use curl to call the API directly if the client does not support this properly.
> 
> 
>> On Tue, 2022-07-05 at 14:38 +0000, Derek O keeffe wrote:
>> Hi all,
>> Is there a CLI command to list all the VMs that have a specific security group attached? I need to delete some groups as a tidy-up, but I only get a warning that each group is in use by an instance (of which there are 200), so I'd rather not go through them one by one in Horizon or show each one on the CLI separately. An SQL query would be acceptable also, but in the nova db, select * from security_groups; select * from instances; & select * from security_group_instance_association; don't give me the required results that I can refine to search deeper.
>> Thanks in advance for any info.
>> Regards,
>> Derek

From arnaud.morin at gmail.com  Tue Jul 5 17:19:28 2022
From: arnaud.morin at gmail.com (Arnaud)
Date: Tue, 05 Jul 2022 19:19:28 +0200
Subject: Re: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To:
References:
Message-ID: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>

Hey,

Thanks for your answer! OK, I understand the why ;) also because we hit some issues on our deployment. So we increased the number of threads to 100, but we also enabled deferred deletion (keeping in mind the quota usage downsides that it brings). We also disabled the periodic task to compute usage and use the less precise way from the db.

First question here: do you think we are going down the right path?

One thing we are not yet sure of is how to calculate the right number of threads to use. Should we do basic math with the number of deletions per minute? Or should we take the number of volumes in the backend into account? Something in the middle?

Thanks!

Arnaud

On 5 July 2022 18:06:14 GMT+02:00, Rajat Dhasmana wrote:
>Hi Arnaud,
>
>We discussed this in last week's cinder meeting and unfortunately we
>haven't tested it thoroughly so we don't have any performance numbers to
>share.
>What we can tell is the reason why RBD requires a higher number of native
>threads. RBD calls C code which could potentially block green threads hence
>blocking the main operation therefore all of the calls in RBD to execute
>operations are wrapped to use native threads so depending on the operations
>we want to
>perform concurrently, we can set the value of
>backend_native_threads_pool_size for RBD.
>
>Thanks and regards
>Rajat Dhasmana
>
>On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:
>
>> Hey all,
>>
>> Is there any recommendation on the number of threads to use when using
>> RBD backend (option backend_native_threads_pool_size)?
>> The doc is saying that 20 is the default but it should be increased,
>> specially for the RBD driver, but up to which value?
>>
>> Is there anyone tuning this parameter in their openstack deployments?
>>
>> If yes, maybe we can add some recommendations on openstack large-scale
>> doc about it?
>>
>> Cheers,
>>
>> Arnaud.
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rodrigo.barbieri2010 at gmail.com  Tue Jul 5 18:37:54 2022
From: rodrigo.barbieri2010 at gmail.com (Rodrigo Barbieri)
Date: Tue, 5 Jul 2022 15:37:54 -0300
Subject: [manila] Stepping down from manila core
Message-ID:

Hello fellow zorillas,

It has been a long time since I started hoping, every day, that I'd be able to dedicate more time to manila core activities; so far that hasn't happened, and I don't see it happening in the foreseeable future. I had been following the meeting notes weekly until ~2 months ago, but I recently ended up dropping those as well. Therefore I am stepping down from the manila core role.

I would like to thank everyone that I worked closely with from 2014 to 2019 on this project. I hold this project and all of you dear to my heart, and I am extremely glad and grateful to have worked with you and met you at summits/PTGs; the life memories around Manila are among the best I have from that period.

If someday circumstances change, the manila project and its community will be ones I will be very happy to go back to working closely with again.

Kind regards,

-- 
Rodrigo Barbieri
MSc Computer Scientist

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kennelson11 at gmail.com  Tue Jul 5 20:44:40 2022
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 5 Jul 2022 15:44:40 -0500
Subject: October 2022 PTG Dates & Registration
Message-ID:

Hello Everyone!

As you may have seen, we announced the next PTG[1], which will take place October 17-20, 2022 in Columbus, Ohio! Registration is now open[2].
We have also secured a limited, discounted hotel block for PTG attendees [3]. If your organization is interested in sponsoring the event, information on packages and pricing is now available on the PTG website [1], or feel free to reach out directly to ptg at openinfra.dev.

Can't wait to SEE you all there!

-Kendall Nelson (diablo_rojo)

[1] https://openinfra.dev/ptg/
[2] https://openinfra-ptg.eventbrite.com/
[3] https://www.hyatt.com/en-US/group-booking/CMHRC/G-L0RT

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From geguileo at redhat.com  Wed Jul 6 08:20:14 2022
From: geguileo at redhat.com (Gorka Eguileor)
Date: Wed, 6 Jul 2022 10:20:14 +0200
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com>
Message-ID: <20220706082014.qu34mzuhmb3uma2j@localhost>

On 05/07, Arnaud wrote:
> Hey,
> 
> Thanks for your answer!
> OK, I understand the why ;) also because we hit some issues on our deployment.
> So we increased the number of threads to 100, but we also enabled deferred deletion (keeping in mind the quota usage downsides that it brings).

Hi Arnaud,

Deferred deletion should reduce the number of required native threads since delete calls will complete faster.

> We also disabled the periodic task to compute usage and use the less precise way from the db.

Are you referring to the 'rbd_exclusive_cinder_pool' configuration option? Because that should already have the optimum default (True value).

> First question here: do you think we are going the right path?
> 
> One thing we are not yet sure of is how to calculate the right number of threads to use.
> Should we do basic math with the number of deletions per minute? Or should we take the number of volumes in the backend into account? Something in the middle?
Native threads on the RBD driver are not only used for deletion; they are used for *all* RBD calls.

We haven't defined any particular method to calculate the optimum number of threads on a system, but I can think of 2 possible avenues to explore:

- Performance testing: Run a set of tests with a high number of concurrent requests and different operations and see how Cinder performs. I wouldn't bother with individual attach and detach to VM operations, because those are noops on the Cinder side; creating volumes from images, with either different images or cache disabled, would be better.

- Reporting native thread usage: To really know whether the number of native threads is sufficient, you could modify the Cinder volume manager (and possibly also eventlet.tpool) to gather statistics on the number of used/free native threads and the number of queued requests that are waiting for a native thread to pick them up.

Cheers,
Gorka.

> 
> Thanks!
> 
> Arnaud
> 
> 
> 
> On 5 July 2022 18:06:14 GMT+02:00, Rajat Dhasmana wrote:
> >Hi Arnaud,
> >
> >We discussed this in last week's cinder meeting and unfortunately we
> >haven't tested it thoroughly so we don't have any performance numbers to
> >share.
> >What we can tell is the reason why RBD requires a higher number of native
> >threads. RBD calls C code which could potentially block green threads hence
> >blocking the main operation therefore all of the calls in RBD to execute
> >operations are wrapped to use native threads so depending on the operations
> >we want to
> >perform concurrently, we can set the value of
> >backend_native_threads_pool_size for RBD.
> >
> >Thanks and regards
> >Rajat Dhasmana
> >
> >On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:
> >
> >> Hey all,
> >>
> >> Is there any recommendation on the number of threads to use when using
> >> RBD backend (option backend_native_threads_pool_size)?
> >> The doc is saying that 20 is the default but it should be increased,
> >> specially for the RBD driver, but up to which value?
> >>
> >> Is there anyone tuning this parameter in their openstack deployments?
> >>
> >> If yes, maybe we can add some recommendations on openstack large-scale
> >> doc about it?
> >>
> >> Cheers,
> >>
> >> Arnaud.
> >>

From rdhasman at redhat.com  Wed Jul 6 08:22:28 2022
From: rdhasman at redhat.com (Rajat Dhasmana)
Date: Wed, 6 Jul 2022 13:52:28 +0530
Subject: QoS Cinder, Zed Release
In-Reply-To:
References:
Message-ID:

Hi Sergey,

As Gorka said, we generally don't require a spec for driver features, but we encourage registering a blueprint on Launchpad[1] so as to keep track of all the features. Having said that, the spec would act as a good piece of documentation, so we can consider merging it. It's a good point for discussion, which I will add to the cinder meeting agenda today[2], but even if it doesn't merge this cycle (since we're already in the spec freeze exception phase), we can do it next cycle without blocking reviews of the main feature.

[1] https://blueprints.launchpad.net/cinder
[2] https://etherpad.opendev.org/p/cinder-zed-meetings

Thanks and regards
Rajat Dhasmana

On Tue, Jul 5, 2022 at 2:19 PM Sergey Drozdov wrote:

> To whom it may concern,
>
> I am helping a colleague of mine with the following pieces of work: 820027
> (https://review.opendev.org/c/openstack/cinder/+/820027), 820030 (
> https://review.opendev.org/c/openstack/cinder-specs/+/820030). I was
> wondering whether it is not too late to include the aforementioned within
> the Zed release? Is there anyone who can advise on this matter?
>
> Best Regards,
> Sergey
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From arnaud.morin at gmail.com  Wed Jul 6 09:11:41 2022
From: arnaud.morin at gmail.com (Arnaud Morin)
Date: Wed, 6 Jul 2022 09:11:41 +0000
Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend
In-Reply-To: <20220706082014.qu34mzuhmb3uma2j@localhost>
References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost>
Message-ID:

Yes, I was talking about rbd_exclusive_cinder_pool. It was false by default on our side because we are still running the cinder Stein release :(

Thank you for the answer about the calculation methods! Do you mind if I copy-paste your answer into the large-scale documentation ([1])?

Cheers,
Arnaud.

[1] https://docs.openstack.org/large-scale/

On 06.07.22 - 10:20, Gorka Eguileor wrote:
> On 05/07, Arnaud wrote:
> > Hey,
> > 
> > Thanks for your answer!
> > OK, I understand the why ;) also because we hit some issues on our deployment.
> > So we increased the number of threads to 100, but we also enabled deferred deletion (keeping in mind the quota usage downsides that it brings).
> 
> Hi Arnaud,
> 
> Deferred deletion should reduce the number of required native threads
> since delete calls will complete faster.
> 
> > We also disabled the periodic task to compute usage and use the less precise way from the db.
> 
> Are you referring to the 'rbd_exclusive_cinder_pool' configuration
> option? Because that should already have the optimum default (True
> value).
> 
> > First question here: do you think we are going the right path?
> > 
> > One thing we are not yet sure of is how to calculate the right number of threads to use.
> > Should we do basic math with the number of deletions per minute? Or should we take the number of volumes in the backend into account? Something in the middle?
> 
> Native threads on the RBD driver are not only used for deletion, they
> are used for *all* RBD calls.
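The native-thread offload Gorka and Rajat describe, and the usage statistics Gorka suggests gathering, can both be illustrated with nothing but the standard library. A rough analogy to eventlet.tpool using concurrent.futures (this is an illustration of the pattern, not cinder's code; all names and numbers here are invented):

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Stand-in for a blocking C-level call (e.g. an librbd operation) that a
# green thread could not preempt; it must run on a real OS thread so the
# event loop stays responsive. backend_native_threads_pool_size plays the
# role of max_workers here.
POOL_SIZE = 20                      # the option's documented default

pool = ThreadPoolExecutor(max_workers=POOL_SIZE)

def blocking_rbd_call(volume: str) -> str:
    time.sleep(0.01)                # pretend this is C code holding the thread
    return f"deleted {volume}"

# Submit more work than there are native threads; the surplus queues up,
# which is exactly why a busy RBD backend may need a larger pool.
futures = [pool.submit(blocking_rbd_call, f"vol-{i}") for i in range(50)]

# Crude version of the suggested statistic: queued work is the count of
# submitted-but-unstarted futures at the moment of sampling. A real patch
# would sample this periodically inside the volume manager.
queued = sum(1 for f in futures if not f.running() and not f.done())
results = [f.result() for f in futures]
print(len(results), "calls completed; queue sample right after submit:", queued)
```

If the sampled queue depth stays persistently above zero under a realistic workload, that is a direct signal the pool (i.e. the config option) is too small for the deployment.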
> 
> We haven't defined any particular method to calculate the optimum number
> of threads on a system, but I can think of 2 possible avenues to
> explore:
> 
> - Performance testing: Run a set of tests with a high number of
>   concurrent requests and different operations and see how Cinder
>   performs. I wouldn't bother with individual attach and detach to VM
>   operations, because those are noops on the Cinder side; creating
>   volumes from images, with either different images or cache disabled,
>   would be better.
> 
> - Reporting native thread usage: To really know whether the number of
>   native threads is sufficient, you could modify the Cinder volume
>   manager (and possibly also eventlet.tpool) to gather statistics on the
>   number of used/free native threads and the number of queued requests
>   that are waiting for a native thread to pick them up.
> 
> Cheers,
> Gorka.
> 
> > 
> > Thanks!
> > 
> > Arnaud
> > 
> > 
> > 
> > On 5 July 2022 18:06:14 GMT+02:00, Rajat Dhasmana wrote:
> > >Hi Arnaud,
> > >
> > >We discussed this in last week's cinder meeting and unfortunately we
> > >haven't tested it thoroughly so we don't have any performance numbers to
> > >share.
> > >What we can tell is the reason why RBD requires a higher number of native
> > >threads. RBD calls C code which could potentially block green threads hence
> > >blocking the main operation therefore all of the calls in RBD to execute
> > >operations are wrapped to use native threads so depending on the operations
> > >we want to
> > >perform concurrently, we can set the value of
> > >backend_native_threads_pool_size for RBD.
> > >
> > >Thanks and regards
> > >Rajat Dhasmana
> > >
> > >On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote:
> > >
> > >> Hey all,
> > >>
> > >> Is there any recommendation on the number of threads to use when using
> > >> RBD backend (option backend_native_threads_pool_size)?
> > >> The doc is saying that 20 is the default but it should be increased, > > >> especially for the RBD driver, but up to which value? > > >> > > >> Is there anyone tuning this parameter in their openstack deployments? > > >> > > >> If yes, maybe we can add some recommendations on openstack large-scale > > >> doc about it? > > >> > > >> Cheers, > > >> > > >> Arnaud. > > >> > > >> > From amonster369 at gmail.com Wed Jul 6 10:05:55 2022 From: amonster369 at gmail.com (A Monster) Date: Wed, 6 Jul 2022 11:05:55 +0100 Subject: Problem while launching an instance directly from an image "Volume did not finish being created even after we waited 203 seconds or 61 attempts" Message-ID: Hello, I am talking about launching an instance from a persistent volume (cinder). I've checked the cinder logs but found no errors; the only error I could find is in the nova compute log, which says: Build of instance 4cf01ba2-05b3-44e9-a685-8875d8c96b4e aborted: Volume 01739e82-9e66-41f7-be74-dfbbdcd6746e did not finish being created even after we waited 203 seconds or 61 attempts. And its status is creating. I am using openstack xena deployed with kolla ansible on centos 8 stream. -------------- next part -------------- An HTML attachment was scrubbed... URL: From senrique at redhat.com Wed Jul 6 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 6 Jul 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 07-06-2022 Message-ID: This is a bug report from 06-29-2022 to 07-06-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Low - https://bugs.launchpad.net/cinder/+bug/1980268 "creating a bootable volume fails async when vol_size < virtual_size of image." Fix proposed to master. Cheers, Sofia -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed...
URL: From openstack at garloff.de Wed Jul 6 11:15:38 2022 From: openstack at garloff.de (Kurt Garloff) Date: Wed, 6 Jul 2022 13:15:38 +0200 Subject: openstackclient: booting from image-created volume w/ delete_on_termination Message-ID: Hi openstack CLI wizards, Having flavors without disks, I want volumes to be created from images on the fly that I can boot from. But I don't want to do bookkeeping for these volumes; they should disappear once the server disappears. On the command line with nova, I can do this: nova boot --nic net-name=$MYNET --key-name $MYKEY --flavor $MYFLAVOR \ --block-device "id=$MYIMGIFD,source=image,dest=volume,size=$MYSIZE,shutdown=remove,bootindex=0" $MYNAME I did not find a way to do this with openstack server create. Is there one? Here's what I tried: --image $MYIMGID --boot-from-volume $MYSIZE works, except that there is no way to specify delete_on_termination --block-device-mapping "sda=$MYIMGID:image:$MYSIZE:true" does complain that no --image or --volume have been passed. The option does not appear to be used for bootable volumes. I have seen a BP (merged with Ussuri) that would allow a PUT call to update delete_on_termination, but I don't see it usable with the openstackclient ... https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/destroy-instance-with-datavolume.html Anything obvious I missed? Thanks, -- Kurt Garloff Cologne, Germany -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jul 6 11:56:58 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 6 Jul 2022 13:56:58 +0200 Subject: openstackclient: booting from image-created volume w/ delete_on_termination In-Reply-To: References: Message-ID: Hey! delete_on_termination is a flag that is provided to nova during volume attachment basically. 
So in case of attaching volume to existing server, you can provide that flag to attachment: openstack server add volume --enable-delete-on-termination --os-compute-api-version 2.79 $UUID It's more complex when we're talking about server creation though. What openstackclient allows you to do, is to provide --block-device flag, where you can provide extra specs to the mapping. You can check doc on how to use it here https://docs.openstack.org/python-openstackclient/yoga/cli/command-objects/server.html#cmdoption-openstack-server-create-block-device For API call it would be block_device_mapping_v2 option provided with request which supports roughly the same format: https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server On Wed, 6 Jul 2022 at 13:21, Kurt Garloff wrote: > > Hi openstack CLI wizards, > > Having flavors without disks, I want volumes to be created from images on the fly that I can boot from. > But I don't want to do bookkeeping for these volumes; they should disappear once the server disappears. > > On the command line with nova, I can do this: > nova boot --nic net-name=$MYNET --key-name $MYKEY --flavor $MYFLAVOR \ > --block-device "id=$MYIMGIFD,source=image,dest=volume,size=$MYSIZE,shutdown=remove,bootindex=0" $MYNAME > > I did not find a way to do this with openstack server create. > Is there one? > > Here's what I tried: > --image $MYIMGID --boot-from-volume $MYSIZE > works, except that there is no way to specify delete_on_termination > > --block-device-mapping "sda=$MYIMGID:image:$MYSIZE:true" > does complain that no --image or --volume have been passed. > The option does not appear to be used for bootable volumes. > > I have seen a BP (merged with Ussuri) that would allow a PUT call to update delete_on_termination, but I don't see it usable with the openstackclient ... > https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/destroy-instance-with-datavolume.html > > Anything obvious I missed?
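As an illustration of the block_device_mapping_v2 format discussed in this thread (a sketch only, not from the original messages; the UUIDs, size, and server fields are placeholders, and the field names follow the compute API reference):

```python
# Sketch: a "boot from image-backed volume, delete the volume with the
# server" request body, mirroring the nova boot example in this thread.
# All values (image UUID, size, name, flavor, network) are placeholders.
bdm = {
    "uuid": "MYIMGID",               # image to create the volume from
    "source_type": "image",
    "destination_type": "volume",
    "volume_size": 20,               # like $MYSIZE above
    "boot_index": 0,
    "delete_on_termination": True,   # the 'shutdown=remove' equivalent
}

server_request = {
    "server": {
        "name": "MYNAME",
        "flavorRef": "MYFLAVOR",
        "networks": [{"uuid": "MYNET"}],
        "block_device_mapping_v2": [bdm],
    }
}
```

The same key/value pairs can be passed to `openstack server create --block-device` in the newer clients.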
> > Thanks, > > -- > Kurt Garloff > Cologne, Germany From rosmaita.fossdev at gmail.com Wed Jul 6 12:51:33 2022 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 6 Jul 2022 08:51:33 -0400 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: References: Message-ID: <494ef22b-22e4-be64-4827-e7c294b9e34c@gmail.com> On 6/30/22 9:39 AM, Herve Beraud wrote: > Hello everybody, > > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member > of the oslo core team. > > During the last months Takashi has been a significant contributor to the > oslo projects. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. I'm not an oslo core, but tkajinam is an enthusiastic contributor who pays close attention to what's going on in openstack as a whole, so +1 from me. > > Thanks. > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > From geguileo at redhat.com Wed Jul 6 12:52:43 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Wed, 6 Jul 2022 14:52:43 +0200 Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend In-Reply-To: References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost> Message-ID: <20220706125243.vtzv5ynkn7syd7xt@localhost> On 06/07, Arnaud Morin wrote: > Yes, I was talking about rbd_exclusive_cinder_pool. > It was false by default on our side because we are still running cinder > stein release :( > > Thank you for the answer about the calculation methods! > Do you mind if I copy paste your answer to the large-scale > documentation ([1])? Feel free to use it any way you want. :-) > > Cheers, > Arnaud. > > [1] https://docs.openstack.org/large-scale/ > > On 06.07.22 - 10:20, Gorka Eguileor wrote: > > On 05/07, Arnaud wrote: > > > Hey, > > > > > > Thanks for your answer!
> > > OK, I understand the why ;) also because we hit some issues on our deployment. > > > So we increase the number of threads to 100 but we also enable the deferred deletion (keeping in mind the quota usage downsides that it brings). > > > > Hi Arnaud, > > > > Deferred deletion should reduce the number of required native threads > > since delete calls will complete faster. > > > > > > > We also disabled the periodic task to compute usage and use the less precise way from db. > > > > Are you referring to the 'rbd_exclusive_cinder_pool' configuration > > option? Because that should already have the optimum default (True > > value). > > > > > > > First question here: do you think we are going the right path? > > > > > > One thing we are not yet sure is how to calculate correctly the number of threads to use. > > > Should we do basic math with the number of deletion per minutes? Or should we take the number of volumes in the backend into account? Something in the middle? > > > > Native threads on the RBD driver are not only used for deletion, they > > are used for *all* RBD calls. > > > > We haven't defined any particular method to calculate the optimum number > > of threads on a system, but I can think of 2 possible avenues to > > explore: > > > > - Performance testing: Run a set of tests with a high number of > > concurrent requests and different operations and see how Cinder > > performs. I wouldn't bother with individual attach and detach to VM > > operations because those are noops on the Cinder side, creating volume > > from image with either different images or cache disabled would be > > better. > > > > - Reporting native thread usage: To really know if the number of native > > threads is sufficient or not you could modify the Cinder volume > > manager (and possibly also eventlet.tpool to gather statistics on the > > number of used/free native threads and number of queued requests that > > are waiting for a native thread to pick them up. 
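A bare-bones sketch of that kind of bookkeeping (a framework-agnostic illustration, not Cinder code; an actual patch would hook eventlet.tpool.execute instead of a standalone class):

```python
import threading

class NativePoolStats:
    """Count in-flight and queued jobs for a fixed-size native-thread pool."""

    def __init__(self, pool_size):
        self.pool_size = pool_size
        self._lock = threading.Lock()
        self.in_flight = 0       # jobs currently holding a native thread
        self.max_in_flight = 0   # high-water mark since start
        self.queued = 0          # submissions that found the pool full

    def submit(self, func, *args, **kwargs):
        with self._lock:
            if self.in_flight >= self.pool_size:
                self.queued += 1  # this call would wait for a free thread
            self.in_flight += 1
            self.max_in_flight = max(self.max_in_flight, self.in_flight)
        try:
            return func(*args, **kwargs)
        finally:
            with self._lock:
                self.in_flight -= 1

stats = NativePoolStats(pool_size=20)
result = stats.submit(lambda x: x * 2, 21)  # stand-in for an RBD call
```

If max_in_flight regularly approaches pool_size, or queued keeps growing, the configured pool is likely undersized for the workload.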
> > > > Cheers, > > Gorka. > > > > > > > > Thanks! > > > > > > Arnaud > > > > > > > > > > > > On 5 July 2022 at 18:06:14 GMT+02:00, Rajat Dhasmana wrote: > > > >Hi Arnaud, > > > > > > > >We discussed this in last week's cinder meeting and unfortunately we > > > >haven't tested it thoroughly so we don't have any performance numbers to > > > >share. > > > >What we can tell is the reason why RBD requires a higher number of native > > > >threads. RBD calls C code which could potentially block green threads hence > > > >blocking the main operation therefore all of the calls in RBD to execute > > > >operations are wrapped to use native threads so depending on the operations > > > >we want to > > > >perform concurrently, we can set the value of > > > >backend_native_threads_pool_size for RBD. > > > > > > > >Thanks and regards > > > >Rajat Dhasmana > > > > > > > >On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote: > > > > > > > >> Hey all, > > > >> > > > >> Is there any recommendation on the number of threads to use when using > > > >> RBD backend (option backend_native_threads_pool_size)? > > > >> The doc is saying that 20 is the default but it should be increased, > > > >> especially for the RBD driver, but up to which value? > > > >> > > > >> Is there anyone tuning this parameter in their openstack deployments? > > > >> > > > >> If yes, maybe we can add some recommendations on openstack large-scale > > > >> doc about it? > > > >> > > > >> Cheers, > > > >> > > > >> Arnaud. > > > >> > > > >> > > > From gthiemonge at redhat.com Wed Jul 6 13:36:08 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Wed, 6 Jul 2022 15:36:08 +0200 Subject: [Octavia] Proposing Tom Weininger as core reviewer In-Reply-To: References: Message-ID: Hi, Thanks for the feedback, I have added Tom to the Octavia core reviewer group! Greg On Tue, Jun 28, 2022 at 7:50 PM Michael Johnson wrote: > +1, Tom has been doing great work.
> > Michael > > On Tue, Jun 28, 2022 at 7:29 AM Adam Harwell wrote: > >> +1 from me as well! >> >> On Tue, Jun 28, 2022 at 6:42 AM Anna Taraday >> wrote: >> >>> +1 for Tom >>> >>> Thank you for your hard work! >>> >>> On Tue, Jun 28, 2022 at 5:35 PM Gregory Thiemonge >>> wrote: >>> >>>> Hi Folks, >>>> >>>> I would like to propose Tom Weininger as a core reviewer for the >>>> Octavia project. >>>> Since he joined the project, Tom has been a major contributor to >>>> Octavia, and he is an excellent reviewer. >>>> >>>> Please send your feedback in this thread, if there is no objection >>>> until next week, we will add him to the list of core reviewers. >>>> >>>> Thanks >>>> Gregory >>>> >>> >>> >>> -- >>> >>> Ann Taraday >>> >>> Senior Software Engineer >>> ataraday at mirantis.com >>> >>> -- >> Thanks, >> --Adam >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmilan2006 at gmail.com Wed Jul 6 16:30:49 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Wed, 6 Jul 2022 22:00:49 +0530 Subject: Zun connector for persistent shared files system Manila In-Reply-To: References: Message-ID: Hi Hongbin, Thanks a lot. I saw that the fuxi driver was there earlier, but it is discontinued now. It seems worthwhile to revive it for Manila. Also, there is docker support for NFS volumes: https://docs.docker.com/storage/volumes/ Can something be done to support this? I am ready to test if somebody is ready to develop it, and to help with the development if your team points me at the relevant hooks. Regards, Vaibhav On Tue, Jul 5, 2022 at 12:54 PM Hongbin Lu wrote: > Hi Vaibhav, > > In the current state, only Cinder is supported. In theory, Manila can be added > as another storage backend. I will check if anyone is interested in contributing > this feature. > > Best regards, > Hongbin > > On Fri, Jul 1, 2022 at 9:40 PM Vaibhav wrote: > >> Hi, >> >> I am using zun for running containers and managing them. >> I also deployed cinder for persistent storage, and it is working fine.
>> >> I want to mount my Manila shares to be mounted on containers managed by >> Zun. >> >> I can see a Fuxi project and driver for this but it is discontinued now. >> >> With Cinder only one container can use the storage volume at a time. If I >> want to have a shared file system to be mounted on multiple containers >> simultaneously, it is not possible with cinder. >> >> Is there any alternative to Fuxi. is there any other mechanism to use >> docker Volume support for NFS as shown in the link below? >> https://docs.docker.com/storage/volumes/ >> >> Please advise and give a suggestion. >> >> Regards, >> Vaibhav >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Jul 6 17:33:26 2022 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 6 Jul 2022 19:33:26 +0200 Subject: [largescale-sig] Next meeting: July 6th, 15utc In-Reply-To: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org> References: <9aa1acd4-4321-d36d-2482-6f4e417cd41d@openstack.org> Message-ID: Hi everyone, Here is the summary of our SIG meeting today. We discussed our next OpenInfra Live episode as well as the completion of the transition of our documentation to docs.openstack.org. You can read the meeting logs at: https://meetings.opendev.org/meetings/large_scale_sig/2022/large_scale_sig.2022-07-06-15.00.html We'll now pause meetings for the European summer. Our next IRC meeting will be August 31, at 1500utc on #openstack-operators on OFTC. 
Regards, -- Thierry From rosmaita.fossdev at gmail.com Wed Jul 6 19:33:19 2022 From: rosmaita.fossdev at gmail.com (Brian Rosmaita) Date: Wed, 6 Jul 2022 15:33:19 -0400 Subject: Regarding Policy.json entries for glance image update not working for a user In-Reply-To: References: Message-ID: On 7/5/22 8:52 AM, Adivya Singh wrote: > hi Brian, > > Regarding the Policy.json, it is working fine for 3 controllers that each > have an individual Glance container > > But I have another scenario, where only one controller holds the Glance > image, but when I do the same steps, it fails with error code 403. Since the 403 is the default behavior, it sounds to me like your custom policy.yaml file isn't being found in your single-controller setup. Check the [oslo_policy]/policy_file config option in your glance-api.conf to make sure it's got the correct value. That's all I can think of at the moment; maybe someone else will have a better idea. cheers, brian > > Regards > Adivya Singh > > On Wed, Jun 15, 2022 at 2:04 AM Brian Rosmaita > wrote: > > On 6/14/22 2:18 PM, Adivya Singh wrote: > > Hi Takashi, > > > > when a user who is a member uploads an image, the image will be > set to > > private. > > > > This is what he is asking for: access to make it public. The > above rule > > applies only for public images > Alan and Takashi have both given you good advice: > > - By default, Glance assumes that your custom policy file is named > "policy.yaml". If it doesn't have that name, Glance will assume it > does > not exist and will use the defaults defined in code.
You can change > the > filename glance will look for in your glance-api.conf -- look for > [oslo_policy]/policy_file > > - We recommend that you use YAML instead of JSON to write your policy > file because YAML allows comments, which you will find useful in > documenting any changes you make to the file > > - You want to keep the permissions on modify_image at their default > value, because otherwise users won't be able to do simple things like > add image properties to their own images > > - Some image properties can affect the system or other users. Glance > will not allow *any* user to modify some system properties (for > example, > 'id', 'status'), and it requires additional permission along with > modify_image to set 'public' or 'community' for image visibility. > > - It's also possible to configure property protections to require > additional permission to CRUD specific properties (the default setting > is *not* to do this). > > For your particular use case, where you want a specific user to be able > to publicize_image, I would encourage you to think more carefully about > what exactly you want to accomplish. Traditionally, images with > 'public' visibility are provided by the cloud operator, and this gives > image consumers some confidence that there's nothing malicious on the > image. Public images are accessible to all users, and they will > show up > in the default image-list call for all users, so if a public image > contains something nasty, it can spread very quickly. > > Glance provides four levels of image visibility: > > - private: only visible to users in the project that owns the image > > - shared: visible to users in the project that owns the image *plus* > any > projects that are added to the image as "members". (A shared image > with > no members is effectively a private image.) See [0] for info about how > image sharing is designed and what API calls are associated with it.
There are a bunch of policies around this; the defaults are basically > what you'd expect, with the image owner being able to add and delete > members, and image members being able to 'accept' or 'reject' shared > images. > > - community: accessible to everyone, but only visible if you look for > them. See [1] for an explanation of what that means. The ability to > set 'community' visibility on an image is controlled by the > "communitize_image" policy (default is admin-or-owner). > > - public: accessible to everyone, and easily visible to all users. > Controlled by the "publicize_image" policy (default is admin-only). > > You're running your own cloud, so you can configure things however you > like, but I encourage you to think carefully before handing out > publicize_image permission, and consider whether one of the other > visibilities can accomplish what you want. > > For more info, the introductory section on "Images" in the api-ref [2] > has a useful discussion of image properties and image visibility. > > The final thing I want to stress is that you should be sure to test > carefully any policies you define in a custom policy file. You are > actually having a good problem, that is, someone can't do something you > would like them to. The way worse problem happens when in addition to > that someone being able to do what you want them to, a whole bunch of > other users can also do that same thing. > > OK, so to get to your particular issue: > > - you don't want to change the "modify_image" policy in the way you > proposed in your email, because no one (other than the person having > the > 'user' role) will be able to do any kind of image updates. > > - if you decide to give that user publicize_image permissions, be > careful how you do it. For example, > "publicize_image": "role:user" > won't allow an admin to make images public (unless you also give each > admin the 'user' role).
If you look at most of the policies in the Xena > policy.yaml.sample, they begin "role:admin or ...". > > - the reason you were seeing the 403 when you tried to do > openstack image set --public > as the user with the 'user' property is that you were allowed to > modify_image but when you tried to change the visibility, you did not > have permission (because the default for that is role:admin) > > Hope this helps! Once you get this figured out, you may want to put up > a patch to update the Glance documentation around policies. I think > everything said above is in there somewhere, but it may not be in the > most obvious places. > > Actually, there is one more thing. The above all applies to Xena, but > there's been some work around policies in Yoga and more happening in > Zed, so be sure to read the Glance release notes when you eventually > upgrade. > > [0] > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html > > [1] > https://specs.openstack.org/openstack/glance-specs/specs/api/v2/sharing-image-api-v2.html#sharing-images-with-all-users > > [2] https://docs.openstack.org/api-ref/image/v2/index.html#images > > regards > > Adivya Singh > > On Tue, Jun 14, 2022 at 10:54 AM Takashi Kajinami wrote: > > Glance has a separate policy rule (publicize_image) for > > creating/updating public images, > > and you should define that policy rule instead of modify_image. > > > > https://docs.openstack.org/glance/xena/admin/policies.html > > > > ~~~ > > |publicize_image| - Create or update public images > > ~~~ > > > > AFAIK The modify_image policy defaults to rule:default and is > > allowed for any users > > as long as the target image is owned by that user. > > > > On Tue, Jun 14, 2022 at 2:01 PM Adivya Singh wrote: > > > > Hi Brian, > > > > Please find the response > > > > 1> i am using Xena release version 24.0.1 > > > > Now the scenario is like below, my customer wants to have > > their login access on setting up the properties of an image > > to the public. now what i did is > > > > 1> i created a role in openstack using the admin credential > > name as "user" > > 2> i assigned that user to a role user. > > 3> i assigned those user to those project id, which they > > want to access as a user role > > > > Then i went to Glance container which is controlled by lxc > > and made a policy.yaml file as below > > > > root at aio1-glance-container-724aa778:/etc/glance# cat policy.yaml > > > > "modify_image": "role:user" > > > > then i went to utility container and try to set the > > properties of a image using openstack command > > > > openstack image set --public > > > > and then i got this error > > > > HTTP 403 Forbidden: You are not authorized to complete > > publicize_image action. > > > > Even when i am trying the upload image with this user , i > > get the above error only > > > > export OS_ENDPOINT_TYPE=internalURL > > export OS_INTERFACE=internalURL > > export OS_USERNAME=adsingh > > export OS_PASSWORD='adsingh' > > export OS_PROJECT_NAME=adsingh > > export OS_TENANT_NAME=adsingh > > export OS_AUTH_TYPE=password > > export OS_AUTH_URL=https:// horizon>:5000/v3 > > export OS_NO_CACHE=1 > > export OS_USER_DOMAIN_NAME=Default > > export OS_PROJECT_DOMAIN_NAME=Default > > export OS_REGION_NAME=RegionOne > > > > Regards > > Adivya Singh > > > > On Mon, Jun 13, 2022 at 6:41 PM Alan Bishop wrote: > > > > On Mon, Jun 13, 2022 at 6:00 AM Brian Rosmaita wrote: > > > > On 6/13/22 8:29 AM, Adivya Singh wrote: > > > hi Team, > > > > > > Any thoughts on this > > > > Hi Adivya, > > > > Please supply some more information, for example: > > > > - which openstack release you are using > > - the full API request you are making to modify the image > > - the full API response you receive > > - whether the user with "role:user" is in the same project that owns the image > > - debug level log extract for this call if you have it > > - anything else that could be relevant, for example, have you modified > > any other policies, and if so, what values are you using now? > > > > Also bear in mind that the default policy_file name is > > "policy.yaml" (not .json). You either > > need to provide a policy.yaml file, or override the > > policy_file setting if you really want to > > use policy.json. > > > > Alan > > > > cheers, > > brian > > > > > I have a use case where I have to give a user > > > restriction on updating the image properties as a member. > > > > > > I have created a policy Json file and give the modify_image rule to > > > the particular role, but still it is not working > > > > > > "modify_image": "role:user", This role is created in OpenStack. > > > > > > but still it is failing while updating properties with a > > > particular user assigned to a role as "access denied" and > > > unauthorized access > > > > > > Regards > > > Adivya Singh From openstack at garloff.de Wed Jul 6 20:51:13 2022 From: openstack at garloff.de (Kurt Garloff) Date: Wed, 6 Jul 2022 22:51:13 +0200 Subject: openstackclient: booting from image-created volume w/ delete_on_termination In-Reply-To: References: Message-ID: Hi Dmitriy, thanks for your response! It looks like this option is new in Wallaby and it looks like it would address my use-case. I'll grab newer client utils and see whether it works. -- Kurt On 06.07.22 13:56, Dmitriy Rabotyagov wrote: > Hey! > > delete_on_termination is a flag that is provided to nova during volume > attachment basically. > So in case of attaching volume to existing server, you can provide > that flag to attachment: > > openstack server add volume --enable-delete-on-termination > --os-compute-api-version 2.79 $UUID > > It's more complex when we're talking about server creation though. > What openstackclient allows you to do, is to provide --block-device > flag, where you can provide extra specs to the mapping.
You can check > doc on how to use it here > https://docs.openstack.org/python-openstackclient/yoga/cli/command-objects/server.html#cmdoption-openstack-server-create-block-device > > For API call it would be block_device_mapping_v2 option provided with > request which supports roughly the same format: > https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server > > On Wed, 6 Jul 2022 at 13:21, Kurt Garloff wrote: >> Hi openstack CLI wizards, >> >> Having flavors without disks, I want volumes to be created from images on the fly that I can boot from. >> But I don't want to do bookkeeping for these volumes; they should disappear once the server disappears. >> >> On the command line with nova, I can do this: >> nova boot --nic net-name=$MYNET --key-name $MYKEY --flavor $MYFLAVOR \ >> --block-device "id=$MYIMGIFD,source=image,dest=volume,size=$MYSIZE,shutdown=remove,bootindex=0" $MYNAME >> >> I did not find a way to do this with openstack server create. >> Is there one? [...] From gmann at ghanshyammann.com Wed Jul 6 21:12:31 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 06 Jul 2022 16:12:31 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 7 July 2022 at 1500 UTC In-Reply-To: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com> References: <181ca09af2b.ac5a635c132555.2514523884960142985@ghanshyammann.com> Message-ID: <181d55b7d9f.d9489bd0147866.9174330688028119568@ghanshyammann.com> Hello Everyone, Below is the agenda for Today's TC IRC meeting scheduled at 1500 UTC.
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * Checks on Zed cycle tracker ** https://etherpad.opendev.org/p/tc-zed-tracker * CentOS-stream-9 testing stability and collaboration with centos-stream maintainers * RBAC feedback in ops meetup ** https://etherpad.opendev.org/p/rbac-zed-ptg#L171 ** https://review.opendev.org/c/openstack/governance/+/847418 * Create the Environmental Sustainability SIG ** https://review.opendev.org/c/openstack/governance-sigs/+/845336 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 04 Jul 2022 11:27:21 -0500 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 7 July 2022, at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, 6 July at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From allison at openinfra.dev Wed Jul 6 21:12:53 2022 From: allison at openinfra.dev (Allison Price) Date: Wed, 6 Jul 2022 16:12:53 -0500 Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack? In-Reply-To: References: Message-ID: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev> I wanted to follow up on this thread as well as I know highlighting some of this work and perhaps even doing a live demo on OpenInfra Live was something that was discussed. Andy and Radoslaw - would this be something you would be interested in helping to move forward? If there are others that would like to help drive, please let me know. 
Cheers, Allison > On Jul 4, 2022, at 3:33 AM, Rados?aw Piliszek wrote: > > Just a quick follow up - I was permitted to share a pre-published > version of the article I was citing in my email from June 4th. [1] > Please enjoy responsibly. :-) > > [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf > > Cheers, > Radek > -yoctozepto > > On Mon, 27 Jun 2022 at 17:21, Rados?aw Piliszek > wrote: >> >> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote: >>> >>> Hi Rados?aw, >> >> Hi Andy, >> >> Sorry for the late reply, been busy vacationing and then dealing with COVID-19. >> >>>> First of all, wow, that looks very interesting and in fact very much >>>> what I'm looking for. As I mentioned in the original message, the >>>> things this solution lacks are not something blocking for me. >>>> Regarding the approach to Guacamole, I know that it's preferable to >>>> have guacamole extension (that provides the dynamic inventory) >>>> developed rather than meddle with the internal database but I guess it >>>> is a good start. >>> >>> An even better approach would be something like the Guacozy project >>> (https://guacozy.readthedocs.io) >> >> I am not convinced. The project looks dead by now. [1] >> It offers a different UI which may appeal to certain users but I think >> sticking to vanilla Guacamole should do us right... For the time being >> at least. ;-) >> >>> They were able to use the Guacmole JavaScript libraries directly to >>> embed the HTML5 desktop within a React? app. I think this is a much >>> better approach, and I'd love to be able to do something similar in >>> the future. Would make the integration that much nicer. >> >> Well, as an example of embedding in the UI - sure. But it does not >> invalidate the need to modify Guacamole's database or write an >> extension to it so that it has the necessary creds. 
>> >>>> >>>> Any "quickstart setting up" would be awesome to have at this stage. As >>>> this is a Django app, I think I should be able to figure out the bits >>>> and bolts to get it up and running in some shape but obviously it will >>>> impede wider adoption. >>> >>> Yeah I agree. I'm in the process of documenting it, so I'll aim to get >>> a quickstart guide together. >>> >>> I have a private repo with code to set up a development environment >>> which uses Heat and Ansible - this might be the quickest way to get >>> started. I'm happy to share this with you privately if you like. >> >> I'm interested. Please share it. >> >>>> On the note of adoption, if I find it usable, I can provide support >>>> for it in Kolla [1] and help grow the project's adoption this way. >>> >>> Kolla could be useful. We're already using containers for this project >>> now, and I have a helm chart for deploying to k8s. >>> https://github.com/NeCTAR-RC/bumblebee-helm >> >> Nice! The catch is obviously that some orgs frown upon K8s because >> they lack the necessary know-how. >> Kolla by design avoids the use of K8s. OpenStack components are not >> cloud-native anyway so benefits of using K8s are diminished (yet it >> makes sense to use K8s if there is enough experience with it as it >> makes certain ops more streamlined and simpler this way). >> >>> Also, an important part is making sure the images are set up correctly >>> with XRDP, etc. Our images are built using Packer, and the config for >>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images >> >> Ack, thanks for sharing. >> >>>> Also, since this is OpenStack-centric, maybe you could consider >>>> migrating to OpenDev at some point to collaborate with interested >>>> parties using a common system? >>>> Just food for thought at the moment. >>> >>> I think it would be more appropriate to start a new project. I think >>> our codebase has too many assumptions about the underlying cloud. 
>>> >>> We inherited the code from another project too, so it's got twice the cruft. >> >> I see. Well, that's good to know at least. >> >>>> Writing to let you know I have also found the following related paper: [1] >>>> and reached out to its authors in the hope to enable further >>>> collaboration to happen. >>>> The paper is not open access so I have only obtained it for myself and >>>> am unsure if licensing permits me to share, thus I also asked the >>>> authors to share their copy (that they have copyrights to). >>>> I have obviously let them know of the existence of this thread. ;-) >>>> Let's stay tuned. >>>> >>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12 >>> >>> This looks interesting. A collaboration would be good if there is >>> enough interest in the community. >> >> I am looking forward to the collaboration happening. This could really >> liven up the OpenStack VDI. >> >> [1] https://github.com/paidem/guacozy/ >> >> -yoctozepto > From vishwanath.ne at gmail.com Wed Jul 6 21:24:15 2022 From: vishwanath.ne at gmail.com (Vishwanath) Date: Wed, 6 Jul 2022 14:24:15 -0700 Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely Message-ID: Hello all, I have upgraded openstack, we are currently running on 14.1.0. I noticed fluentd container restarting indefinitely. The message I see under /var/log/kolla/fluentd/fluentd.log is as follows, Any thoughts on how to fix this ? 
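For anyone hitting the same Fluent::ConfigError: this is the generic message fluentd emits when a plugin declares a required 'format' option that the loaded configuration does not set — the parser filter (fluent-plugin-parser, visible in the gem list below) is one such plugin. A hypothetical fragment for illustration only — the tag, key and regexp are invented, not taken from the Kolla-generated config:

```apacheconf
# Sketch of a config section that would trigger the error: the old
# fluent-plugin-parser filter requires a 'format' parameter.
<filter myapp.**>
  @type parser
  key_name Payload
  # format ...   <- missing here; fluentd aborts at startup with
  # "config error ... error="'format' parameter is required""
</filter>

# One possible fix: state the format explicitly.
<filter myapp.**>
  @type parser
  key_name Payload
  format /^(?<level>\w+) (?<message>.*)$/
</filter>
```

Comparing the generated /etc/td-agent/td-agent.conf — and in particular any custom fluentd overrides merged into it — against each plugin's required options is one way to locate the offending section.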
I noticed a similar issue back in 2017 from this post - https://bugs.launchpad.net/kolla-ansible/+bug/1663126 *error log:* *2022-07-06 21:20:42 +0000 [error]: config error file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError error="'format' parameter is required"* *full logs:* 2022-07-06 21:20:42 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf" 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.3' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '4.1.1' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grep' version '0.3.4' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-kafka' version '0.14.1' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-parser' version '0.6.1' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.3' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version '1.8.2' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.0.2' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.3.0' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-s3' version '1.4.0' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-systemd' version '1.0.2' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0' 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.2.5' 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '1.11.2' 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '0.12.43' 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, 
/^(cinder-api-access|cloudkitty-api-access|gnocchi-api-access|horizon-access|keystone-apache-admin-access|keystone-apache-public-access|monasca-api-access|octavia-api-access|placement-api-access)$/, "", "apache_access", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(aodh_wsgi_access|barbican_api_uwsgi_access|zun_api_wsgi_access|vitrage_wsgi_access)$/, "", "wsgi_access", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(sahara-api|sahara-engine)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(neutron-server|neutron-openvswitch-agent|neutron-ns-metadata-proxy|neutron-metadata-agent|neutron-l3-agent|neutron-dhcp-agent)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(magnum-conductor|magnum-api)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(keystone)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(heat-engine|heat-api|heat-api-cfn)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(glance-api)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cloudkitty-storage-init|cloudkitty-processor|cloudkitty-dbsync|cloudkitty-api)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(ceilometer-polling|ceilometer-agent-notification)$/, "", "openstack_python", nil] 
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(barbican-api|barbican-worker|barbican-keystone-listener|barbican-db-manage|app)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(aodh-notifier|aodh-listener|aodh-evaluator|aodh-dbsync)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cyborg-api|cyborg-conductor|cyborg-agent)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(cinder-api|cinder-scheduler|cinder-manage|cinder-volume|cinder-backup|privsep-helper)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(mistral-server|mistral-engine|mistral-executor)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(designate-api|designate-central|designate-manage|designate-mdns|designate-sink|designate-worker)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(manila-api|manila-data|manila-manage|manila-share|manila-scheduler)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(trove-api|trove-conductor|trove-manage|trove-taskmanager)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(murano-api|murano-engine)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(senlin-api|senlin-conductor|senlin-engine|senlin-health-manager)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(watcher-api|watcher-applier|watcher-db-manage|watcher-decision-engine)$/, "", "openstack_python", nil] 2022-07-06 
21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(freezer-api|freezer-api_access|freezer-manage)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(octavia-api|octavia-health-manager|octavia-housekeeping|octavia-worker)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(zun-api|zun-compute|zun-cni-daemon)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(kuryr-server)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(gnocchi-api|gnocchi-statsd|gnocchi-metricd|gnocchi-upgrade)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(ironic-api|ironic-conductor|ironic-inspector)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(tacker-server|tacker-conductor)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(vitrage-ml|vitrage-notifier|vitrage-graph|vitrage-persistor)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(blazar-api|blazar-manager)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(monasca-api|monasca-notification|monasca-persister|agent-collector|agent-forwarder|agent-statsd)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /^(masakari-engine|masakari-api)$/, "", "openstack_python", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: programname [#, /.+/, "", "unmatched", nil] 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload [#, /^\d{6}/, "", 
"infra.mariadb.mysqld_safe", nil]
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload
[#, /^\d{4}-\d{2}-\d{2}/, "", "infra.mariadb.mysqld", nil]

*2022-07-06 21:20:42 +0000 [error]: config error
file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError
error="'format' parameter is required"*

Thanks
Vish

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ces.eduardo98 at gmail.com  Wed Jul  6 21:30:24 2022
From: ces.eduardo98 at gmail.com (Carlos Silva)
Date: Wed, 6 Jul 2022 18:30:24 -0300
Subject: [manila] Stepping down from manila core
In-Reply-To:
References:
Message-ID:

On Tue, 5 Jul 2022 at 15:50, Rodrigo Barbieri <rodrigo.barbieri2010 at gmail.com> wrote:

> Hello fellow zorillas,
>
> It has been a long time since I started to hope every day I'd be able to
> dedicate more time to manila core activities, and so far that hasn't
> happened; I don't see it happening in the near foreseeable future. I had
> been following the meeting notes weekly until ~2 months ago, but I recently
> ended up dropping those as well.
>
> Therefore I am stepping down from the manila core role. I would like to
> thank everyone that I worked closely with from 2014 to 2019 on this
> project. I hold this project and all of you dear to my heart and I am
> extremely glad and grateful to have worked with you and met you on
> summits/PTGs, as the life memories around Manila are among the best I have
> from that period.
>

Rodrigo, thank you for your contributions in various ways to Manila during
all these years. You helped us to shape many features and served as core
for a long time. I have worked with you closely for some time and I learned
a lot from you. I wish you all the best.

> If someday circumstances change, the manila project and its community will
> be ones I will be very happy to go back to working closely with again.
>

And we would be lucky to have you back!
> Kind regards,
> --
> Rodrigo Barbieri
> MSc Computer Scientist

Regards,
carloss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vinhducnguyen1708 at gmail.com  Thu Jul  7 04:27:21 2022
From: vinhducnguyen1708 at gmail.com (Vinh Nguyen Duc)
Date: Thu, 7 Jul 2022 11:27:21 +0700
Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri)
Message-ID:

I have a problem with I/O performance on OpenStack HDD-backed block devices.

*Environment:*
*OpenStack version: Ussuri*
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64
- KVM: qemu-kvm-5.1.0-20.el8

*Ceph version: Octopus 15.2.8-0.el8.x86_64*
- OS: CentOS8
- Kernel: 4.18.0-240.15.1.el8_3.x86_64

In the Ceph cluster we have 2 classes:
- Bluestore
- HDD (only for cinder volumes)
- SSD (images, cinder volumes)

*Hardware:*
- Ceph-client: 2x10Gbps (bond) MTU 9000
- Ceph-replicate: 2x10Gbps (bond) MTU 9000

*VM:*
- Swap off
- non-LVM

*Issue*
When creating a VM on OpenStack using an HDD-backed cinder volume, write performance is really poor: 60-85 MB/s. Tests with ioping also show high latency.

*Diagnostics*
1. I have checked the performance between the compute host (OpenStack) and Ceph by creating an RBD image (HDD class) and mounting it on the compute host. There the performance is 300-400 MB/s.
=> So I think the problem is in the hypervisor.
However, when I check performance in a VM using an SSD-backed cinder volume, the result equals the performance of an SSD RBD image mounted on a compute host.
2. I have already configured disk_cachemodes="network=writeback" (and enabled the RBD client cache), and also tested with disk_cachemodes="none", but there is no difference.
3. iperf3 from a compute host to a random Ceph host still reaches 20Gb of traffic.
4. The compute hosts and Ceph hosts are connected to the same (layer 2) switch.

Where else can I look for issues? Please help me in this case.
Thank you.

-------------- next part --------------
An HTML attachment was scrubbed...
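One way to make the host-vs-guest comparison in point 1 directly comparable is to run an identical fio job against both block devices. A sketch of such a job file — the device path is an assumption for illustration, not taken from the report, and must be adjusted to the actual volume (note fio will overwrite the target device):

```ini
; fio job for comparing sequential write throughput. Run it once inside
; the guest against the attached cinder volume, and once on the compute
; host against an rbd-mapped image from the same HDD pool.
[global]
ioengine=libaio
; direct=1 bypasses the page cache so cache modes do not mask the numbers
direct=1
rw=write
bs=4M
size=1G
iodepth=16

[seq-write]
; assumed device name of the cinder volume inside the guest - adjust
filename=/dev/vdb
```

Repeating the same job with rw=randwrite and bs=4k, together with ioping on the same device, would help separate a throughput bottleneck from a latency one.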
URL:

From tobias.rydberg at cleura.com  Thu Jul  7 07:54:06 2022
From: tobias.rydberg at cleura.com (Tobias Rydberg)
Date: Thu, 7 Jul 2022 09:54:06 +0200
Subject: [publiccloud-sig] A new start for Public Cloud SIG
Message-ID:

Hi everyone,

In Berlin it became clear that there is a big interest in restarting the Public Cloud SIG. Thank you all for your contributions in that forum session [0] and the interest in participating in the work of this SIG.

A lot of good ideas about what we should focus on were identified, with a clear focus on interoperability and standardization, to make the experience of using OpenStack as an end-user even better. Standardization of images and flavors — naming, metadata, etc. — is one of them; working closely with the InterOp WG regarding the checks and governance of the OpenStack Powered Program is another. The ultimate goal could be to reach a state where it is possible to start to federate between the public clouds, but for that to be possible on a more global scale we need to start with aligning the simple things.

To kick this off, we will start with bi-weekly IRC meetings again, shape the goals, and kick off some work towards the identified goals. Since we have an IRC channel (#openstack-publiccloud), my suggestion is that we start there. Let's decide on suggestions for the day and time of our bi-weekly meetings during the kick-off meeting.

Kick off meeting
===========
When: Wednesday 10th of August at 1400 UTC
Where: IRC in channel #openstack-publiccloud

I created an etherpad for our first meeting [1]; feel free to add items to the agenda or other suggestions on goals etc. that you might have prior to the meeting.

Hope to see you on IRC in August!

[0] https://etherpad.opendev.org/p/berlin-summit-future-public-cloud-sig
[1] https://etherpad.opendev.org/p/publiccloud-sig-kickoff

BR,
Tobias Rydberg

-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3626 bytes
Desc: S/MIME Cryptographic Signature
URL:

From skaplons at redhat.com  Thu Jul  7 07:59:26 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 07 Jul 2022 09:59:26 +0200
Subject: [all][TC] Bare rechecks stats update
Message-ID: <4937933.kRPvX5JM0G@p1>

Hi,

New stats from last 7 days about bare rechecks in each team are available in [1]:

+--------------------+---------------+--------------+-------------------+
| Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
+--------------------+---------------+--------------+-------------------+
| skyline            | 1             | 1            | 100.0             |
| OpenStack Charms   | 10            | 10           | 100.0             |
| kolla              | 92            | 92           | 100.0 (was 76.19%)|
| sahara             | 21            | 21           | 100.0             |
| tacker             | 5             | 5            | 100.0             |
| keystone           | 17            | 17           | 100.0             |
| OpenStack-Helm     | 15            | 15           | 100.0 (was 70%)   |
| rally              | 1             | 1            | 100.0             |
| barbican           | 2             | 2            | 100.0             |
| Telemetry          | 3             | 3            | 100.0             |
| ec2-api            | 5             | 5            | 100.0             |
| zun                | 2             | 2            | 100.0             |
| kuryr              | 30            | 32           | 93.75 (was 92.5%) |
| cinder             | 89            | 102          | 87.25 (was 81.33%)|
| Puppet OpenStack   | 13            | 15           | 86.67 (was 85.29%)|
| horizon            | 17            | 20           | 85.0 (was 100%)   |
| Quality Assurance  | 26            | 32           | 81.25 (was 64.44%)|
| manila             | 70            | 89           | 78.65 (was 74%)   |
| ironic             | 77            | 101          | 76.24 (was 79.66%)|
| tripleo            | 230           | 318          | 72.33 (was 87.19%)|
| swift              | 5             | 7            | 71.43 (was 75%)   |
| glance             | 8             | 12           | 66.67 (was 50%)   |
| designate          | 9             | 14           | 64.29 (was 64.71%)|
| nova               | 10            | 18           | 55.56 (was 84.21%)|
| octavia            | 6             | 11           | 54.55 (was 16.67%)|
| OpenStackSDK       | 13            | 25           | 52.0 (was 74.19%) |
| neutron            | 26            | 54           | 48.15 (was 73.68%)|
| heat               | 3             | 8            | 37.5 (was 25%)    |
| requirements       | 1             | 3            | 33.33 (was 87.5)  |
| oslo               | 1             | 3            | 33.33 (was 0%)    |
| Release Management | 0             | 2            | 0.0               |
| OpenStackAnsible   | 0             | 34           | 0.0 (was 13.64%)  |
+--------------------+---------------+--------------+-------------------+

There are some teams which made progress since last check (especially *OpenStackAnsible*
which had all rechecks done with at least some comment given - thx a lot for that :))

[1] https://etherpad.opendev.org/p/recheck-weekly-summary

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 488 bytes
Desc: This is a digitally signed message part.
URL:

From pierre at stackhpc.com  Thu Jul  7 08:48:48 2022
From: pierre at stackhpc.com (Pierre Riteau)
Date: Thu, 7 Jul 2022 10:48:48 +0200
Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely
In-Reply-To:
References:
Message-ID:

Hello Vish,

Are you using a custom configuration for fluentd? Could you please share your generated td-agent.conf?

Best wishes,
Pierre Riteau (priteau)

On Wed, 6 Jul 2022 at 23:34, Vishwanath wrote:
> Hello all,
>
> I have upgraded openstack, we are currently running on 14.1.0. I noticed
> fluentd container restarting indefinitely. The message I see under
> /var/log/kolla/fluentd/fluentd.log is as follows, Any thoughts on how to
> fix this ?
> I noticed a similar issue back in 2017 from this post -
> https://bugs.launchpad.net/kolla-ansible/+bug/1663126
>
> *error log:*
> *2022-07-06 21:20:42 +0000 [error]: config error
> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError
> error="'format' parameter is required"*
>
> [full fluentd startup log trimmed - it is quoted verbatim in the original message earlier in this digest]
>
> Thanks
> Vish

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From radoslaw.piliszek at gmail.com  Thu Jul  7 09:26:48 2022
From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=)
Date: Thu, 7 Jul 2022 11:26:48 +0200
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>
References: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev>
Message-ID:

Hi Allison,

I am also in touch with folks at rz.uni-freiburg.de who are also interested in this topic.
We might be able to gather a panel for discussion. I think we need to introduce the topic properly with some presentations and then move on to a discussion if time allows (I believe it will, as the time slot is 1h and the presentations should not be overly detailed for an introductory session). Cheers, Radek -yoctozepto On Wed, 6 Jul 2022 at 23:12, Allison Price wrote: > > I wanted to follow up on this thread as well as I know highlighting some of this work and perhaps even doing a live demo on OpenInfra Live was something that was discussed. > > Andy and Radoslaw - would this be something you would be interested in helping to move forward? If there are others that would like to help drive, please let me know. > > Cheers, > Allison > > > On Jul 4, 2022, at 3:33 AM, Radosław Piliszek wrote: > > > > Just a quick follow up - I was permitted to share a pre-published > > version of the article I was citing in my email from June 4th. [1] > > Please enjoy responsibly. :-) > > > > [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf > > > > Cheers, > > Radek > > -yoctozepto > > > > On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek > > wrote: > >> > >> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote: > >>> > >>> Hi Radosław, > >> > >> Hi Andy, > >> > >> Sorry for the late reply, been busy vacationing and then dealing with COVID-19. > >> > >>>> First of all, wow, that looks very interesting and in fact very much > >>>> what I'm looking for. As I mentioned in the original message, the > >>>> things this solution lacks are not something blocking for me. > >>>> Regarding the approach to Guacamole, I know that it's preferable to > >>>> have a guacamole extension (that provides the dynamic inventory) > >>>> developed rather than meddle with the internal database but I guess it > >>>> is a good start. 
> >>> > >>> An even better approach would be something like the Guacozy project > >>> (https://guacozy.readthedocs.io) > >> > >> I am not convinced. The project looks dead by now. [1] > >> It offers a different UI which may appeal to certain users but I think > >> sticking to vanilla Guacamole should do us right... For the time being > >> at least. ;-) > >> > >>> They were able to use the Guacamole JavaScript libraries directly to > >>> embed the HTML5 desktop within a React app. I think this is a much > >>> better approach, and I'd love to be able to do something similar in > >>> the future. Would make the integration that much nicer. > >> > >> Well, as an example of embedding in the UI - sure. But it does not > >> invalidate the need to modify Guacamole's database or write an > >> extension to it so that it has the necessary creds. > >> > >>>> > >>>> Any "quickstart setting up" would be awesome to have at this stage. As > >>>> this is a Django app, I think I should be able to figure out the bits > >>>> and bolts to get it up and running in some shape but obviously it will > >>>> impede wider adoption. > >>> > >>> Yeah I agree. I'm in the process of documenting it, so I'll aim to get > >>> a quickstart guide together. > >>> > >>> I have a private repo with code to set up a development environment > >>> which uses Heat and Ansible - this might be the quickest way to get > >>> started. I'm happy to share this with you privately if you like. > >> > >> I'm interested. Please share it. > >> > >>>> On the note of adoption, if I find it usable, I can provide support > >>>> for it in Kolla [1] and help grow the project's adoption this way. > >>> > >>> Kolla could be useful. We're already using containers for this project > >>> now, and I have a helm chart for deploying to k8s. > >>> https://github.com/NeCTAR-RC/bumblebee-helm > >> > >> Nice! The catch is obviously that some orgs frown upon K8s because > >> they lack the necessary know-how. 
> >> Kolla by design avoids the use of K8s. OpenStack components are not > >> cloud-native anyway so benefits of using K8s are diminished (yet it > >> makes sense to use K8s if there is enough experience with it as it > >> makes certain ops more streamlined and simpler this way). > >> > >>> Also, an important part is making sure the images are set up correctly > >>> with XRDP, etc. Our images are built using Packer, and the config for > >>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images > >> > >> Ack, thanks for sharing. > >> > >>>> Also, since this is OpenStack-centric, maybe you could consider > >>>> migrating to OpenDev at some point to collaborate with interested > >>>> parties using a common system? > >>>> Just food for thought at the moment. > >>> > >>> I think it would be more appropriate to start a new project. I think > >>> our codebase has too many assumptions about the underlying cloud. > >>> > >>> We inherited the code from another project too, so it's got twice the cruft. > >> > >> I see. Well, that's good to know at least. > >> > >>>> Writing to let you know I have also found the following related paper: [1] > >>>> and reached out to its authors in the hope to enable further > >>>> collaboration to happen. > >>>> The paper is not open access so I have only obtained it for myself and > >>>> am unsure if licensing permits me to share, thus I also asked the > >>>> authors to share their copy (that they have copyrights to). > >>>> I have obviously let them know of the existence of this thread. ;-) > >>>> Let's stay tuned. > >>>> > >>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12 > >>> > >>> This looks interesting. A collaboration would be good if there is > >>> enough interest in the community. > >> > >> I am looking forward to the collaboration happening. This could really > >> liven up the OpenStack VDI. 
> >> > >> [1] https://github.com/paidem/guacozy/ > >> > >> -yoctozepto > > > From eyalb1 at gmail.com Thu Jul 7 10:04:47 2022 From: eyalb1 at gmail.com (Eyal B) Date: Thu, 7 Jul 2022 13:04:47 +0300 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: Message-ID: Hello, Will the licenses be renewed? They ended on July 5. Eyal On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote: > Sorry for the typo, It'd be July 5, 2022 > > > On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote: > >> Hi Swapnil, >> >> We're at July 2021 already - so they expire at the end of this month? >> >> >> >> *From: *Swapnil Kulkarni >> *Date: *Tuesday, 6 July 2021 at 17:50 >> *To: *openstack-discuss at lists.openstack.org < >> openstack-discuss at lists.openstack.org> >> *Subject: *[all] PyCharm Licenses Renewed till July 2021 >> >> Hello, >> >> >> >> Happy to inform you the open source developer license for Pycharm has >> been renewed for 1 additional year till July 2021. >> >> >> Best Regards, >> Swapnil Kulkarni >> coolsvap at gmail dot com >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Jul 7 10:06:28 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 7 Jul 2022 12:06:28 +0200 Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri) In-Reply-To: References: Message-ID: <20220707100628.zdaikq5knnyzktxo@localhost> On 07/07, Vinh Nguyen Duc wrote: > I have a problem with I/O performance on Openstack block device HDD. 
> > *Environment:**Openstack version: Ussuri* > - OS: CentOS8 > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > - KVM: qemu-kvm-5.1.0-20.el8 > *CEPH version: Octopus * *15.2.8-0.el8.x84_64* > - OS: CentOS8 > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > In CEPH Cluster we have 2 class: > - Bluestore > - HDD (only for cinder volume) > - SSD (images, cinder volume) > *Hardware:* > - Ceph-client: 2x10Gbps (bond) MTU 9000 > - Ceph-replicate: 2x10Gbps (bond) MTU 9000 > *VM:* > - Swapoff > - non LVM > > *Issue*When create VM on Openstack using cinder volume HDD, have really > poor performance: 60-85 MB/s writes. And when tests with ioping have high > latency. > *Diagnostic* > 1. I have checked the performance between Compute Host (Openstack) and > CEPH, and created an RBD (HDD class) mounted on Compute Host. And the > performance is 300-400 MB/s. Hi, I probably won't be able to help you on the hypervisor side, but I have a couple of questions that may help narrow down the issue: - Are Cinder volumes using encryption? - How did you connect the volume to the Compute Host, using krbd or rbd-nbd? - Do both RBD images (Cinder and yours) have the same Ceph flags? - Did you try connecting to the Compute Host the same RBD image created by Cinder instead of creating a new one? Cheers, Gorka. > => So i think the problem is in the hypervisor > But when I check performance on VM using cinder Volume SSD, the result > equals performance when test RBD (SSD) mounted on a Compute host. > 2. I already have to configure disk_cachemodes="network=writeback"(and > enable rbd cache client) or test with disk_cachemodes="none" but nothing > different. > 3. Push iperf3 from compute host to random ceph host still has 20Gb > traffic. > 4. Compute Host and CEPH host connected to the same switch (layer2). > Where else can I look for issues? > Please help me in this case. > Thank you. 
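A hedged sketch of how one might follow up on that last question - mapping the very RBD image Cinder created directly on the compute host and benchmarking it there, instead of benchmarking a freshly created image. Pool and volume names below are placeholders (Cinder images are typically named volume-<UUID> in the configured pool), and this assumes krbd, fio and ioping are available on the compute host; it is not a command sequence from the thread:

```shell
# Sketch only: placeholder names, adjust pool, UUID, user and keyring.
# Map the exact image Cinder created, so the comparison uses the same
# object layout and image features as the slow VM disk:
rbd map <hdd-pool>/volume-<cinder-volume-uuid> --id cinder \
    --keyring /etc/ceph/ceph.client.cinder.keyring

# Sequential-write throughput and latency on the mapped device
# (destructive: only do this on a scratch volume):
fio --name=seqwrite --filename=/dev/rbd0 --rw=write --bs=1M \
    --iodepth=16 --direct=1 --ioengine=libaio --time_based --runtime=30
ioping -c 10 /dev/rbd0

rbd unmap /dev/rbd0
```

If the same Cinder image performs well here but poorly inside the VM, the problem is on the hypervisor/QEMU side rather than in Ceph.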
From geguileo at redhat.com Thu Jul 7 10:15:08 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 7 Jul 2022 12:15:08 +0200 Subject: [large-scale][cinder] backend_native_threads_pool_size option with rbd backend In-Reply-To: <20220706082014.qu34mzuhmb3uma2j@localhost> References: <664ED09D-5DFD-4FFF-B1C8-978554D8FB6C@gmail.com> <20220706082014.qu34mzuhmb3uma2j@localhost> Message-ID: <20220707101508.omgqkhyvlxmuvgry@localhost> On 06/07, Gorka Eguileor wrote: > On 05/07, Arnaud wrote: > > Hey, > > > > Thanks for your answer! > > OK, I understand the why ;) also because we hit some issues on our deployment. > > So we increase the number of threads to 100 but we also enable the deferred deletion (keeping in mind the quota usage downsides that it brings). > > Hi Arnaud, > > Deferred deletion should reduce the number of required native threads > since delete calls will complete faster. > > > > We also disabled the periodic task to compute usage and use the less precise way from db. > > Are you referring to the 'rbd_exclusive_cinder_pool' configuration > option? Because that should already have the optimum default (True > value). > > > > First question here: do you think we are going the right path? > > > > One thing we are not yet sure is how to calculate correctly the number of threads to use. > > Should we do basic math with the number of deletion per minutes? Or should we take the number of volumes in the backend into account? Something in the middle? > > Native threads on the RBD driver are not only used for deletion, they > are used for *all* RBD calls. > > We haven't defined any particular method to calculate the optimum number > of threads on a system, but I can think of 2 possible avenues to > explore: > > - Performance testing: Run a set of tests with a high number of > concurrent requests and different operations and see how Cinder > performs. 
I wouldn't bother with individual attach and detach to VM > operations because those are noops on the Cinder side, creating volume > from image with either different images or cache disabled would be > better. > > - Reporting native thread usage: To really know if the number of native > threads is sufficient or not you could modify the Cinder volume > manager (and possibly also eventlet.tpool to gather statistics on the > number of used/free native threads and number of queued requests that > are waiting for a native thread to pick them up. > Hi Arnaud, I just realized you should also be able to use Guru Meditation Reports [1] to get the native threads that are executing at a given time. Cinder uses multiple processes, one for the parent and one for each individual backend, so the PID that should be used to send the signal is not the parent. We can get GMR in the logs for all backend with: $ ps -C cinder-volume -o pid --no-headers | tail -n +2 | xargs sudo kill -SIGUSR2 Then go into the "Threads" section and see how many native threads there are. Cheers, Gorka. [1]: https://docs.openstack.org/nova/queens/reference/gmr.html > Cheers, > Gorka. > > > > > Thanks! > > > > Arnaud > > > > > > > > Le 5 juillet 2022 18:06:14 GMT+02:00, Rajat Dhasmana a ?crit?: > > >Hi Arnaud, > > > > > >We discussed this in last week's cinder meeting and unfortunately we > > >haven't tested it thoroughly so we don't have any performance numbers to > > >share. > > >What we can tell is the reason why RBD requires a higher number of native > > >threads. RBD calls C code which could potentially block green threads hence > > >blocking the main operation therefore all of the calls in RBD to execute > > >operations are wrapped to use native threads so depending on the operations > > >we want to > > >perform concurrently, we can set the value of > > >backend_native_threads_pool_size for RBD. 
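The pool behaviour described in the quoted explanation above can be illustrated with a small stdlib-only toy. This is not Cinder's actual code (Cinder wraps librbd calls with eventlet.tpool so they don't block green threads); the pool size and call counts below are made up for illustration, showing how a fixed pool of N native threads queues anything beyond N concurrent blocking calls:

```python
# Toy illustration of native-thread pool sizing, NOT Cinder's
# eventlet.tpool code: blocking "librbd" calls are handed to a fixed
# pool of native threads, so at most POOL_SIZE run at once and the
# rest wait in the pool's queue.
import time
from concurrent.futures import ThreadPoolExecutor

POOL_SIZE = 20       # plays the role of backend_native_threads_pool_size
N_CALLS = 60         # concurrent blocking "librbd" calls
CALL_SECONDS = 0.05  # how long each blocking call takes


def blocking_rbd_call(i):
    time.sleep(CALL_SECONDS)  # stands in for a blocking C-library call
    return i


start = time.monotonic()
with ThreadPoolExecutor(max_workers=POOL_SIZE) as pool:
    results = list(pool.map(blocking_rbd_call, range(N_CALLS)))
elapsed = time.monotonic() - start

# 60 calls / 20 threads -> at least 3 "waves" of 0.05 s each.
print(f"finished {len(results)} calls in {elapsed:.2f}s "
      f"(lower bound {N_CALLS / POOL_SIZE * CALL_SECONDS:.2f}s)")
```

With 60 simulated calls and a pool of 20, the work completes in at least three waves; that queueing delay is exactly what a too-small backend_native_threads_pool_size produces under concurrent load.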
> > > > > >Thanks and regards > > >Rajat Dhasmana > > > > > >On Mon, Jun 27, 2022 at 9:35 PM Arnaud Morin wrote: > > > > > >> Hey all, > > >> > > >> Is there any recommendation on the number of threads to use when using > > >> RBD backend (option backend_native_threads_pool_size)? > > >> The doc is saying that 20 is the default but it should be increased, > > >> specially for the RBD driver, but up to which value? > > >> > > >> Is there anyone tuning this parameter in their openstack deployments? > > >> > > >> If yes, maybe we can add some recommendations on openstack large-scale > > >> doc about it? > > >> > > >> Cheers, > > >> > > >> Arnaud. > > >> > > >> From dmellado at redhat.com Thu Jul 7 11:08:54 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Thu, 7 Jul 2022 13:08:54 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: Message-ID: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> Just noticed that as well, thanks for bringing this up Eyal! On 7/7/22 12:04, Eyal B wrote: > Hello, > > Will the licenses be renewed ? they ended on July 5 > > Eyal > > On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni > wrote: > > Sorry for the typo, It'd be July 5, 2022 > > > On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray > wrote: > > Hi Swapnil,____ > > We?re at July 2021 already ? so they expire at the end of this > month?____ > > __ __ > > *From: *Swapnil Kulkarni > > *Date: *Tuesday, 6 July 2021 at 17:50 > *To: *openstack-discuss at lists.openstack.org > > > > *Subject: *[all] PyCharm Licenses Renewed till July 2021____ > > Hello,____ > > __ __ > > Happy to inform you the open source developer?license for > Pycharm has been renewed for 1 additional year till July 2021. 
> > Best Regards, > Swapnil Kulkarni > coolsvap at gmail dot com From smooney at redhat.com Thu Jul 7 11:32:43 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Jul 2022 12:32:43 +0100 Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures. Message-ID: hi o/ It looks like neutron recently moved linuxbridge to be experimental: Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is experimental and has to be explicitly enabled in 'cfg.CONF.experimental' We do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it is enabled in our job config as one of the configured mech drivers even if we don't install the agent. I have not yet looked up which change in neutron caused this, but I am going to propose a patch to nova to disable it, and I likely need to fix os-vif too. So if you see a POST_FAILURE in either the nova-next or nova-ovs-hybrid-plug jobs (or look into it and see "die 2385 'Neutron did not start'" in the Run devstack task summary), that is why it is failing. I'll update this thread once we have fixed the issue. From smooney at redhat.com Thu Jul 7 11:58:43 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Jul 2022 12:58:43 +0100 Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybrid-plug failures. In-Reply-To: References: Message-ID: I have filed a bug for this, https://bugs.launchpad.net/os-vif/+bug/1980948 and submitted two patches for os-vif and nova: https://review.opendev.org/q/topic:bug%252F1980948 Other projects might also be affected by the change introduced in https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110 Until those are merged, please continue to hold off rechecking nova or os-vif CI failures. 
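For deployments (outside the gate) that genuinely still need the linuxbridge mech driver, the error quoted above points at an explicit opt-in. Assuming the section and option names match the 'cfg.CONF.experimental' hint in the error message and the referenced neutron commit, the neutron.conf fragment would look something like this:

```ini
# Hedged sketch: opt in to the now-experimental linuxbridge driver.
# Section/option names assumed from the 'cfg.CONF.experimental' error.
[experimental]
linuxbridge = true
```

For the CI jobs above, though, the proposed fix goes the other way: drop linuxbridge from the configured mech drivers rather than opting in.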
On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote: > hi o/ > > it looks like neutron recently moved linuxbridge to be experimental > Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is > experimental and has to be explicitly enabled in 'cfg.CONF.experimental' > > we do not actully deploy it in nova-next or nova-ovs-hybrid-plug but it is enabeld in our job config as one of the configured mech drivers > even if we dont install the agent. > > i have not looked up what change in neutron change this yet but im going to propose a patch to nova to disable it and i likely need to fix os-vif too > so if you see a post fialure in either the nova-next of novs hybrid plug jobs (and or look into it and see " die 2385 'Neutron did not start'" in the > Run devstack task summery that is why this is failing. > > ill update this thread once we have fixed the issue. > From smooney at redhat.com Thu Jul 7 12:26:20 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 07 Jul 2022 13:26:20 +0100 Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri) In-Reply-To: <20220707100628.zdaikq5knnyzktxo@localhost> References: <20220707100628.zdaikq5knnyzktxo@localhost> Message-ID: <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com> On Thu, 2022-07-07 at 12:06 +0200, Gorka Eguileor wrote: > On 07/07, Vinh Nguyen Duc wrote: > > I have a problem with I/O performance on Openstack block device HDD. 
> > > > *Environment:**Openstack version: Ussuri* > > - OS: CentOS8 > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > - KVM: qemu-kvm-5.1.0-20.el8 > > *CEPH version: Octopus * *15.2.8-0.el8.x84_64* > > - OS: CentOS8 > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > In CEPH Cluster we have 2 class: > > - Bluestore > > - HDD (only for cinder volume) > > - SSD (images, cinder volume) > > *Hardware:* > > - Ceph-client: 2x10Gbps (bond) MTU 9000 > > - Ceph-replicate: 2x10Gbps (bond) MTU 9000 > > *VM:* > > - Swapoff > > - non LVM > > > > *Issue*When create VM on Openstack using cinder volume HDD, have really > > poor performance: 60-85 MB/s writes. And when tests with ioping have high > > latency. > > *Diagnostic* > > 1. I have checked the performance between Compute Host (Openstack) and > > CEPH, and created an RBD (HDD class) mounted on Compute Host. And the > > performance is 300-400 MB/s. > > Hi, > > I probably won't be able to help you on the hypervisor side, but I have > > a couple of questions that may help narrow down the issue: > > - Are Cinder volumes using encryption? If you are not using encryption, you might be encountering a librados issue tracked downstream by https://bugzilla.redhat.com/show_bug.cgi?id=1897572 - this is unfixable without updating to a newer version of the Ceph client libs. > > - How did you connect the volume to the Compute Host, using krbd or > > rbd-nbd? In Ussuri we still technically have the workaround options to use krbd, but they are deprecated and removed in Xena: https://github.com/openstack/nova/blob/stable/ussuri/nova/conf/workarounds.py#L277-L329 In general, using these options might invalidate any support agreement you may have with a vendor. We are aware of at least one edge case currently where enabling this with encrypted volumes breaks live migration, potentially causing data loss. 
https://bugs.launchpad.net/nova/+bug/1939545 There is a backport in flight for the fix to Train, https://review.opendev.org/q/topic:bug%252F1939545 but it has only been backported to Wallaby so far, so it is not safe to enable those options and use live migration today. You should also be aware that to enable this option on a host you need to drain the host first, then enable the option and cold migrate instances to the host. Live migration between hosts with local attach enabled and disabled is not supported. If you want to disable it again in the future, which you will have to do to upgrade to Xena, you need to cold migrate all instances again. So if you are deploying your own version of Ceph and can move to a newer version which has the librados performance enhancement feature, that is operationally less painful than using these workarounds. The only reason we developed this workaround to use krbd in nova was because our hands were tied downstream: we could not ship a new version of Ceph but needed to support a release with this performance limitation for multiple years. So unless you are in a similar situation, upgrading Ceph, and ensuring you use the new versions of the Ceph libs with QEMU and a new enough QEMU to leverage the performance enhancements, is the best option. With those disclaimers, you may want to consider evaluating those workaround options, but keep in mind the limitations, and the fact that you cannot live migrate until that bug is fixed, before considering using them in production. > > - Do both RBD images (Cinder and yours) have the same Ceph flags? > > - Did you try connecting to the Compute Host the same RBD image created > by Cinder instead of creating a new one? > > Cheers, > Gorka. > > > => So i think the problem is in the hypervisor > > But when I check performance on VM using cinder Volume SSD, the result > > equals performance when test RBD (SSD) mounted on a Compute host. > > 2. 
I already have to configure disk_cachemodes="network=writeback"(and > > enable rbd cache client) or test with disk_cachemodes="none" but nothing > > different. > > 3. Push iperf3 from compute host to random ceph host still has 20Gb > > traffic. > > 4. Compute Host and CEPH host connected to the same switch (layer2). > > Where else can I look for issues? > > Please help me in this case. > > Thank you. > > From vinhducnguyen1708 at gmail.com Thu Jul 7 12:40:19 2022 From: vinhducnguyen1708 at gmail.com (Vinh Nguyen Duc) Date: Thu, 7 Jul 2022 19:40:19 +0700 Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri) In-Reply-To: <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com> References: <20220707100628.zdaikq5knnyzktxo@localhost> <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com> Message-ID: Thanks for your email. We are not using volume encryption. If this were a librados bug, I would expect to also see some effect on throughput when the VM uses an SSD volume, and the performance of a Ceph HDD image mounted directly on the compute host is still good. We have already disabled debug in ceph.conf. On Thu, 7 Jul 2022 at 19:26 Sean Mooney wrote: > On Thu, 2022-07-07 at 12:06 +0200, Gorka Eguileor wrote: > > On 07/07, Vinh Nguyen Duc wrote: > > > I have a problem with I/O performance on Openstack block device HDD. 
> > > > > > *Environment:**Openstack version: Ussuri* > > > - OS: CentOS8 > > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > > - KVM: qemu-kvm-5.1.0-20.el8 > > > *CEPH version: Octopus * *15.2.8-0.el8.x84_64* > > > - OS: CentOS8 > > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > > In CEPH Cluster we have 2 class: > > > - Bluestore > > > - HDD (only for cinder volume) > > > - SSD (images, cinder volume) > > > *Hardware:* > > > - Ceph-client: 2x10Gbps (bond) MTU 9000 > > > - Ceph-replicate: 2x10Gbps (bond) MTU 9000 > > > *VM:* > > > - Swapoff > > > - non LVM > > > > > > *Issue*When create VM on Openstack using cinder volume HDD, have really > > > poor performance: 60-85 MB/s writes. And when tests with ioping have > high > > > latency. > > > *Diagnostic* > > > 1. I have checked the performance between Compute Host (Openstack) and > > > CEPH, and created an RBD (HDD class) mounted on Compute Host. And the > > > performance is 300-400 MB/s. > > > > Hi, > > > > I probably won't be able to help you on the hypervisor side, but I have > > a couple of questions that may help narrow down the issue: > > > > - Are Cinder volumes using encryption? > if you are not using encyrption you might be encountering librados issue > tracked downstream by https://bugzilla.redhat.com/show_bug.cgi?id=1897572 > this is unfixable without moving to updating to a new version fo the cpeh > client > libs. > > > > - How did you connect the volume to the Compute Host, using krbd or > > rbd-nbd? > in ussuri we still technially have the workaround options to use krbd but > they are > deprecated and removed in xena. > > https://github.com/openstack/nova/blob/stable/ussuri/nova/conf/workarounds.py#L277-L329= > in generaly using these options might invlaidate any support agreement you > may have with a vendeor. > > we are aware of at least once edgecase currently where enableing this with > encyrpted volume breaks live > migration potentally causeing dataloss. 
> https://bugs.launchpad.net/nova/+bug/1939545 > there is a backport inflight for the fix to train > https://review.opendev.org/q/topic:bug%252F1939545 > but its only been backported to wallaby so far so it is not safe to enable > those options and use live migration > today. > > you should also be aware that to enabel this optionon a host you need to > drain the host first then enable the option adn cold > migrate instance to the host. live migration betwen hosts with local > attach enabeld and disabled is not supported. > > if you want to disable it again in the futrue which you will have to do to > upgrade to xena you need to cold migrate all instances > again. > > so if you are deploying your own version fo cpeh and can move to a newer > version which has the librados perforamce enhacment feature > that is operationlly less painful then using these workaround. > > the only reason we developed this workaroudn to use krbd in nova was > because our hands were tieed downstream since we could not ship a new > version of > ceph but needed to support release with this perfromance limiation for > multiple years. so unless your in a simialr situration > upgradeing ceph and ensuring you use the new versions of the ceph libs > with qemu and a new enough qemu to leverave the performance enhancments is > the > best option. > > so with those disclaimer you may want to consider evaluating those > workaround options but keep in mind the limiatation and the fact that you > cannot > live migrate until that bug is fixt before considering using it in > production. > > > > > - Do both RBD images (Cinder and yours) have the same Ceph flags? > > > > - Did you try connecting to the Compute Host the same RBD image created > > by Cinder instead of creating a new one? > > > > Cheers, > > Gorka. 
> > > > > => So i think the problem is in the hypervisor > > > But when I check performance on VM using cinder Volume SSD, the result > > > equals performance when test RBD (SSD) mounted on a Compute host. > > > 2. I already have to configure disk_cachemodes="network=writeback"(and > > > enable rbd cache client) or test with disk_cachemodes="none" but > nothing > > > different. > > > 3. Push iperf3 from compute host to random ceph host still has 20Gb > > > traffic. > > > 4. Compute Host and CEPH host connected to the same switch (layer2). > > > Where else can I look for issues? > > > Please help me in this case. > > > Thank you. > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From geguileo at redhat.com Thu Jul 7 12:47:13 2022 From: geguileo at redhat.com (Gorka Eguileor) Date: Thu, 7 Jul 2022 14:47:13 +0200 Subject: Poor I/O performance on OpenStack block device (OpenStack Centos8:Ussuri) In-Reply-To: References: <20220707100628.zdaikq5knnyzktxo@localhost> <2e3047aacbc40681c308f15f6c8e9384924c5e2d.camel@redhat.com> Message-ID: <20220707124713.gyy72zmzxelf2nt6@localhost> On 07/07, Vinh Nguyen Duc wrote: > Thank for your email > We are not using encryption volume. > > If this is a bug of librados, i do not see any effect of throughput when VM > using volume SSD. > And the performance of ceph HDD mounted directly from compute still good. > Hi, Did the Cinder volume that was performing poorly in the VM perform well when manually connected directly to the Compute Host? Cheers, Gorka. > We already disable debug in ceph.conf > > On Thu, 7 Jul 2022 at 19:26 Sean Mooney wrote: > > > On Thu, 2022-07-07 at 12:06 +0200, Gorka Eguileor wrote: > > > On 07/07, Vinh Nguyen Duc wrote: > > > > I have a problem with I/O performance on Openstack block device HDD. 
> > > > > > > > *Environment:**Openstack version: Ussuri* > > > > - OS: CentOS8 > > > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > > > - KVM: qemu-kvm-5.1.0-20.el8 > > > > *CEPH version: Octopus * *15.2.8-0.el8.x84_64* > > > > - OS: CentOS8 > > > > - Kernel: 4.18.0-240.15.1.el8_3.x86_64 > > > > In CEPH Cluster we have 2 class: > > > > - Bluestore > > > > - HDD (only for cinder volume) > > > > - SSD (images, cinder volume) > > > > *Hardware:* > > > > - Ceph-client: 2x10Gbps (bond) MTU 9000 > > > > - Ceph-replicate: 2x10Gbps (bond) MTU 9000 > > > > *VM:* > > > > - Swapoff > > > > - non LVM > > > > > > > > *Issue*When create VM on Openstack using cinder volume HDD, have really > > > > poor performance: 60-85 MB/s writes. And when tests with ioping have > > high > > > > latency. > > > > *Diagnostic* > > > > 1. I have checked the performance between Compute Host (Openstack) and > > > > CEPH, and created an RBD (HDD class) mounted on Compute Host. And the > > > > performance is 300-400 MB/s. > > > > > > Hi, > > > > > > I probably won't be able to help you on the hypervisor side, but I have > > > a couple of questions that may help narrow down the issue: > > > > > > - Are Cinder volumes using encryption? > > if you are not using encyrption you might be encountering librados issue > > tracked downstream by https://bugzilla.redhat.com/show_bug.cgi?id=1897572 > > this is unfixable without moving to updating to a new version fo the cpeh > > client > > libs. > > > > > > - How did you connect the volume to the Compute Host, using krbd or > > > rbd-nbd? > > in ussuri we still technially have the workaround options to use krbd but > > they are > > deprecated and removed in xena. > > > > https://github.com/openstack/nova/blob/stable/ussuri/nova/conf/workarounds.py#L277-L329= > > in generaly using these options might invlaidate any support agreement you > > may have with a vendeor. 
> > > > we are aware of at least once edgecase currently where enableing this with > > encyrpted volume breaks live > > migration potentally causeing dataloss. > > https://bugs.launchpad.net/nova/+bug/1939545 > > there is a backport inflight for the fix to train > > https://review.opendev.org/q/topic:bug%252F1939545 > > but its only been backported to wallaby so far so it is not safe to enable > > those options and use live migration > > today. > > > > you should also be aware that to enabel this optionon a host you need to > > drain the host first then enable the option adn cold > > migrate instance to the host. live migration betwen hosts with local > > attach enabeld and disabled is not supported. > > > > if you want to disable it again in the futrue which you will have to do to > > upgrade to xena you need to cold migrate all instances > > again. > > > > so if you are deploying your own version fo cpeh and can move to a newer > > version which has the librados perforamce enhacment feature > > that is operationlly less painful then using these workaround. > > > > the only reason we developed this workaroudn to use krbd in nova was > > because our hands were tieed downstream since we could not ship a new > > version of > > ceph but needed to support release with this perfromance limiation for > > multiple years. so unless your in a simialr situration > > upgradeing ceph and ensuring you use the new versions of the ceph libs > > with qemu and a new enough qemu to leverave the performance enhancments is > > the > > best option. > > > > so with those disclaimer you may want to consider evaluating those > > workaround options but keep in mind the limiatation and the fact that you > > cannot > > live migrate until that bug is fixt before considering using it in > > production. > > > > > > > > - Do both RBD images (Cinder and yours) have the same Ceph flags? 
> > > - Did you try connecting to the Compute Host the same RBD image created
> > > by Cinder, instead of creating a new one?
> > >
> > > Cheers,
> > > Gorka.
> > >
> > > > => So I think the problem is in the hypervisor
> > > > But when I check performance on a VM using a cinder volume on SSD, the
> > > > result equals the performance of an RBD (SSD) mounted on a Compute host.
> > > > 2. I already configured disk_cachemodes="network=writeback" (and
> > > > enabled the rbd client cache) and also tested with
> > > > disk_cachemodes="none", but nothing was different.
> > > > 3. iperf3 from the compute host to a random ceph host still shows 20Gb
> > > > of traffic.
> > > > 4. The Compute Host and CEPH host are connected to the same switch (layer 2).
> > > > Where else can I look for issues?
> > > > Please help me in this case.
> > > > Thank you.

From papathanail at uom.edu.gr Thu Jul 7 13:02:42 2022
From: papathanail at uom.edu.gr (GEORGIOS PAPATHANAIL)
Date: Thu, 7 Jul 2022 16:02:42 +0300
Subject: Openstack instance is unreachable
In-Reply-To:
References: <20220628105723.Horde.VEeg85NrLzpcrDTFq6t70GG@webmail.nde.ag>
Message-ID:

Any thoughts?

On Wed, Jun 29, 2022 at 20:06, GEORGIOS PAPATHANAIL <
papathanail at uom.edu.gr> wrote:

> I did the installation based on this
> https://docs.openstack.org/install-guide/openstack-services.html (queens
> version)
>
> In my previous installation (I used VMWare instead of XenServer), the only
> thing that I did was enable promisc mode in vSphere, and the VMs were
> reachable.
>
> I am using the ml2 plugin and linuxbridge (default installation)
>
> Does it need extra configuration?
>
> Thanks
>
>
--
*George Papathanail*
*Associate Researcher*
*Department of Applied Informatics*
*University of Macedonia*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From tolga at etom.com.tr Thu Jul 7 13:29:27 2022
From: tolga at etom.com.tr (tolga at etom.com.tr)
Date: Thu, 07 Jul 2022 16:29:27 +0300
Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack?
In-Reply-To:
References:
Message-ID: <662231657197586@mail.yandex.com.tr>

An HTML attachment was scrubbed...
URL:

From gouthampravi at gmail.com Thu Jul 7 15:10:00 2022
From: gouthampravi at gmail.com (Goutham Pacha Ravi)
Date: Thu, 7 Jul 2022 20:40:00 +0530
Subject: [manila] Stepping down from manila core
In-Reply-To:
References:
Message-ID:

On Thu, Jul 7, 2022 at 3:17 AM Carlos Silva wrote:
>
>
>
> On Tue, Jul 5, 2022 at 15:50, Rodrigo Barbieri wrote:
>>
>> Hello fellow zorillas,
>>
>> It has been a long time since I started to hope every day that I'd be able to dedicate more time to manila core activities, and so far that hasn't happened, nor do I see it happening in the foreseeable future. I had been following the meeting notes weekly until ~2 months ago, but I recently ended up dropping those as well.
>>
>> Therefore I am stepping down from the manila core role. I would like to thank everyone that I worked closely with from 2014 to 2019 on this project. I hold this project and all of you dear to my heart, and I am extremely glad and grateful to have worked with you and met you at summits/PTGs, as the life memories around Manila are among the best I have from that period.
>>
> Rodrigo, thank you for your contributions in various ways to Manila during all these years. You helped us shape many features and served as core for a long time. I have worked with you closely for some time and I learned a lot from you. I wish you all the best.
>>
>> If someday circumstances change, the manila project and its community will be ones I will be very happy to go back to working closely with again.
>>
> And we would be lucky to have you back!

++ What he said; Thank you Rodrigo!
>>
>> Kind regards,
>> --
>> Rodrigo Barbieri
>> MSc Computer Scientist
>
>
> Regards,
> carloss

From fungi at yuggoth.org Thu Jul 7 18:16:21 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 7 Jul 2022 18:16:21 +0000
Subject: [tc] Reminder: August 2022 OpenInfra Board Sync
Message-ID: <20220707181621.4hhm555beap4veie@yuggoth.org>

If you're interested in participating in an informal discussion between
the OpenStack TC, OpenInfra board members, and other interested community
collaborators, don't forget to mark your preferred dates and times by
Friday, 2022-07-15, so that I can let the board members know what our
collective availability looks like:

https://framadate.org/atdFRM8YeUtauSgC

If you don't know what this is about, see my earlier post for details
(I intentionally avoided replying to it in order to increase visibility
for the reminder):

https://lists.openstack.org/pipermail/openstack-discuss/2022-June/029352.html

--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:

From katonalala at gmail.com Thu Jul 7 20:01:46 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Thu, 7 Jul 2022 22:01:46 +0200
Subject: [neutron] Drivers meeting agenda - 08.07.2022.
Message-ID:

Hi Neutron Drivers,

The agenda for tomorrow's drivers meeting is at [1].
* [RFE] Adding the "rekey" parameter to the api for strongswan like "dpd_action" (#link https://bugs.launchpad.net/neutron/+bug/1979044)
* [RFE] Firewall Group Ordering on Port Association (#link https://bugs.launchpad.net/neutron/+bug/1979816)

[1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda

See you at the meeting tomorrow.
Lajos Katona (lajoskatona)
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From smooney at redhat.com Thu Jul 7 22:06:42 2022
From: smooney at redhat.com (Sean Mooney)
Date: Thu, 07 Jul 2022 23:06:42 +0100
Subject: [nova][neutron] do not recheck failing nova-next or
 nova-ovs-hybird-plug failures.
In-Reply-To:
References:
Message-ID: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>

https://review.opendev.org/c/openstack/nova/+/848948 has now merged in nova,
and the os-vif changes have also merged, so the nova/os-vif check and gate
pipelines should now be unblocked and it is ok to recheck (with a reason) if
required.

On Thu, 2022-07-07 at 12:58 +0100, Sean Mooney wrote:
> i have filed a bug for this https://bugs.launchpad.net/os-vif/+bug/1980948 and submitted two patches for os-vif and nova
> https://review.opendev.org/q/topic:bug%252F1980948
> other projects might also be affected by the change introduced in https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110
>
> until those are merged, please continue to hold off rechecking nova or os-vif ci failures.
> On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote:
> > hi o/
> >
> > it looks like neutron recently moved linuxbridge to be experimental
> > Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is
> > experimental and has to be explicitly enabled in 'cfg.CONF.experimental'
> >
> > we do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it is enabled in our job config as one of the configured mech drivers,
> > even though we don't install the agent.
> >
> > i have not looked up which change in neutron changed this yet, but i'm going to propose a patch to nova to disable it, and i likely need to fix os-vif too.
> > so if you see a post failure in either the nova-next or nova-ovs-hybrid-plug jobs (and/or look into it and see "die 2385 'Neutron did not start'" in the
> > Run devstack task summary), that is why this is failing.
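For operators who do still want the linuxbridge mech driver, the error message quoted above points at neutron's opt-in mechanism: the feature has to be explicitly enabled under `cfg.CONF.experimental`. A sketch of the neutron.conf fragment this appears to imply (the exact group and option names should be verified against your neutron release; this is inferred from the log line, not from neutron documentation):

```ini
[experimental]
# Explicitly opt in to the experimental linuxbridge mech driver/agent.
linuxbridge = true
```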
> > > > ill update this thread once we have fixed the issue. > > > From vegkeppnemairamek at gmail.com Fri Jul 8 01:28:15 2022 From: vegkeppnemairamek at gmail.com (Airamek) Date: Fri, 8 Jul 2022 03:28:15 +0200 Subject: Nova-conductor is having trouble with AMQP authentication Message-ID: <5e2ae288-9aa0-7c77-bbf9-47d44737a857@gmail.com> Hello everyone! I've installed OpenStack Ussuri on my home servers (one controller and one compute node, both running OpenSuse Leap 15.3) based on the instructions in the docs(https://docs.openstack.org/install-guide/openstack-services.html#minimal-deployment-for-ussuri) Everything works until I launch an instance in Horizon. The instance gets stuck on "Scheduling". Looking trough the logs I've came across nova-conductor.log(on the controller), which contained the following error log(full log in the attachments): 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. I'm 100% sure my RabbitMQ username and password are set correctly in nova.conf. I've included my nova.conf too, with the passwords removed. I would be really thankful, if someone could point me in the right direction! P.S: Please excuse me for any grammatical or spelling mistakes, English is not my first language. 
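A 403 ACCESS_REFUSED from the AMQPLAIN mechanism means RabbitMQ rejected the user/password/vhost combination that oslo.messaging derived from the transport_url in nova.conf, so it is worth checking exactly which pieces that URL encodes (special characters in the password must be percent-encoded, and a missing or wrong trailing path selects the wrong vhost). A minimal stdlib sketch of how such a URL splits into those three pieces, using a hypothetical transport_url of the usual install-guide form:

```python
from urllib.parse import urlsplit, unquote

# Hypothetical transport_url as set in nova.conf's [DEFAULT] section;
# substitute your own controller host and password.
transport_url = "rabbit://openstack:RABBIT_PASS@controller:5672/"

parts = urlsplit(transport_url)
user = unquote(parts.username or "")
password = unquote(parts.password or "")
# An empty path (or a bare trailing "/") means the default vhost "/".
vhost = unquote(parts.path[1:]) or "/"

print(f"user={user!r} password={password!r} vhost={vhost!r}")
```

These three values are what should be compared against `rabbitmqctl list_users` and `rabbitmqctl list_permissions -p <vhost>` on the broker; a mismatch in any one of them produces exactly this ACCESS_REFUSED error even when the password "looks" correct in the config file.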
-------------- next part -------------- 2022-07-08 00:44:03.933 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Caught SIGTERM, stopping children 2022-07-08 00:44:03.945 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Waiting on 2 children to exit 2022-07-08 00:44:03.949 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Child 24716 killed by signal 15 2022-07-08 00:44:03.950 24662 INFO oslo_service.service [req-8b628570-bee1-4744-9c41-cac8ea76917d - - - - -] Child 24717 killed by signal 15 2022-07-08 00:44:09.847 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Starting 2 workers 2022-07-08 00:44:09.865 25885 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:44:09.873 25830 WARNING oslo_config.cfg [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 00:44:09.877 25886 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:44:09.886 25830 WARNING oslo_config.cfg [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 
2022-07-08 00:48:08.510 25886 ERROR stevedore.extension [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server [req-eb225ffd-9b12-4e80-987b-34f0317082c6 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 
2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 
00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 00:48:08.846 25886 ERROR oslo_messaging.rpc.server 2022-07-08 00:59:06.907 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Caught SIGTERM, stopping children 2022-07-08 00:59:06.912 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Waiting on 2 children to exit 2022-07-08 00:59:06.924 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Child 25886 killed by signal 15 2022-07-08 00:59:06.926 25830 INFO oslo_service.service [req-610a6fd5-5a14-47b6-b7a2-f80490141c47 - - - - -] Child 25885 killed by signal 15 2022-07-08 00:59:13.233 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Starting 2 workers 2022-07-08 00:59:13.250 32521 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:59:13.300 32522 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 00:59:13.296 32454 WARNING oslo_config.cfg [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 
2022-07-08 00:59:13.337 32454 WARNING oslo_config.cfg [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 2022-07-08 01:00:04.052 32521 ERROR stevedore.extension [req-57288225-27ca-413d-8b4e-1e4227b32748 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server [req-57288225-27ca-413d-8b4e-1e4227b32748 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 
2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server [traceback identical to the 00:48:08 occurrence above, ending in amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.]
2022-07-08 01:00:04.270 32521 ERROR oslo_messaging.rpc.server 2022-07-08 01:04:53.610 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Caught SIGTERM, stopping children 2022-07-08 01:04:53.616 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Waiting on 2 children to exit 2022-07-08 01:04:53.627 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Child 32521 killed by signal 15 2022-07-08 01:04:53.630 32454 INFO oslo_service.service [req-f9c2be9a-2d76-4fcc-bc38-04c312f276ba - - - - -] Child 32522 killed by signal 15 2022-07-08 01:04:59.323 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Starting 2 workers 2022-07-08 01:04:59.338 2766 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:04:59.350 2700 WARNING oslo_config.cfg [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future. 2022-07-08 01:04:59.356 2767 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34) 2022-07-08 01:04:59.365 2700 WARNING oslo_config.cfg [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future. 
2022-07-08 01:05:33.816 2767 ERROR stevedore.extension [req-a6193109-e4b0-44b5-a02a-bfa50509dca2 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw'
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server [req-a6193109-e4b0-44b5-a02a-bfa50509dca2 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     accel_uuids=accel_uuids)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     transport_options=self.transport_options)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     with self._get_connection(rpc_common.PURPOSE_SEND) as conn:
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     purpose=purpose)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self.connection = connection_pool.get()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     return self.create()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     return self.connection_cls(self.conf, self.url, purpose)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self.ensure_connection()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self.connection.ensure_connection(errback=on_error)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self._ensure_connection(*args, **kwargs)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     callback, timeout=timeout
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     return fun(*args, **kwargs)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self._connection = self._establish_connection()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     conn = self.transport.establish_connection()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     conn.connect()
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     self.drain_events(timeout=self.connect_timeout)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     while not self.blocking_read(timeout):
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     return self.on_inbound_frame(frame)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     callback(channel, method_sig, buf, None)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     method_sig, payload, content,
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     listener(*args)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server     (class_id, method_id), ConnectionError)
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 01:05:34.004 2767 ERROR oslo_messaging.rpc.server
2022-07-08 01:11:17.263 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Caught SIGTERM, stopping children
2022-07-08 01:11:17.273 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Waiting on 2 children to exit
2022-07-08 01:11:17.274 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Child 2766 killed by signal 15
2022-07-08 01:11:17.278 2700 INFO oslo_service.service [req-65ffbd46-f9be-4b7c-a6b2-144e9932e98d - - - - -] Child 2767 killed by signal 15
2022-07-08 01:11:24.003 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Starting 2 workers
2022-07-08 01:11:24.019 5582 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:11:24.028 5546 WARNING oslo_config.cfg [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future.
2022-07-08 01:11:24.036 5583 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:11:24.038 5546 WARNING oslo_config.cfg [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future.
2022-07-08 01:11:49.934 5583 ERROR stevedore.extension [req-9559f892-d837-4c51-98a0-cb378efc1372 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw'
2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server [req-9559f892-d837-4c51-98a0-cb378efc1372 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
[... traceback identical to the 01:05:34.004 occurrence above ...]
2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 01:11:50.143 5583 ERROR oslo_messaging.rpc.server
2022-07-08 01:17:11.819 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Caught SIGTERM, stopping children
2022-07-08 01:17:11.836 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Waiting on 2 children to exit
2022-07-08 01:17:11.869 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Child 5582 killed by signal 15
2022-07-08 01:17:11.870 5546 INFO oslo_service.service [req-656245cc-7034-49ee-8fa5-31339ef2b978 - - - - -] Child 5583 killed by signal 15
2022-07-08 01:18:47.201 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Starting 2 workers
2022-07-08 01:18:47.243 9104 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:18:47.275 9103 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:18:47.325 8868 WARNING oslo_config.cfg [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future.
2022-07-08 01:18:47.390 8868 WARNING oslo_config.cfg [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future.
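The log shows two distinct failure modes that are worth telling apart: ACCESS_REFUSED (the TCP connection to RabbitMQ succeeds, but the AMQPLAIN login with the credentials from `transport_url` is rejected by the broker) versus the ECONNREFUSED retries that follow (nothing is listening on the broker port at all, so the connection fails before authentication). A minimal sketch of the TCP-level half of that diagnosis, assuming a `transport_url` of the usual `rabbit://user:password@host:port/` form (the `broker_reachable` helper and the sample URL are illustrative, not part of nova or oslo.messaging):

```python
import socket
from urllib.parse import urlsplit

def broker_reachable(transport_url: str, timeout: float = 3.0) -> bool:
    """Return True if a plain TCP connection to the broker host:port succeeds.

    ECONNREFUSED in the log means this step already fails (broker not
    listening); ACCESS_REFUSED means TCP connects fine but the AMQP login
    (user/password/vhost embedded in the URL) is rejected by the broker.
    """
    parts = urlsplit(transport_url)
    host = parts.hostname or "localhost"
    port = parts.port or 5672  # default AMQP port
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # includes ConnectionRefusedError ([Errno 111])
        return False

# Substitute the real transport_url from nova.conf, e.g.:
# broker_reachable("rabbit://openstack:SECRET@controller:5672/")
```

If this returns True while the conductor still logs ACCESS_REFUSED, the problem is the credentials or vhost permissions on the RabbitMQ side rather than connectivity.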
2022-07-08 01:18:47.804 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:47.877 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:49.837 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:49.906 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:53.861 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:53.939 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:59.879 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:18:59.957 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:07.902 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:07.977 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:17.925 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:18.000 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:29.952 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:30.025 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:43.980 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:19:44.053 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:00.010 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:00.082 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:18.044 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:18.114 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:38.079 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:20:38.148 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:00.115 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:00.185 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:24.156 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:24.222 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:50.196 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:21:50.261 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:22:18.239 9104 ERROR oslo.messaging._drivers.impl_rabbit [req-9990be72-77ed-4924-9ecd-7c8ea83cfecb - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:22:18.304 9103 ERROR oslo.messaging._drivers.impl_rabbit [req-90000cde-6ac5-40dc-b825-9078b35379b3 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:22:26.391 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Caught SIGTERM, stopping children
2022-07-08 01:22:26.518 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Waiting on 2 children to exit
2022-07-08 01:22:26.523 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Child 9104 killed by signal 15
2022-07-08 01:22:26.589 8868 INFO oslo_service.service [req-bd8defb3-624b-4008-ae9e-372bad93b72f - - - - -] Child 9103 killed by signal 15
2022-07-08 01:25:10.902 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Starting 2 workers
2022-07-08 01:25:10.937 3134 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:25:10.939 3135 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 01:25:10.934 2697 WARNING oslo_config.cfg [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future.
2022-07-08 01:25:10.955 2697 WARNING oslo_config.cfg [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future.
2022-07-08 01:25:11.160 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:11.179 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:13.175 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:13.195 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:17.193 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:17.210 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:23.211 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:23.227 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 8.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:31.232 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:31.248 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 10.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:41.254 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:41.270 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 12.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:53.280 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:25:53.295 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 14.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:07.309 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:07.322 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 16.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:23.339 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:23.351 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 18.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:41.370 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:26:41.382 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 20.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:01.402 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:01.414 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 22.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:23.437 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:23.449 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 24.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:47.473 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:27:47.485 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 26.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:28:13.511 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:28:13.521 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 28.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:28:41.556 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:28:41.563 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 30.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:29:11.601 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:29:11.606 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:29:43.654 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:29:43.654 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:30:15.700 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:30:15.700 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:30:47.755 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:30:47.756 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:31:19.804 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:31:19.804 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 01:31:51.855 3134 ERROR oslo.messaging._drivers.impl_rabbit
[req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:31:51.855 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:23.901 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:23.902 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:55.958 3134 ERROR oslo.messaging._drivers.impl_rabbit [req-58e45156-9a88-424f-8a34-ae9c6df58a08 - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:32:55.958 3135 ERROR oslo.messaging._drivers.impl_rabbit [req-402af6d8-0524-46c3-be35-9fedcdb67a5a - - - - -] Connection failed: [Errno 111] ECONNREFUSED (retrying in 32.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED 2022-07-08 01:39:42.378 3135 ERROR stevedore.extension [req-d1dd37be-fbd5-40a1-82d6-cc794be38f3d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw' 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server [req-d1dd37be-fbd5-40a1-82d6-cc794be38f3d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 
4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server Traceback (most recent call last): 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server res = self.dispatcher.dispatch(message) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self._do_dispatch(endpoint, method, ctxt, args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server result = func(ctxt, **new_args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server accel_uuids=accel_uuids) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server cctxt.cast(ctxt, 'build_and_run_instance', **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server transport_options=self.transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server transport_options=transport_options) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server with self._get_connection(rpc_common.PURPOSE_SEND) as conn: 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server purpose=purpose) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__ 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.connection = connection_pool.get() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.create() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.connection_cls(self.conf, self.url, purpose) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__ 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.ensure_connection() 2022-07-08 
01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self.connection.ensure_connection(errback=on_error) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self._ensure_connection(*args, **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server callback, timeout=timeout 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return fun(*args, **kwargs) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server self._connection = self._establish_connection() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server 
self.drain_events(timeout=self.connect_timeout) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError) 2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile. 
2022-07-08 01:39:42.618 3135 ERROR oslo_messaging.rpc.server
2022-07-08 02:03:03.123 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'. Trying again in 1 seconds.: amqp.exceptions.ConnectionForced: (0, 0): (320) CONNECTION_FORCED - broker forced connection closure with reason 'shutdown'
2022-07-08 02:03:03.137 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer
2022-07-08 02:03:03.140 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer
2022-07-08 02:03:04.190 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:04.199 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:04.229 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:05.213 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:05.245 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:06.210 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:07.229 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:07.263 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:07.598 3134 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer
2022-07-08 02:03:07.624 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:08.242 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:08.279 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:09.641 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:10.226 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 6 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:10.257 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:10.301 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 4 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:13.486 3135 INFO oslo.messaging._drivers.impl_rabbit [-] A recoverable connection/channel error occurred, trying to reconnect: [Errno 104] Connection reset by peer
2022-07-08 02:03:13.501 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 2.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:13.657 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 6.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:14.273 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:14.317 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: . Trying again in 1 seconds.: amqp.exceptions.RecoverableConnectionError:
2022-07-08 02:03:15.287 3134 ERROR oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:15.331 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 2 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:15.518 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] Connection failed: [Errno 111] ECONNREFUSED (retrying in 4.0 seconds): ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:16.247 3135 ERROR oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] AMQP server on laena.internal.teamorange.hu:5672 is unreachable: [Errno 111] ECONNREFUSED. Trying again in 8 seconds.: ConnectionRefusedError: [Errno 111] ECONNREFUSED
2022-07-08 02:03:17.354 3134 INFO oslo.messaging._drivers.impl_rabbit [-] [72484087-8b6c-4cfb-b0c4-52ef7f7fa16c] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46022.
2022-07-08 02:03:17.418 3135 INFO oslo.messaging._drivers.impl_rabbit [-] [6b7eedb1-5128-450b-accc-0f78a93b563e] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46026.
2022-07-08 02:03:24.285 3135 INFO oslo.messaging._drivers.impl_rabbit [-] [844ee0e5-c6c4-46ca-ad87-d0151cd6ac9b] Reconnected to AMQP server on laena.internal.teamorange.hu:5672 via [amqp] client with port 46054.
2022-07-08 02:03:57.591 3134 ERROR stevedore.extension [req-da7fd51a-06a4-420b-9987-17c5ceb0047f 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw'
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server [req-da7fd51a-06a4-420b-9987-17c5ceb0047f 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     accel_uuids=accel_uuids)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     transport_options=self.transport_options)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     with self._get_connection(rpc_common.PURPOSE_SEND) as conn:
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     purpose=purpose)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self.connection = connection_pool.get()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     return self.create()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     return self.connection_cls(self.conf, self.url, purpose)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self.ensure_connection()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self.connection.ensure_connection(errback=on_error)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self._ensure_connection(*args, **kwargs)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     callback, timeout=timeout
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     return fun(*args, **kwargs)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self._connection = self._establish_connection()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     conn = self.transport.establish_connection()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     conn.connect()
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     self.drain_events(timeout=self.connect_timeout)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     while not self.blocking_read(timeout):
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     return self.on_inbound_frame(frame)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     callback(channel, method_sig, buf, None)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     method_sig, payload, content,
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     listener(*args)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server     (class_id, method_id), ConnectionError)
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 02:03:57.799 3134 ERROR oslo_messaging.rpc.server
2022-07-08 02:05:17.158 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Caught SIGTERM, stopping children
2022-07-08 02:05:17.264 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Waiting on 2 children to exit
2022-07-08 02:05:17.267 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Child 3134 killed by signal 15
2022-07-08 02:05:17.268 2697 INFO oslo_service.service [req-9449b74b-519f-4206-af91-a28562c646ea - - - - -] Child 3135 killed by signal 15
2022-07-08 02:05:30.091 19820 INFO oslo_service.service [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Starting 2 workers
2022-07-08 02:05:30.113 19911 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 02:05:30.137 19820 WARNING oslo_config.cfg [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Deprecated: Option "auth_strategy" from group "api" is deprecated for removal ( The only non-default choice, ``noauth2``, is for internal development and testing purposes only and should not be used in deployments. This option and its middleware, NoAuthMiddleware[V2_18], will be removed in a future release. ). Its value may be silently ignored in the future.
2022-07-08 02:05:30.165 19912 INFO nova.service [-] Starting conductor node (version 21.2.2-21.2.2~dev12-lp152.1.34)
2022-07-08 02:05:30.173 19820 WARNING oslo_config.cfg [req-aa2efa96-8b64-4db9-a74b-ddb22587846f - - - - -] Deprecated: Option "api_servers" from group "glance" is deprecated for removal ( Support for image service configuration via standard keystoneauth1 Adapter options was added in the 17.0.0 Queens release. The api_servers option was retained temporarily to allow consumers time to cut over to a real load balancing solution. ). Its value may be silently ignored in the future.
2022-07-08 02:06:19.381 19911 ERROR stevedore.extension [req-41a6d1e9-1ffd-496f-bf84-ddad3226456d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Could not load 'oslo_cache.etcd3gw': No module named 'etcd3gw': ModuleNotFoundError: No module named 'etcd3gw'
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server [req-41a6d1e9-1ffd-496f-bf84-ddad3226456d 741cb0b6280cb6fbedf1d8c6df4fc854b3e177e331441e019b961839089154d6 9b41f3e228984401a7642f94cd47dc73 - 4596c7f3b97740b3adfa8f82b6240654 4596c7f3b97740b3adfa8f82b6240654] Exception during message handling: amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/conductor/manager.py", line 1655, in schedule_and_build_instances
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     accel_uuids=accel_uuids)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/nova/compute/rpcapi.py", line 1448, in build_and_run_instance
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     cctxt.cast(ctxt, 'build_and_run_instance', **kwargs)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/rpc/client.py", line 152, in cast
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     transport_options=self.transport_options)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/transport.py", line 128, in _send
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 654, in send
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     transport_options=transport_options)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 616, in _send
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     with self._get_connection(rpc_common.PURPOSE_SEND) as conn:
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 570, in _get_connection
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     purpose=purpose)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/common.py", line 423, in __init__
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     self.connection = connection_pool.get()
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 109, in get
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     return self.create()
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/pool.py", line 146, in create
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     return self.connection_cls(self.conf, self.url, purpose)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 618, in __init__
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     self.ensure_connection()
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/oslo_messaging/_drivers/impl_rabbit.py", line 735, in ensure_connection
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     self.connection.ensure_connection(errback=on_error)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 389, in ensure_connection
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     self._ensure_connection(*args, **kwargs)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 445, in _ensure_connection
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     callback, timeout=timeout
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/utils/functional.py", line 344, in retry_over_time
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     return fun(*args, **kwargs)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.6/site-packages/kombu/connection.py", line 874, in _connection_factory
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server     self._connection = self._establish_connection()
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server   File
"/usr/lib/python3.6/site-packages/kombu/connection.py", line 809, in _establish_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server conn = self.transport.establish_connection() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/kombu/transport/pyamqp.py", line 130, in establish_connection 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server conn.connect() 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 320, in connect 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server self.drain_events(timeout=self.connect_timeout) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 508, in drain_events 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server while not self.blocking_read(timeout): 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 514, in blocking_read 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server return self.on_inbound_frame(frame) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/method_framing.py", line 55, in on_frame 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server callback(channel, method_sig, buf, None) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/connection.py", line 521, in on_inbound_method 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server method_sig, payload, content, 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File "/usr/lib/python3.6/site-packages/amqp/abstract_channel.py", line 145, in dispatch_method 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server listener(*args) 2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server File 
"/usr/lib/python3.6/site-packages/amqp/connection.py", line 651, in _on_close
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server (class_id, method_id), ConnectionError)
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server amqp.exceptions.AccessRefused: (0, 0): (403) ACCESS_REFUSED - Login was refused using authentication mechanism AMQPLAIN. For details see the broker logfile.
2022-07-08 02:06:19.617 19911 ERROR oslo_messaging.rpc.server
-------------- next part --------------
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu//
my_ip = 10.66.33.14
rootwrap_config=/etc/nova/rootwrap.conf

[api_database]
connection = mysql+pymysql://nova:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu/nova_api

[database]
connection = mysql+pymysql://nova:[ActualPasswordWouldBeHere]@laena.internal.teamorange.hu/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://laena.internal.teamorange.hu:5000/
auth_url = http://laena.internal.teamorange.hu:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = [ActualPasswordWouldBeHere]

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://laena.internal.teamorange.hu:9292

[oslo_concurrency]
lock_path = /var/run/nova

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://laena.internal.teamorange.hu:5000/v3
username = placement
password = [ActualPasswordWouldBeHere]

[neutron]
auth_url = http://laena.internal.teamorange.hu:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = [ActualPasswordWouldBeHere]
service_metadata_proxy = true
metadata_proxy_shared_secret = [ActualPasswordWouldBeHere]

[cinder]
os_region_name = RegionOne

[scheduler]
discover_hosts_in_cells_interval = 300

From kkchn.in at gmail.com Fri Jul 8 05:47:23 2022
From: kkchn.in at gmail.com (KK CHN)
Date: Fri, 8 Jul 2022 11:17:23 +0530
Subject: Cluster management tool for Openstack
Message-ID:

List,

1. Is there any specific tool or project component available to maintain
multiple OpenStack clusters?
2. Is there any concept of a supervisory cluster?

Regards,
Krish
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From katonalala at gmail.com Fri Jul 8 06:38:27 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Fri, 8 Jul 2022 08:38:27 +0200
Subject: [nova][neutron] do not recheck failing nova-next or nova-ovs-hybird-plug failures.
In-Reply-To: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>
References: <423de88694a70e179e5dfb4172fdf879ae95a574.camel@redhat.com>
Message-ID:

Hi,
Thanks for the quick fix, and sorry for the inconvenience.
I pushed a similar patch for rally-openstack to remove linuxbridge from the
mechanism_driver list, as I see the neutron task has no need for linuxbridge:
https://review.opendev.org/c/openstack/rally-openstack/+/849069

Lajos

Sean Mooney wrote on Fri, 8 Jul 2022 at 0:15:

> https://review.opendev.org/c/openstack/nova/+/848948 has now merged in
> nova, and the os-vif change has also merged,
> so the nova/os-vif check and gate pipelines should now be unblocked and
> it is OK to recheck (with a reason) if required.
> > On Thu, 2022-07-07 at 12:58 +0100, Sean Mooney wrote:
> > I have filed a bug for this, https://bugs.launchpad.net/os-vif/+bug/1980948,
> > and submitted two patches for os-vif and nova:
> > https://review.opendev.org/q/topic:bug%252F1980948
> > Other projects might also be affected by the change introduced in
> > https://github.com/openstack/neutron/commit/7f0413c84c4515cd2fae31d823613c4d7ea43110
> >
> > Until those are merged, please continue to hold off rechecking nova or
> > os-vif CI failures.
> > On Thu, 2022-07-07 at 12:32 +0100, Sean Mooney wrote:
> > > hi o/
> > >
> > > It looks like neutron recently moved linuxbridge to be experimental:
> > > Jul 06 16:21:46.640517 ubuntu-focal-rax-ord-0030301377 neutron-server[90491]: ERROR neutron.common.experimental [-] Feature 'linuxbridge' is
> > > experimental and has to be explicitly enabled in 'cfg.CONF.experimental'
> > >
> > > We do not actually deploy it in nova-next or nova-ovs-hybrid-plug, but it is enabled in our job config
> > > as one of the configured mech drivers, even if we don't install the agent.
> > >
> > > I have not looked up which change in neutron changed this yet, but I'm going to propose a patch to nova to disable it, and I likely need to fix os-vif too.
> > > So if you see a POST_FAILURE in either the nova-next or nova-ovs-hybrid-plug jobs (and/or look into it and see "die 2385 'Neutron did not start'" in the
> > > "Run devstack" task summary), that is why it is failing.
> > >
> > > I'll update this thread once we have fixed the issue.
> >
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dmellado at redhat.com Fri Jul 8 07:56:41 2022
From: dmellado at redhat.com (Daniel Mellado)
Date: Fri, 8 Jul 2022 09:56:41 +0200
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com>
References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com>
Message-ID: <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>

So... no news about this? Should we just assume that the licenses will no
longer be renewed? Bummer...

On 7/7/22 13:08, Daniel Mellado wrote:
> Just noticed that as well, thanks for bringing this up Eyal!
>
> On 7/7/22 12:04, Eyal B wrote:
>> Hello,
>>
>> Will the licenses be renewed? They ended on July 5.
>>
>> Eyal
>>
>> On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote:
>>
>>     Sorry for the typo, it'd be July 5, 2022.
>>
>>     On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote:
>>
>>         Hi Swapnil,
>>
>>         We're at July 2021 already - so they expire at the end of this
>>         month?
>>
>>         *From:* Swapnil Kulkarni
>>         *Date:* Tuesday, 6 July 2021 at 17:50
>>         *To:* openstack-discuss at lists.openstack.org
>>         *Subject:* [all] PyCharm Licenses Renewed till July 2021
>>
>>         Hello,
>>
>>         Happy to inform you the open source developer license for
>>         PyCharm has been renewed for 1 additional year till July 2021.
>>
>>         Best Regards,
>>         Swapnil Kulkarni
>>         coolsvap at gmail dot com
>>
>>

From katonalala at gmail.com Fri Jul 8 11:43:27 2022
From: katonalala at gmail.com (Lajos Katona)
Date: Fri, 8 Jul 2022 13:43:27 +0200
Subject: [all] PyCharm Licenses Renewed till July 2021
In-Reply-To: <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>
References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com>
Message-ID:

Hi,
Thanks for asking, I have the same problem, my license also expired this week.

Lajos Katona

Daniel Mellado wrote on Fri, 8 Jul 2022 at 10:04:

> So... no news about this? Should we just assume that the licenses will no
> longer be renewed? Bummer...
>
> On 7/7/22 13:08, Daniel Mellado wrote:
> > Just noticed that as well, thanks for bringing this up Eyal!
> >
> > On 7/7/22 12:04, Eyal B wrote:
> >> Hello,
> >>
> >> Will the licenses be renewed? They ended on July 5.
> >>
> >> Eyal
> >>
> >> On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote:
> >>
> >>     Sorry for the typo, it'd be July 5, 2022.
> >>
> >>     On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote:
> >>
> >>         Hi Swapnil,
> >>
> >>         We're at July 2021 already - so they expire at the end of this
> >>         month?
> >>
> >>         *From:* Swapnil Kulkarni
> >>         *Date:* Tuesday, 6 July 2021 at 17:50
> >>         *To:* openstack-discuss at lists.openstack.org
> >>         *Subject:* [all] PyCharm Licenses Renewed till July 2021
> >>
> >>         Hello,
> >>
> >>         Happy to inform you the open source developer license for
> >>         PyCharm has been renewed for 1 additional year till July 2021.
> >>
> >>         Best Regards,
> >>         Swapnil Kulkarni
> >>         coolsvap at gmail dot com
> >>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From lokendrarathour at gmail.com Fri Jul 8 12:07:35 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Fri, 8 Jul 2022 17:37:35 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL Message-ID: Hi Team, We were trying to install overcloud with SSL enabled for which the UC is installed, but OC install is getting failed at step 4: ERROR :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 600, in urlopen\n chunked=chunked)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, in _make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in connect\n _match_hostname(cert, self.assert_hostname or server_hostname)\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in _match_hostname\n match_hostname(cert, asserted_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % (hostname, dnsnames[0]))\nssl.CertificateError: hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, in urlopen\n _stacktrace=sys.exc_info()[2])\n File \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, in request\n resp = 
self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 138, in _do_create_plugin\n authenticated=False)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 610, in get_discovery\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, in __init__\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 
'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 141, in run\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 517, in search_services\n services = self.list_services()\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 492, in list_services\n if self._is_client_version('identity', 2):\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 460, in _is_client_version\n client = getattr(self, client_name)\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 32, in 
_identity_client\n 'identity', min_version=2, max_version='3.latest')\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in get_endpoint\n return self.session.get_endpoint(auth or self.auth, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n allow_version_hack=allow_version_hack, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 206, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 161, in _do_create_plugin\n 'auth_url is correct. %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s
2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s

PLAY RECAP *********************************************************************
localhost               : ok=0   changed=0   unreachable=0  failed=0  skipped=2    rescued=0  ignored=0
overcloud-controller-0  : ok=437 changed=104 unreachable=0  failed=0  skipped=214  rescued=0  ignored=0
overcloud-controller-1  : ok=436 changed=101 unreachable=0  failed=0  skipped=214  rescued=0  ignored=0
overcloud-controller-2  : ok=431 changed=101 unreachable=0  failed=0  skipped=214  rescued=0  ignored=0
overcloud-novacompute-0 : ok=345 changed=83  unreachable=0  failed=0  skipped=198  rescued=0  ignored=0
undercloud              : ok=28  changed=7   unreachable=0  failed=1  skipped=3    rescued=0  ignored=0

2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

In deploy.sh:

openstack overcloud deploy --templates \
  -r /home/stack/templates/roles_data.yaml \
  --networks-file /home/stack/templates/custom_network_data.yaml \
  --vip-file /home/stack/templates/custom_vip_data.yaml \
  --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml \
  --network-config \
  -e /home/stack/templates/environment.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
  -e /home/stack/templates/ironic-config.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
  -e /home/stack/containers-prepare-parameter.yaml

Additional lines were passed with modifications:

tls-endpoints-public-ip.yaml: passed as is, with the defaults.

enable-tls.yaml:

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Enable SSL on OpenStack Public Endpoints
# description: |
#   Use this environment to pass in certificates for SSL deployments.
#   For these values to take effect, one of the tls-endpoints-*.yaml
#   environments must also be used.
parameter_defaults:
  # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
  # Type: boolean
  HorizonSecureCookies: True

  # Specifies the default CA cert to use if TLS is used for services in the public network.
  # Type: string
  PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'

  # The content of the SSL certificate (without Key) in PEM format.
  # Type: string
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----
  SSLCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----

  # The content of an SSL intermediate CA certificate in PEM format.
  # Type: string
  SSLIntermediateCertificate: ''

  # The content of the SSL Key in PEM format.
  # Type: string
  SSLKey: |
    -----BEGIN PRIVATE KEY-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END PRIVATE KEY-----

  # ******************************************************
  # Static parameters - these are values that must be
  # included in the environment but should not be changed.
  # ******************************************************
  # The filepath of the certificate as it will be stored in the controller.
  # Type: string
  DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem

  # *********************
  # End static parameters
  # *********************

inject-trust-anchor.yaml:

# *******************************************************************
# This file was created automatically by the sample environment
# generator. Developers should use `tox -e genconfig` to update it.
# Users are recommended to make changes to a copy of the file instead
# of the original, if any customizations are needed.
# *******************************************************************
# title: Inject SSL Trust Anchor on Overcloud Nodes
# description: |
#   When using an SSL certificate signed by a CA that is not in the default
#   list of CAs, this environment allows adding a custom CA certificate to
#   the overcloud nodes.
parameter_defaults:
  # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
  # Mandatory. This parameter must be set by the user.
  # Type: string
  SSLRootCertificate: |
    -----BEGIN CERTIFICATE-----
    ----*** CERTIFICATE LINES TRIMMED ***
    -----END CERTIFICATE-----

resource_registry:
  OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml

The procedure to create these files followed "Deploying with SSL" in the TripleO 3.0.0 documentation (openstack.org).

The idea is to deploy the overcloud with SSL enabled, i.e. a self-signed, IP-based certificate, without DNS.

Any idea around this error would be of great help.

--
skype: lokendrarathour
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From swogatpradhan22 at gmail.com Fri Jul 8 12:31:33 2022
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Fri, 8 Jul 2022 18:01:33 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To:
References:
Message-ID:

What is the domain name you have specified in the undercloud.conf file? And
what is the FQDN used for the generation of the SSL cert?

On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, wrote:

> Hi Team,
> We were trying to install the overcloud with SSL enabled, for which the UC
> is installed, but the OC install is failing at step 4:
>
> ERROR
> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
> exact error", "rc": 1}
> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
> available identity versions when contacting
https://[fd00:fd00:fd00:9900::2ef]:13000. > Attempting to parse version from URL.\nTraceback (most recent call last):\n > File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line > 600, in urlopen\n chunked=chunked)\n File > \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, > in _make_request\n self._validate_conn(conn)\n File > \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, > in _validate_conn\n conn.connect()\n File > \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in > connect\n _match_hostname(cert, self.assert_hostname or > server_hostname)\n File > \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in > _match_hostname\n match_hostname(cert, asserted_hostname)\n File > \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % > (hostname, dnsnames[0]))\nssl.CertificateError: hostname > 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring > handling of the above exception, another exception occurred:\n\nTraceback > (most recent call last):\n File > \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in > send\n timeout=timeout\n File > \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, > in urlopen\n _stacktrace=sys.exc_info()[2])\n File > \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in > increment\n raise MaxRetryError(_pool, url, error or > ResponseError(cause))\nurllib3.exceptions.MaxRetryError: > HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max > retries exceeded with url: / (Caused by > SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't > match 'undercloud.com'\",),))\n\nDuring handling of the above exception, > another exception occurred:\n\nTraceback (most recent call last):\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, > in _send_request\n resp = self.session.request(method, url, 
**kwargs)\n > File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, > in request\n resp = self.send(prep, **send_kwargs)\n File > \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in > send\n r = adapter.send(request, **kwargs)\n File > \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in > send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: > HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max > retries exceeded with url: / (Caused by > SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't > match 'undercloud.com'\",),))\n\nDuring handling of the above exception, > another exception occurred:\n\nTraceback (most recent call last):\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", > line 138, in _do_create_plugin\n authenticated=False)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line > 610, in get_discovery\n authenticated=authenticated)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, > in get_discovery\n disc = Discover(session, url, > authenticated=authenticated)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, > in __init__\n authenticated=authenticated)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, > in get_version_data\n resp = session.get(url, headers=headers, > authenticated=authenticated)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, > in get\n return self.request(url, 'GET', **kwargs)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in > request\n resp = send(**kwargs)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, > in _send_request\n raise > exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL > exception connecting to 
https://[fd00:fd00:fd00:9900::2ef]:13000: > HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max > retries exceeded with url: / (Caused by > SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't > match 'undercloud.com'\",),))\n\nDuring handling of the above exception, > another exception occurred:\n\nTraceback (most recent call last):\n File > \"\", line 102, in \n File \"\", line 94, in > _ansiballz_main\n File \"\", line 40, in invoke_module\n File > \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return > _run_module_code(code, init_globals, run_name, mod_spec)\n File > \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n > mod_name, mod_spec, pkg_name, script_name)\n File > \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, > run_globals)\n File > \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", > line 185, in \n File > \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", > line 181, in main\n File > \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", > line 407, in __call__\n File > \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", > line 141, in run\n File > \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line > 517, in search_services\n services = self.list_services()\n File > \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line > 492, in list_services\n if self._is_client_version('identity', 
2):\n > File > \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", > line 460, in _is_client_version\n client = getattr(self, client_name)\n > File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", > line 32, in _identity_client\n 'identity', min_version=2, > max_version='3.latest')\n File > \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", > line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in > get_endpoint\n return self.session.get_endpoint(auth or self.auth, > **kwargs)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, > in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line > 380, in get_endpoint\n allow_version_hack=allow_version_hack, > **kwargs)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line > 271, in get_endpoint_data\n service_catalog = > self.get_access(session).service_catalog\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line > 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", > line 206, in get_auth_ref\n self._plugin = > self._do_create_plugin(session)\n File > \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", > line 161, in _do_create_plugin\n 'auth_url is correct. %s' % > e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find > versioned identity endpoints when attempting to authenticate. Please check > that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: > HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max > retries exceeded with url: / (Caused by > SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't > match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": > "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} > 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:01.271914 | 2.47s > 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:01.273659 | 2.47s > > PLAY RECAP > ********************************************************************* > localhost : ok=0 changed=0 unreachable=0 > failed=0 skipped=2 rescued=0 ignored=0 > overcloud-controller-0 : ok=437 changed=104 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-1 : ok=436 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-2 : ok=431 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 > failed=0 skipped=198 rescued=0 ignored=0 > undercloud : ok=28 changed=7 unreachable=0 > failed=1 skipped=3 rescued=0 ignored=0 > 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary > Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: > 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > > in the deploy.sh: > > openstack overcloud deploy --templates \ > -r /home/stack/templates/roles_data.yaml \ > --networks-file /home/stack/templates/custom_network_data.yaml \ > --vip-file /home/stack/templates/custom_vip_data.yaml \ > --baremetal-deployment > /home/stack/templates/overcloud-baremetal-deploy.yaml \ > --network-config \ > -e 
/home/stack/templates/environment.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml > \ > -e /home/stack/templates/ironic-config.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml > \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e > /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > > Additional lines, as highlighted in yellow, were passed with modifications: > tls-endpoints-public-ip.yaml: > Passed as is in the defaults. > enable-tls.yaml: > > # ******************************************************************* > # This file was created automatically by the sample environment > # generator. Developers should use `tox -e genconfig` to update it. > # Users are recommended to make changes to a copy of the file instead > # of the original, if any customizations are needed. > # ******************************************************************* > # title: Enable SSL on OpenStack Public Endpoints > # description: | > # Use this environment to pass in certificates for SSL deployments. > # For these values to take effect, one of the tls-endpoints-*.yaml > # environments must also be used. 
> parameter_defaults: > # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon > # Type: boolean > HorizonSecureCookies: True > > # Specifies the default CA cert to use if TLS is used for services in > the public network. > # Type: string > PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' > > # The content of the SSL certificate (without Key) in PEM format. > # Type: string > SSLRootCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTIFICATE LINES TRIMMED ** > -----END CERTIFICATE----- > > SSLCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTIFICATE LINES TRIMMED ** > -----END CERTIFICATE----- > # The content of an SSL intermediate CA certificate in PEM format. > # Type: string > SSLIntermediateCertificate: '' > > # The content of the SSL Key in PEM format. > # Type: string > SSLKey: | > -----BEGIN PRIVATE KEY----- > ----*** CERTIFICATE LINES TRIMMED ** > -----END PRIVATE KEY----- > > # ****************************************************** > # Static parameters - these are values that must be > # included in the environment but should not be changed. > # ****************************************************** > # The filepath of the certificate as it will be stored in the controller. > # Type: string > DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem > > # ********************* > # End static parameters > # ********************* > > inject-trust-anchor.yaml > > # ******************************************************************* > # This file was created automatically by the sample environment > # generator. Developers should use `tox -e genconfig` to update it. > # Users are recommended to make changes to a copy of the file instead > # of the original, if any customizations are needed. 
> # ******************************************************************* > # title: Inject SSL Trust Anchor on Overcloud Nodes > # description: | > # When using an SSL certificate signed by a CA that is not in the default > # list of CAs, this environment allows adding a custom CA certificate to > # the overcloud nodes. > parameter_defaults: > # The content of a CA's SSL certificate file in PEM format. This is > evaluated on the client side. > # Mandatory. This parameter must be set by the user. > # Type: string > SSLRootCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTIFICATE LINES TRIMMED ** > -----END CERTIFICATE----- > > resource_registry: > OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml > > > > > The procedure to create such files was followed using: > Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) > > > The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed IP-based > certificate, without DNS*. > > Any idea around this error would be of great help. > > -- > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From development at manuel-bentele.de Fri Jul 8 12:39:49 2022 From: development at manuel-bentele.de (Manuel Bentele) Date: Fri, 8 Jul 2022 14:39:49 +0200 Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack? In-Reply-To: References: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev> Message-ID: Hi developers, After an invitation from Radosław, I join the VDI discussion as well. I'm one of the authors of the OpenStack VDI paper [1] mentioned by Radosław. The paper is now freely accessible to everyone in the pre-published version [2]. Thank you Radosław for sharing. The main repository mentioned in the OpenStack VDI paper is currently hosted on GitHub [3] and contains configurations for general testing and development purposes (for me and my colleagues). 
The repository also provides a little DevStack setup for some simple VDI tests based on the functionality that OpenStack already provides. If we aim for more active development in the future, we can also move the repository to another, more official location. Our needs for a VDI solution are already outlined in our paper [1]. We are currently hosting part of the bwCloud [4] infrastructure and an HPC cluster [5]. Researchers and students can carry out computations there. However, we are receiving more and more requests from these people to offer virtual (powerful) graphical desktops for graphics(-intensive) applications, so that computation results can be visualized or other use cases can be covered (e.g. office work). That's why we've had the idea of a free VDI solution for a long time, where we have now (hopefully) initiated further development with our published paper. I'm looking forward to further exciting ideas and a great cooperation with all interested people from the OpenStack community. [1] https://doi.org/10.1007/978-3-030-99191-3_12 [2] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf [3] https://github.com/bwLehrpool/osvdi [4] https://www.bw-cloud.org/en/ [5] https://www.nemo.uni-freiburg.de Regards, Manuel On 7/7/22 11:26, radoslaw.piliszek at gmail.com (Radosław Piliszek) wrote: > Hi Allison, > > I am also in touch with folks at rz.uni-freiburg.de who are also > interested in this topic. We might be able to gather a panel for > discussion. I think we need to introduce the topic properly with some > presentations and then move on to a discussion if time allows (I > believe it will as the time slot is 1h and the presentations should > not be overly detailed for an introductory session). 
> > Cheers, > Radek > -yoctozepto > > On Wed, 6 Jul 2022 at 23:12, Allison Price wrote: >> I wanted to follow up on this thread as well as I know highlighting some of this work and perhaps even doing a live demo on OpenInfra Live was something that was discussed. >> >> Andy and Radoslaw - would this be something you would be interested in helping to move forward? If there are others that would like to help drive, please let me know. >> >> Cheers, >> Allison >> >>> On Jul 4, 2022, at 3:33 AM, Radosław Piliszek wrote: >>> >>> Just a quick follow up - I was permitted to share a pre-published >>> version of the article I was citing in my email from June 4th. [1] >>> Please enjoy responsibly. :-) >>> >>> [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf >>> >>> Cheers, >>> Radek >>> -yoctozepto >>> >>> On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek >>> wrote: >>>> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote: >>>>> Hi Radosław, >>>> Hi Andy, >>>> >>>> Sorry for the late reply, been busy vacationing and then dealing with COVID-19. >>>> >>>>>> First of all, wow, that looks very interesting and in fact very much >>>>>> what I'm looking for. As I mentioned in the original message, the >>>>>> things this solution lacks are not something blocking for me. >>>>>> Regarding the approach to Guacamole, I know that it's preferable to >>>>>> have guacamole extension (that provides the dynamic inventory) >>>>>> developed rather than meddle with the internal database but I guess it >>>>>> is a good start. >>>>> An even better approach would be something like the Guacozy project >>>>> (https://guacozy.readthedocs.io) >>>> I am not convinced. The project looks dead by now. [1] >>>> It offers a different UI which may appeal to certain users but I think >>>> sticking to vanilla Guacamole should do us right... For the time being >>>> at least. 
;-) >>>> >>>>> They were able to use the Guacamole JavaScript libraries directly to >>>>> embed the HTML5 desktop within a React? app. I think this is a much >>>>> better approach, and I'd love to be able to do something similar in >>>>> the future. Would make the integration that much nicer. >>>> Well, as an example of embedding in the UI - sure. But it does not >>>> invalidate the need to modify Guacamole's database or write an >>>> extension to it so that it has the necessary creds. >>>> >>>>>> Any "quickstart setting up" would be awesome to have at this stage. As >>>>>> this is a Django app, I think I should be able to figure out the bits >>>>>> and bolts to get it up and running in some shape but obviously it will >>>>>> impede wider adoption. >>>>> Yeah I agree. I'm in the process of documenting it, so I'll aim to get >>>>> a quickstart guide together. >>>>> >>>>> I have a private repo with code to set up a development environment >>>>> which uses Heat and Ansible - this might be the quickest way to get >>>>> started. I'm happy to share this with you privately if you like. >>>> I'm interested. Please share it. >>>> >>>>>> On the note of adoption, if I find it usable, I can provide support >>>>>> for it in Kolla [1] and help grow the project's adoption this way. >>>>> Kolla could be useful. We're already using containers for this project >>>>> now, and I have a helm chart for deploying to k8s. >>>>> https://github.com/NeCTAR-RC/bumblebee-helm >>>> Nice! The catch is obviously that some orgs frown upon K8s because >>>> they lack the necessary know-how. >>>> Kolla by design avoids the use of K8s. OpenStack components are not >>>> cloud-native anyway so benefits of using K8s are diminished (yet it >>>> makes sense to use K8s if there is enough experience with it as it >>>> makes certain ops more streamlined and simpler this way). >>>> >>>>> Also, an important part is making sure the images are set up correctly >>>>> with XRDP, etc. 
Our images are built using Packer, and the config for >>>>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images >>>> Ack, thanks for sharing. >>>> >>>>>> Also, since this is OpenStack-centric, maybe you could consider >>>>>> migrating to OpenDev at some point to collaborate with interested >>>>>> parties using a common system? >>>>>> Just food for thought at the moment. >>>>> I think it would be more appropriate to start a new project. I think >>>>> our codebase has too many assumptions about the underlying cloud. >>>>> >>>>> We inherited the code from another project too, so it's got twice the cruft. >>>> I see. Well, that's good to know at least. >>>> >>>>>> Writing to let you know I have also found the following related paper: [1] >>>>>> and reached out to its authors in the hope to enable further >>>>>> collaboration to happen. >>>>>> The paper is not open access so I have only obtained it for myself and >>>>>> am unsure if licensing permits me to share, thus I also asked the >>>>>> authors to share their copy (that they have copyrights to). >>>>>> I have obviously let them know of the existence of this thread. ;-) >>>>>> Let's stay tuned. >>>>>> >>>>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12 >>>>> This looks interesting. A collaboration would be good if there is >>>>> enough interest in the community. >>>> I am looking forward to the collaboration happening. This could really >>>> liven up the OpenStack VDI. 
>>>> [1] https://github.com/paidem/guacozy/ >>>> >>>> -yoctozepto > From arne.wiebalck at cern.ch Fri Jul 8 13:48:03 2022 From: arne.wiebalck at cern.ch (Arne Wiebalck) Date: Fri, 8 Jul 2022 15:48:03 +0200 Subject: [baremetal-sig][ironic] Tue July 12, 2022, 2pm UTC: "Bare metal Kubernetes at G-Research" Message-ID: Dear all, The Bare Metal SIG will meet next week on Tue July 12, 2022, 2pm UTC, featuring a topic-of-the-day presentation by Scott Solkhon & Jamie Poole: "Bare metal Kubernetes at G-Research" Everyone is welcome; all details on how to join can be found on the SIG's etherpad: https://etherpad.opendev.org/p/bare-metal-sig Hope to see you there! Arne -- Arne Wiebalck CERN IT From skaplons at redhat.com Fri Jul 8 14:21:44 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Fri, 08 Jul 2022 16:21:44 +0200 Subject: [neutron] CI meeting on Tuesday 12.07.2022 cancelled Message-ID: <1740893.ZqjkciB8ac@p1> Hi, I will be on PTO next week. As there is nothing really critical to our CI currently, let's cancel next week's meeting. See you at the CI meeting on Tuesday 19th of July. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From rdhasman at redhat.com Fri Jul 8 15:23:06 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 8 Jul 2022 20:53:06 +0530 Subject: [cinder] Spec Freeze Exception Request In-Reply-To: References: <20220701185518.6cid4paqrsnxnq6a@localhost> Message-ID: Hi, Today is the last day for spec freeze exceptions, but there were some unexpected circumstances because of which the review process was delayed. So we will be granting extra days to this spec since it doesn't require many revisions to be in a mergeable state. 
We will provide an additional week since we also have M-2 next week, which has the new volume and target driver merge deadline, so some of the review bandwidth will be shared there. In conclusion, this spec is granted a new deadline to be merged by 15th July. Thanks and regards Rajat Dhasmana On Sat, Jul 2, 2022 at 10:35 AM Rajat Dhasmana wrote: > Thanks Gorka for spelling out all the changes made to the spec since the > initial submission in Yoga; that will make the review experience much better. > The quota issues have indeed been a pain point for OpenStack operators for > a long time and it's really crucial to fix them. > I'm OK with granting this an FFE (+1) > > On Sat, Jul 2, 2022 at 12:31 AM Gorka Eguileor > wrote: > >> Hi, >> >> I would like to request a spec freeze exception for the new Cinder Quota >> System spec [1]. >> >> Analysis of the required spec changes needed to implement the second >> quota driver, as agreed in the PTG/mid-cycle, was non-trivial. >> >> In the latest spec update I just pushed there are considerable changes: >> >> - General improvements to existing sections to increase readability. >> >> - Description of the additional driver and reasons why we decided to >> implement it. >> >> - Spell out through the entire spec the similarities and differences of >> both drivers. >> >> - Change in tracking of the reservations to accommodate the new driver. >> >> - Description of how switching from one driver to the other would work. >> >> - Updated the driver interface to accommodate the particularities of the >> new driver. >> >> - Updated the performance section with a very brief summary of the >> performance tests done with a code prototype. >> >> - Updated the phases of the effort as well as the work items. >> >> Cheers, >> Gorka. >> >> >> [1]: https://review.opendev.org/c/openstack/cinder-specs/+/819693 >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elod.illes at est.tech Fri Jul 8 15:53:07 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 8 Jul 2022 17:53:07 +0200 Subject: [release] Release countdown for week R-12, Jul 11 - 15 Message-ID: Development Focus ----------------- The Zed-2 milestone is next week, on July 14th, 2022! Zed-related specs should now be finalized so that teams can move to implementation ASAP. Some teams observe specific deadlines on the second milestone (mostly spec freezes): please refer to https://releases.openstack.org/zed/schedule.html for details. General Information ------------------- Libraries need to be released at least once per milestone period. Next week, the release team will propose releases for any library that has not been otherwise released since milestone 1. PTLs and release liaisons, please watch for these and give a +1 to acknowledge them. If there is some reason to hold off on a release, let us know that as well. A +1 would be appreciated, but if we do not hear anything at all by the end of the week, we will assume things are OK to proceed. Remember that non-library deliverables that follow the cycle-with-intermediary release model should have an intermediary release before milestone-2. Those who haven't will be proposed to switch to the cycle-with-rc model, which is more suited to deliverables that are released only once per cycle. Next week is also the deadline to freeze the contents of the final release. All new 'Zed' deliverables need to have a deliverable file in https://opendev.org/openstack/releases/src/branch/master/deliverables and need to have done a release by milestone-2. Changes proposing those deliverables for inclusion in Zed have been posted; please update them with an actual release request before the milestone-2 deadline if you plan on including that deliverable in Zed, or -1 if you need one more cycle to be ready. 
Upcoming Deadlines & Dates -------------------------- Zed-2 Milestone: July 14th, 2022 Next PTG: October 17-20, 2022 (Columbus, Ohio) Előd Illés irc: elodilles From james.slagle at gmail.com Fri Jul 8 16:32:38 2022 From: james.slagle at gmail.com (James Slagle) Date: Fri, 8 Jul 2022 12:32:38 -0400 Subject: [TripleO] PTL outage Message-ID: Hello TripleO, I'll be out on PTO for the next 2 weeks. Not that I'm typically needed :), but if anything urgent requires attention, please reach out to the kind folks directly in #tripleo. -- -- James Slagle -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Fri Jul 8 16:50:34 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 8 Jul 2022 16:50:34 +0000 Subject: [all][tc][gerrit] Ownership of *-stable-maint groups In-Reply-To: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> References: <230a314d1af172e328cc89a45f3e32e1ce34b4bb.camel@redhat.com> Message-ID: <20220708165033.vbj35dgo6hjhoh2v@yuggoth.org> On 2022-07-01 16:31:16 +0100 (+0100), Stephen Finucane wrote: [...] > The following projects are owned by 'stable-maint-core': > > * barbican-stable-maint > * ceilometer-stable-maint > * cinder-stable-maint > * designate-stable-maint > * glance-stable-maint > * heat-stable-maint > * horizon-stable-maint > * ironic-stable-maint > * keystone-stable-maint > * manila-stable-maint > * neutron-stable-maint > * nova-stable-maint > * oslo-stable-maint > * python-openstackclient-stable-maint > * sahara-stable-maint > * swift-stable-maint > * trove-stable-maint > * zaqar-stable-maint [...] I've gone through and switched all 18 of these to self-owned just now, as requested in Ghanshyam's reply. Please let me know if you find any others which need similar treatment. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From gmann at ghanshyammann.com Fri Jul 8 19:13:49 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 Jul 2022 14:13:49 -0500 Subject: [tc][qa][centos stream] CentOS-stream-9 testing stability and collaboration with CentOS-stream maintainers Message-ID: <181df3b8755.10aa7593b285941.3522470925080375540@ghanshyammann.com> Hello Everyone, As you know, CentOS-stream-9 is in the Zed cycle testing runtime and we are facing some stability issues in making its testing voting (detach volume and qemu issues are some known ones). Currently, it is a non-voting job, which is not good for the long term as non-voting job failures are mostly ignored. We invited CentOS Stream folks to discuss how to make this testing stable, how to improve the communication between both communities, and how to coordinate debugging. Alan, Brian, and Aleksandra from the CentOS Stream team joined the TC call on 7 July. With the CentOS Stream model of having the latest package versions (libvirt, etc.), it is good to test and capture potential issues in advance, but at the same time there will be more failures than with other distros like Ubuntu. Knowing about failures at an early stage is good, but we have a few challenges in triaging them. The OpenStack or CentOS team alone might not be able to know the root cause of an issue at first glance, i.e. whether it is in an OpenStack component or in CentOS Stream. The CentOS team has less experience debugging devstack or Tempest tests. And if such failures are frequent (the current case), then having voting testing on every patch is another challenge for a smooth OpenStack development cycle. We could not find a single best solution to all these challenges, but we all agree that we need some coordinated debugging. We need some initial failure debugging on the OpenStack side, passing the information to the CentOS team for further debugging. 
That is one possible step forward and we will see how it goes. Basically, doing the following: * If OpenStack members see a failure on CentOS Stream, do some initial debugging to know which OpenStack component is failing and collect some log information. * Report the bug with all that information at https://bugzilla.redhat.com/ * Contact the CentOS Stream team (you can address/ping Alan, Brian or Aleksandra) via ** ML: https://lists.centos.org/mailman/listinfo/centos-devel ** IRC channel on Libera chat #centos-devel and #centos-stream Alan, Brian, Aleksandra, or any TC members: feel free to add things if I missed something. -gmann From gmann at ghanshyammann.com Fri Jul 8 19:22:16 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 08 Jul 2022 14:22:16 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 08 July 2022: Reading: 5 min Message-ID: <181df4344ac.ed01c4bb286104.1709392448799473428@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on 07 July. Most of the meeting discussions are summarized in this email. As it was a video meeting, the full recording is available @https://www.youtube.com/watch?v=zeJG6Mujalo&t=29s and brief logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-07-15.00.log.html * The next TC weekly meeting will be on 14 July Thursday at 15:00 UTC; feel free to add topics to the agenda[1] by 13 July. 2. What we completed this week: ========================= * Nothing specific this week. 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * The Zed tracker etherpad includes the TC working items[2]; two are completed and the other items are in progress. Open Reviews ----------------- * Three open reviews for ongoing activities[3]. 
CentOS-stream-9 testing stability and collaboration with centos-stream maintainers
---------------------------------------------------------------------------------------------------
We discussed this topic in the TC meeting and I summarized the discussion in a separate ML thread[4].

Create the Environmental Sustainability SIG
---------------------------------------------------
Not much update on this beyond what we discussed last time; feel free to add your feedback in the review[5].

Consistent and Secure Default RBAC
-------------------------------------------
We have had a good amount of discussion and review on the goal document updates[6], and I have updated the patch to resolve the review comments.

2021 User Survey TC Question Analysis
-----------------------------------------------
No update on this. The survey summary is up for review[7]. Feel free to check it and provide feedback.

Zed cycle Leaderless projects
----------------------------------
No updates on this. Only the Adjutant project is leaderless/maintainer-less. We will check Adjutant's situation again on the ML and hope Braden will be ready with their company-side permission[8].

Fixing Zuul config error
----------------------------
Requesting projects with Zuul config errors to look into and fix them, which should not take much time[9][10].

Project updates
-------------------
* None

4. How to contact the TC:
====================
If you would like to discuss anything or give feedback to the TC, you can reach out to us in multiple ways:

1. Email: you can send an email with the tag [tc] on the openstack-discuss ML[11].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15:00 UTC[12].
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting
[2] https://review.opendev.org/c/openstack/governance/+/847413
[3] https://review.opendev.org/q/projects:openstack/governance+status:open
[4] https://lists.openstack.org/pipermail/openstack-discuss/2022-July/029468.html
[5] https://review.opendev.org/c/openstack/governance-sigs/+/845336
[6] https://review.opendev.org/c/openstack/governance/+/847418
[7] https://review.opendev.org/c/openstack/governance/+/836888
[8] http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027626.html
[9] https://etherpad.opendev.org/p/zuul-config-error-openstack
[10] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html
[11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss
[12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting

-gmann

From gmann at ghanshyammann.com Fri Jul 8 23:06:07 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Fri, 08 Jul 2022 18:06:07 -0500
Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers
In-Reply-To: <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com>
References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com>
Message-ID: <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com>

 ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
 > Hi Braden,
 >
 > Please let us know the status of your company's permission to maintain the project.
 > As we are in Zed cycle development and there is no one to maintain/lead this project, we
 > need to start thinking about the next steps mentioned in the leaderless project etherpad
 >

Hi Braden,

We have not heard back from you on whether you can help maintain Adjutant.
As it has had no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project list - https://review.opendev.org/c/openstack/governance/+/849153/1

-gmann

 > - https://etherpad.opendev.org/p/zed-leaderless
 >
 > -gmann
 >
 > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ----
 > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June.
 > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:
 > >
 > > Hi,
 > >
 > > After the last PTL elections [1] the Adjutant project doesn't have any PTL. It also didn't have a PTL in the Yoga cycle.
 > > So this is a call for maintainers for Adjutant. If you are using it or interested in it, and if you are willing to help maintain this project, please contact the TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can discuss the possibilities of making someone PTL of the project, or of moving this project to the Distributed Project Leadership [2] model.
 > >
 > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
 > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
 > >
 > > --
 > > Slawek Kaplonski
 > > Principal Software Engineer
 > > Red Hat
 > >

From swogatpradhan22 at gmail.com Sat Jul 9 05:54:41 2022
From: swogatpradhan22 at gmail.com (Swogat Pradhan)
Date: Sat, 9 Jul 2022 11:24:41 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

I had faced a similar kind of issue. For an IP-based setup you need to specify the domain name as the IP that you are going to use; this error is showing up because the SSL certificate is IP-based, but the FQDN seems to be undercloud.com or overcloud.example.com.
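A quick way to confirm which names a certificate will actually be matched against is to generate a test certificate that carries the IP in subjectAltName and inspect it. This is only a sketch, assuming OpenSSL 1.1.1+ (for `-addext`); the IPv6 address is the endpoint from the error quoted below, and the subject fields are placeholders:

```shell
# work in a throwaway directory
cd "$(mktemp -d)"

# Self-signed certificate whose subjectAltName carries the endpoint IP
# (placeholder subject fields; no CN is needed when the SAN holds the IP).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout overcloud.key -out overcloud.crt \
  -subj "/C=AU/ST=Queensland/O=your-org/OU=admin" \
  -addext "subjectAltName=IP:fd00:fd00:fd00:9900::2ef"

# Confirm the SAN actually made it into the certificate.
openssl x509 -in overcloud.crt -noout -text | grep "IP Address"
```

If the last command prints the IPv6 address, clients connecting to https://[fd00:fd00:fd00:9900::2ef]:13000 can verify the hostname against this certificate.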
I think for the undercloud you can change undercloud.conf. And will it work if we set the CloudDomain parameter to the IP address for the overcloud? Because it seems he has not specified the CloudDomain parameter, and overcloud.example.com is the default domain for the overcloud.

On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, wrote:

> What is the domain name you have specified in the undercloud.conf file?
> And what is the fqdn name used for the generation of the SSL cert?
>
> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour,
> wrote:
>
>> Hi Team,
>> We were trying to install overcloud with SSL enabled for which the UC is
>> installed, but OC install is getting failed at step 4:
>>
>> ERROR
>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
>> exact error", "rc": 1}
>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
>> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >> 600, in urlopen\n chunked=chunked)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >> in _make_request\n self._validate_conn(conn)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >> in _validate_conn\n conn.connect()\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >> connect\n _match_hostname(cert, self.assert_hostname or >> server_hostname)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >> send\n timeout=timeout\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >> increment\n raise MaxRetryError(_pool, url, error or >> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >> in _send_request\n resp = self.session.request(method, url, **kwargs)\n 
>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >> in request\n resp = self.send(prep, **send_kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >> send\n r = adapter.send(request, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 138, in _do_create_plugin\n authenticated=False)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 610, in get_discovery\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >> in get_discovery\n disc = Discover(session, url, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >> in __init__\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >> in get_version_data\n resp = session.get(url, headers=headers, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >> in get\n return self.request(url, 'GET', **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >> request\n resp = send(**kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >> in _send_request\n raise >> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >> exception connecting to 
https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"\", line 102, in \n File \"\", line 94, in >> _ansiballz_main\n File \"\", line 40, in invoke_module\n File >> \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return >> _run_module_code(code, init_globals, run_name, mod_spec)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >> mod_name, mod_spec, pkg_name, script_name)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >> run_globals)\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 185, in \n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 181, in main\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >> line 407, in __call__\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 141, in run\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 517, in search_services\n services = self.list_services()\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 492, in list_services\n if 
self._is_client_version('identity', 2):\n >> File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 460, in _is_client_version\n client = getattr(self, client_name)\n >> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >> line 32, in _identity_client\n 'identity', min_version=2, >> max_version='3.latest')\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 271, in get_endpoint_data\n service_catalog = >> self.get_access(session).service_catalog\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 206, in get_auth_ref\n self._plugin = >> self._do_create_plugin(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >> versioned identity endpoints when attempting to authenticate. Please check >> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.271914 | 2.47s >> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.273659 | 2.47s >> >> PLAY RECAP >> ********************************************************************* >> localhost : ok=0 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=28 changed=7 unreachable=0 >> failed=1 skipped=3 rescued=0 ignored=0 >> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> >> in the deploy.sh: >> >> openstack overcloud deploy --templates \ >> -r /home/stack/templates/roles_data.yaml \ >> --networks-file /home/stack/templates/custom_network_data.yaml \ >> --vip-file /home/stack/templates/custom_vip_data.yaml \ >> --baremetal-deployment >> 
/home/stack/templates/overcloud-baremetal-deploy.yaml \
>> --network-config \
>> -e /home/stack/templates/environment.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
>> -e /home/stack/templates/ironic-config.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>> -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>> -e /home/stack/containers-prepare-parameter.yaml
>>
>> Additional lines, as highlighted in yellow, were passed with modifications:
>> tls-endpoints-public-ip.yaml:
>> Passed as is in the defaults.
>> enable-tls.yaml:
>>
>> # *******************************************************************
>> # This file was created automatically by the sample environment
>> # generator. Developers should use `tox -e genconfig` to update it.
>> # Users are recommended to make changes to a copy of the file instead
>> # of the original, if any customizations are needed.
>> # *******************************************************************
>> # title: Enable SSL on OpenStack Public Endpoints
>> # description: |
>> #   Use this environment to pass in certificates for SSL deployments.
>> #   For these values to take effect, one of the tls-endpoints-*.yaml
>> #   environments must also be used.
>> parameter_defaults:
>>   # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>   # Type: boolean
>>   HorizonSecureCookies: True
>>
>>   # Specifies the default CA cert to use if TLS is used for services in the public network.
>>   # Type: string
>>   PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>
>>   # The content of the SSL certificate (without Key) in PEM format.
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED ***
>>     -----END CERTIFICATE-----
>>
>>   SSLCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED ***
>>     -----END CERTIFICATE-----
>>
>>   # The content of an SSL intermediate CA certificate in PEM format.
>>   # Type: string
>>   SSLIntermediateCertificate: ''
>>
>>   # The content of the SSL Key in PEM format.
>>   # Type: string
>>   SSLKey: |
>>     -----BEGIN PRIVATE KEY-----
>>     ----*** CERTIFICATE LINES TRIMMED ***
>>     -----END PRIVATE KEY-----
>>
>>   # ******************************************************
>>   # Static parameters - these are values that must be
>>   # included in the environment but should not be changed.
>>   # ******************************************************
>>   # The filepath of the certificate as it will be stored in the controller.
>>   # Type: string
>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>
>>   # *********************
>>   # End static parameters
>>   # *********************
>>
>> inject-trust-anchor.yaml
>>
>> # *******************************************************************
>> # This file was created automatically by the sample environment
>> # generator. Developers should use `tox -e genconfig` to update it.
>> # Users are recommended to make changes to a copy of the file instead
>> # of the original, if any customizations are needed.
>> # *******************************************************************
>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>> # description: |
>> #   When using an SSL certificate signed by a CA that is not in the default
>> #   list of CAs, this environment allows adding a custom CA certificate to
>> #   the overcloud nodes.
>> parameter_defaults:
>>   # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
>>   # Mandatory. This parameter must be set by the user.
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED ***
>>     -----END CERTIFICATE-----
>>
>> resource_registry:
>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>
>> The procedure to create such files was followed using:
>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>
>> The idea is to deploy the overcloud with SSL enabled, i.e. *a self-signed, IP-based certificate, without DNS.*
>>
>> Any idea around this error would be of great help.
>>
>> --
>> skype: lokendrarathour

From amotoki at gmail.com Sat Jul 9 09:03:13 2022
From: amotoki at gmail.com (Akihiro Motoki)
Date: Sat, 9 Jul 2022 18:03:13 +0900
Subject: [heat][horizon] heat-dashboard-core maintenance
In-Reply-To: 
References: 
Message-ID: 

Hi,

> In addition, I would like to drop the following folks from heat-dashboard-core.
> They were active in heat-dashboard but they had no activities for over
> three years.
> If there is no objection, I will drop them next week.
> - Kazunori Shinohara
> - Keiichi Hikita
> - Xinni Ge

There is no objection to the above proposal. I have updated the heat-dashboard-core members in Gerrit.

--
Akihiro Motoki (amotoki)

On Wed, Jun 22, 2022 at 6:03 PM Akihiro Motoki wrote:
>
> Hi,
>
> I added heat-core gerrit group to heat-dashboard-core [1].
> Heat team and horizon team agreed to maintain heat-dashboard together a while ago,
> and horizon-core was added to heat-dashboard-core. At that time heat-dashboard had
> active contributors, so heat-dashboard-core did not include the heat-core team.
> The active contributors in heat-dashboard-core have gone, and Rico is the only person
> from the heat team, so it is time to add heat-core to heat-dashboard-core.
>
> In addition, I would like to drop the following folks from heat-dashboard-core.
> They were active in heat-dashboard but they have had no activity for over
> three years.
> If there is no objection, I will drop them next week.
> - Kazunori Shinohara
> - Keiichi Hikita
> - Xinni Ge
>
> [1] https://review.opendev.org/admin/groups/8803fcad46b4bce76ed436861474878f36e0a8e4,members
>
> Thanks,
> Akihiro Motoki (amotoki)

From bshephar at redhat.com Fri Jul 8 21:21:11 2022
From: bshephar at redhat.com (Brendan Shephard)
Date: Sat, 9 Jul 2022 07:21:11 +1000
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Hey,

It looks like you have set the DNS name on the SSL certificate to overcloud.example.com instead of the IP address, so the SSL cert validation is failing:

Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'overcloud.example.com'\",),))

Note point number 1 here:
https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration

It's actually worded poorly. I don't believe IPs can be set for the common name, and we need to use subjectAltName instead.
See below. So, when you create this file:

[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=AU
ST=Queensland
L=Brisbane
O=your-org
OU=admin
emailAddress=me at example.com
CN=openstack.example.com

Remove the CN= part from that file:

[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn

[dn]
C=AU
ST=Queensland
L=Brisbane
O=your-org
OU=admin
emailAddress=me at example.com

Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:

authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names

[alt_names]
IP.1=fd00:fd00:fd00:9900::2ef

On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan wrote:

> What is the domain name you have specified in the undercloud.conf file?
> And what is the fqdn name used for the generation of the SSL cert?
>
> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour,
> wrote:
>
>> Hi Team,
>> We were trying to install overcloud with SSL enabled for which the UC is
>> installed, but OC install is getting failed at step 4:
>>
>> ERROR
>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries
>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname
>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n",
>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the
>> exact error", "rc": 1}
>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac |
>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud |
>> item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >> 600, in urlopen\n chunked=chunked)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >> in _make_request\n self._validate_conn(conn)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >> in _validate_conn\n conn.connect()\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >> connect\n _match_hostname(cert, self.assert_hostname or >> server_hostname)\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >> handling of the above exception, another exception occurred:\n\nTraceback >> (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >> send\n timeout=timeout\n File >> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >> increment\n raise MaxRetryError(_pool, url, error or >> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >> in _send_request\n resp = self.session.request(method, url, **kwargs)\n 
>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >> in request\n resp = self.send(prep, **send_kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >> send\n r = adapter.send(request, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 138, in _do_create_plugin\n authenticated=False)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 610, in get_discovery\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >> in get_discovery\n disc = Discover(session, url, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >> in __init__\n authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >> in get_version_data\n resp = session.get(url, headers=headers, >> authenticated=authenticated)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >> in get\n return self.request(url, 'GET', **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >> request\n resp = send(**kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >> in _send_request\n raise >> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >> exception connecting to 
https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'undercloud.com'\",),))\n\nDuring handling of the above exception, >> another exception occurred:\n\nTraceback (most recent call last):\n File >> \"\", line 102, in \n File \"\", line 94, in >> _ansiballz_main\n File \"\", line 40, in invoke_module\n File >> \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return >> _run_module_code(code, init_globals, run_name, mod_spec)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >> mod_name, mod_spec, pkg_name, script_name)\n File >> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >> run_globals)\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 185, in \n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 181, in main\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >> line 407, in __call__\n File >> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >> line 141, in run\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 517, in search_services\n services = self.list_services()\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >> 492, in list_services\n if 
self._is_client_version('identity', 2):\n >> File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 460, in _is_client_version\n client = getattr(self, client_name)\n >> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >> line 32, in _identity_client\n 'identity', min_version=2, >> max_version='3.latest')\n File >> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >> **kwargs)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 271, in get_endpoint_data\n service_catalog = >> self.get_access(session).service_catalog\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 206, in get_auth_ref\n self._plugin = >> self._do_create_plugin(session)\n File >> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >> versioned identity endpoints when attempting to authenticate. Please check >> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >> retries exceeded with url: / (Caused by >> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.271914 | 2.47s >> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:01.273659 | 2.47s >> >> PLAY RECAP >> ********************************************************************* >> localhost : ok=0 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=28 changed=7 unreachable=0 >> failed=1 skipped=3 rescued=0 ignored=0 >> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >> >> >> in the deploy.sh: >> >> openstack overcloud deploy --templates \ >> -r /home/stack/templates/roles_data.yaml \ >> --networks-file /home/stack/templates/custom_network_data.yaml \ >> --vip-file /home/stack/templates/custom_vip_data.yaml \ >> --baremetal-deployment >> 
/home/stack/templates/overcloud-baremetal-deploy.yaml \ >> --network-config \ >> -e /home/stack/templates/environment.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >> \ >> -e /home/stack/templates/ironic-config.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >> \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >> -e >> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >> -e /home/stack/containers-prepare-parameter.yaml >> >> Addition lines as highlighted in yellow were passed with modifications: >> tls-endpoints-public-ip.yaml: >> Passed as is in the defaults. >> enable-tls.yaml: >> >> # ******************************************************************* >> # This file was created automatically by the sample environment >> # generator. Developers should use `tox -e genconfig` to update it. >> # Users are recommended to make changes to a copy of the file instead >> # of the original, if any customizations are needed. >> # ******************************************************************* >> # title: Enable SSL on OpenStack Public Endpoints >> # description: | >> # Use this environment to pass in certificates for SSL deployments. >> # For these values to take effect, one of the tls-endpoints-*.yaml >> # environments must also be used. 
>> parameter_defaults: >> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >> # Type: boolean >> HorizonSecureCookies: True >> >> # Specifies the default CA cert to use if TLS is used for services in >> the public network. >> # Type: string >> PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >> >> # The content of the SSL certificate (without Key) in PEM format. >> # Type: string >> SSLRootCertificate: | >> -----BEGIN CERTIFICATE----- >> ----*** CERTICATELINES TRIMMED ** >> -----END CERTIFICATE----- >> >> SSLCertificate: | >> -----BEGIN CERTIFICATE----- >> ----*** CERTICATELINES TRIMMED ** >> -----END CERTIFICATE----- >> # The content of an SSL intermediate CA certificate in PEM format. >> # Type: string >> SSLIntermediateCertificate: '' >> >> # The content of the SSL Key in PEM format. >> # Type: string >> SSLKey: | >> -----BEGIN PRIVATE KEY----- >> ----*** CERTICATELINES TRIMMED ** >> -----END PRIVATE KEY----- >> >> # ****************************************************** >> # Static parameters - these are values that must be >> # included in the environment but should not be changed. >> # ****************************************************** >> # The filepath of the certificate as it will be stored in the >> controller. >> # Type: string >> DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem >> >> # ********************* >> # End static parameters >> # ********************* >> >> inject-trust-anchor.yaml >> >> # ******************************************************************* >> # This file was created automatically by the sample environment >> # generator. Developers should use `tox -e genconfig` to update it. >> # Users are recommended to make changes to a copy of the file instead >> # of the original, if any customizations are needed. 
>> # *******************************************************************
>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>> # description: |
>> #   When using an SSL certificate signed by a CA that is not in the default
>> #   list of CAs, this environment allows adding a custom CA certificate to
>> #   the overcloud nodes.
>> parameter_defaults:
>>   # The content of a CA's SSL certificate file in PEM format. This is
>>   # evaluated on the client side.
>>   # Mandatory. This parameter must be set by the user.
>>   # Type: string
>>   SSLRootCertificate: |
>>     -----BEGIN CERTIFICATE-----
>>     ----*** CERTIFICATE LINES TRIMMED ***
>>     -----END CERTIFICATE-----
>>
>> resource_registry:
>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>
>> The procedure to create such files was followed using:
>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>
>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed,
>> IP-based certificate, without DNS*.
>>
>> Any idea around this error would be of great help.
>>
>> --
>> skype: lokendrarathour
>>
>> -------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From lokendrarathour at gmail.com  Sat Jul 9 04:29:41 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Sat, 9 Jul 2022 09:59:41 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Thanks Brendan for your input.
We do have this IP, as stated, getting allocated.
Maybe we can pass a domain name to make this more predictable.
But in that case we would also need to proceed the same way as you suggest?
Will try your and Swogat's suggestions.

Best Regards,
Lokendra

On Sat, 9 Jul 2022, 02:51 Brendan Shephard, wrote:

> Hey,
>
> It looks like you have set the dns name on the SSL certificate to
> overcloud.example.com instead of the IP address. So the SSL cert
> validation is failing.
>
> Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef'
> doesn't match 'overcloud.example.com'\",),))
>
> Note point number 1 here:
> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration
>
> It's actually worded poorly. I don't believe IP's can be set for the
> common name, and we need to use subjectAltName instead. See below:
>
> So, when you create this file:
>
> [req]
> default_bits = 2048
> prompt = no
> default_md = sha256
> distinguished_name = dn
> [dn]
> C=AU
> ST=Queensland
> L=Brisbane
> O=your-org
> OU=admin
> emailAddress=me at example.com
> CN=openstack.example.com
>
> Remove the CN= part from that file:
>
> [req]
> default_bits = 2048
> prompt = no
> default_md = sha256
> distinguished_name = dn
> [dn]
> C=AU
> ST=Queensland
> L=Brisbane
> O=your-org
> OU=admin
> emailAddress=me at example.com
>
> Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:
>
> authorityKeyIdentifier=keyid,issuer
> basicConstraints=CA:FALSE
> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
> subjectAltName = @alt_names
> [alt_names]
> IP.1=fd00:fd00:fd00:9900::2ef
>
> On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan
> wrote:
>
>> What is the domain name you have specified in the undercloud.conf file?
>> And what is the fqdn name used for the generation of the SSL cert?
>> >> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >> wrote: >> >>> Hi Team, >>> We were trying to install overcloud with SSL enabled for which the UC is >>> installed, but OC install is getting failed at step 4: >>> >>> ERROR >>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>> exact error", "rc": 1} >>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>> 600, in urlopen\n chunked=chunked)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>> in _make_request\n self._validate_conn(conn)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>> in _validate_conn\n conn.connect()\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>> connect\n _match_hostname(cert, self.assert_hostname or >>> server_hostname)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>> send\n timeout=timeout\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>> increment\n raise MaxRetryError(_pool, url, error or >>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>> in _send_request\n resp = 
self.session.request(method, url, **kwargs)\n >>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>> in request\n resp = self.send(prep, **send_kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>> send\n r = adapter.send(request, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 138, in _do_create_plugin\n authenticated=False)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 610, in get_discovery\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>> in get_discovery\n disc = Discover(session, url, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>> in __init__\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>> in get_version_data\n resp = session.get(url, headers=headers, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>> in get\n return self.request(url, 'GET', **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>> request\n resp = send(**kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>> in _send_request\n raise >>> 
exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File \"\", line 102, in \n File \"\", line >>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>> mod_name, mod_spec, pkg_name, script_name)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>> run_globals)\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 185, in \n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 181, in main\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>> line 407, in __call__\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 141, in run\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 517, in search_services\n services = 
self.list_services()\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>> File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>> line 32, in _identity_client\n 'identity', min_version=2, >>> max_version='3.latest')\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 271, in get_endpoint_data\n service_catalog = >>> self.get_access(session).service_catalog\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 206, in get_auth_ref\n self._plugin = >>> self._do_create_plugin(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>> versioned identity endpoints when attempting to authenticate. Please check >>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.271914 | 2.47s >>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:01.273659 | 2.47s >>> >>> PLAY RECAP >>> ********************************************************************* >>> localhost : ok=0 changed=0 unreachable=0 >>> failed=0 skipped=2 rescued=0 ignored=0 >>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>> failed=0 skipped=198 rescued=0 ignored=0 >>> undercloud : ok=28 changed=7 unreachable=0 >>> failed=1 skipped=3 rescued=0 ignored=0 >>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>> >>> >>> in the deploy.sh: >>> >>> openstack overcloud deploy --templates \ >>> -r /home/stack/templates/roles_data.yaml \ >>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>> --baremetal-deployment >>> 
/home/stack/templates/overcloud-baremetal-deploy.yaml \ >>> --network-config \ >>> -e /home/stack/templates/environment.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>> \ >>> -e /home/stack/templates/ironic-config.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>> \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>> -e >>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>> -e /home/stack/containers-prepare-parameter.yaml >>> >>> Addition lines as highlighted in yellow were passed with modifications: >>> tls-endpoints-public-ip.yaml: >>> Passed as is in the defaults. >>> enable-tls.yaml: >>> >>> # ******************************************************************* >>> # This file was created automatically by the sample environment >>> # generator. Developers should use `tox -e genconfig` to update it. >>> # Users are recommended to make changes to a copy of the file instead >>> # of the original, if any customizations are needed. >>> # ******************************************************************* >>> # title: Enable SSL on OpenStack Public Endpoints >>> # description: | >>> # Use this environment to pass in certificates for SSL deployments. 
>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>> # environments must also be used. >>> parameter_defaults: >>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>> # Type: boolean >>> HorizonSecureCookies: True >>> >>> # Specifies the default CA cert to use if TLS is used for services in >>> the public network. >>> # Type: string >>> PublicTLSCAFile: >>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>> >>> # The content of the SSL certificate (without Key) in PEM format. >>> # Type: string >>> SSLRootCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> >>> SSLCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> # The content of an SSL intermediate CA certificate in PEM format. >>> # Type: string >>> SSLIntermediateCertificate: '' >>> >>> # The content of the SSL Key in PEM format. >>> # Type: string >>> SSLKey: | >>> -----BEGIN PRIVATE KEY----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END PRIVATE KEY----- >>> >>> # ****************************************************** >>> # Static parameters - these are values that must be >>> # included in the environment but should not be changed. >>> # ****************************************************** >>> # The filepath of the certificate as it will be stored in the >>> controller. >>> # Type: string >>> DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem >>> >>> # ********************* >>> # End static parameters >>> # ********************* >>> >>> inject-trust-anchor.yaml >>> >>> # ******************************************************************* >>> # This file was created automatically by the sample environment >>> # generator. Developers should use `tox -e genconfig` to update it. >>> # Users are recommended to make changes to a copy of the file instead >>> # of the original, if any customizations are needed. 
>>> # ******************************************************************* >>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>> # description: | >>> # When using an SSL certificate signed by a CA that is not in the >>> default >>> # list of CAs, this environment allows adding a custom CA certificate >>> to >>> # the overcloud nodes. >>> parameter_defaults: >>> # The content of a CA's SSL certificate file in PEM format. This is >>> evaluated on the client side. >>> # Mandatory. This parameter must be set by the user. >>> # Type: string >>> SSLRootCertificate: | >>> -----BEGIN CERTIFICATE----- >>> ----*** CERTICATELINES TRIMMED ** >>> -----END CERTIFICATE----- >>> >>> resource_registry: >>> OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml >>> >>> >>> >>> >>> The procedure to create such files was followed using: >>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>> >>> >>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed IP-based >>> certificate, without DNS. * >>> >>> Any idea around this error would be of great help. >>> >>> -- >>> skype: lokendrarathour >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Sat Jul 9 05:46:00 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Sat, 9 Jul 2022 15:46:00 +1000 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hey, I personally use DNS names. I updated that documentation, so that is essentially exactly what I'm using in my environment. I just pasted in exactly what I have in my files and changed the domain names to example.com. So what we have in that documentation should work with DNS names. I also made a video about this: https://www.youtube.com/watch?v=FmO6n1fUiYU I believe the only difference when using IP's instead of domain names is that you can't use the common name (CN) field. 
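The advice in this thread (drop `CN=` from the request config and carry the public VIP as an IP-type subjectAltName in `v3.ext`) can be exercised end to end with openssl. The sketch below is illustrative, not the exact commands used by the posters: the file names (`server.cnf`, `v3.ext`, `ca.crt.pem`, `server.crt.pem`), the throwaway CA, and the `me@example.com` address are assumptions; only the CN-less `[dn]` section and the `IP.1` SAN entry come from the messages above.

```shell
# CN-less CSR config, per the advice in this thread (values are the
# documentation examples; me@example.com stands in for the list-mangled
# address).
cat > server.cnf <<'EOF'
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
C=AU
ST=Queensland
L=Brisbane
O=your-org
OU=admin
emailAddress=me@example.com
EOF

# Extension file carrying the public VIP as an IP-type subjectAltName.
cat > v3.ext <<'EOF'
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1=fd00:fd00:fd00:9900::2ef
EOF

# Throwaway self-signed CA (stands in for the trust anchor injected via
# inject-trust-anchor.yaml).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -keyout ca.key.pem -out ca.crt.pem -subj "/CN=example-tripleo-ca"

# Server key + CSR from the CN-less config, then sign with the CA,
# applying the subjectAltName extensions from v3.ext.
openssl genrsa -out server.key.pem 2048
openssl req -new -key server.key.pem -config server.cnf -out server.csr.pem
openssl x509 -req -in server.csr.pem -CA ca.crt.pem -CAkey ca.key.pem \
    -CAcreateserial -days 365 -sha256 -extfile v3.ext -out server.crt.pem

# Inspect the result: the SAN section should list the VIP as an
# "IP Address" entry, which is what hostname verification matches against
# when the client connects by IP.
openssl x509 -in server.crt.pem -noout -text | grep -A1 "Subject Alternative Name"
```

Clients such as keystoneauth verify the certificate against the literal address they dial, so with no DNS in play the VIP must appear as an IP SAN; a CN (or DNS SAN) of `overcloud.example.com` can never match `fd00:fd00:fd00:9900::2ef`.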
Brendan Shephard
Software Engineer
Red Hat APAC
193 N Quay
Brisbane City QLD 4000
@RedHat

On Sat, Jul 9, 2022 at 2:30 PM Lokendra Rathour wrote:

> Thanks Brendan for your input.
> We do have this IP, as stated, getting allocated.
> Maybe we can pass a domain name to make this more predictable.
> But in that case we would also need to proceed the same way as you suggest?
> Will try your and Swogat's suggestions.
>
> Best Regards,
> Lokendra
>
> On Sat, 9 Jul 2022, 02:51 Brendan Shephard, wrote:
>
>> Hey,
>>
>> It looks like you have set the dns name on the SSL certificate to
>> overcloud.example.com instead of the IP address. So the SSL cert
>> validation is failing.
>>
>> Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef'
>> doesn't match 'overcloud.example.com'\",),))
>>
>> Note point number 1 here:
>> https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/ssl.html#certificate-and-public-vip-configuration
>>
>> It's actually worded poorly. I don't believe IP's can be set for the
>> common name, and we need to use subjectAltName instead. See below:
>>
>> So, when you create this file:
>>
>> [req]
>> default_bits = 2048
>> prompt = no
>> default_md = sha256
>> distinguished_name = dn
>> [dn]
>> C=AU
>> ST=Queensland
>> L=Brisbane
>> O=your-org
>> OU=admin
>> emailAddress=me at example.com
>> CN=openstack.example.com
>>
>> Remove the CN= part from that file:
>>
>> [req]
>> default_bits = 2048
>> prompt = no
>> default_md = sha256
>> distinguished_name = dn
>> [dn]
>> C=AU
>> ST=Queensland
>> L=Brisbane
>> O=your-org
>> OU=admin
>> emailAddress=me at example.com
>>
>> Then in the v3.ext file set IP.1=fd00:fd00:fd00:9900::2ef like so:
>>
>> authorityKeyIdentifier=keyid,issuer
>> basicConstraints=CA:FALSE
>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
>> subjectAltName = @alt_names
>> [alt_names]
>> IP.1=fd00:fd00:fd00:9900::2ef
>>
>> On Fri, 8 Jul 2022 at 10:31 pm, Swogat Pradhan
>> wrote:
>>
>>> What is the domain name you have specified in the undercloud.conf file?
>>> And what is the fqdn name used for the generation of the SSL cert? >>> >>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >>> wrote: >>> >>>> Hi Team, >>>> We were trying to install overcloud with SSL enabled for which the UC >>>> is installed, but OC install is getting failed at step 4: >>>> >>>> ERROR >>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>>> exact error", "rc": 1} >>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>> 600, in urlopen\n chunked=chunked)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>> in _make_request\n self._validate_conn(conn)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>> in _validate_conn\n conn.connect()\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>> connect\n _match_hostname(cert, self.assert_hostname or >>>> server_hostname)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>> send\n timeout=timeout\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>> increment\n raise MaxRetryError(_pool, url, error or >>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>> in 
_send_request\n resp = self.session.request(method, url, **kwargs)\n >>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>> send\n r = adapter.send(request, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>> in get_discovery\n disc = Discover(session, url, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>> in __init__\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>> in get_version_data\n resp = session.get(url, headers=headers, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>> request\n resp = send(**kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>> in 
_send_request\n raise >>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File \"\", line 102, in \n File \"\", line >>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>> run_globals)\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 185, in \n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 181, in main\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>> line 407, in __call__\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 141, in run\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 517, in 
search_services\n services = self.list_services()\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>> File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>> line 32, in _identity_client\n 'identity', min_version=2, >>>> max_version='3.latest')\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 271, in get_endpoint_data\n service_catalog = >>>> self.get_access(session).service_catalog\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 206, in get_auth_ref\n self._plugin = >>>> self._do_create_plugin(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>> versioned identity endpoints when attempting to authenticate. 
Please check >>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.271914 | 2.47s >>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.273659 | 2.47s >>>> >>>> PLAY RECAP >>>> ********************************************************************* >>>> localhost : ok=0 changed=0 unreachable=0 >>>> failed=0 skipped=2 rescued=0 ignored=0 >>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>> failed=0 skipped=198 rescued=0 ignored=0 >>>> undercloud : ok=28 changed=7 unreachable=0 >>>> failed=1 skipped=3 rescued=0 ignored=0 >>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> >>>> >>>> in the deploy.sh: >>>> >>>> openstack overcloud deploy --templates \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>> --vip-file 
/home/stack/templates/custom_vip_data.yaml \ >>>> --baremetal-deployment >>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>> --network-config \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> >>>> The following additional SSL-related environment files were passed with modifications: >>>> tls-endpoints-public-ip.yaml: >>>> Passed as-is with the defaults. >>>> enable-tls.yaml: >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. 
>>>> # ******************************************************************* >>>> # title: Enable SSL on OpenStack Public Endpoints >>>> # description: | >>>> # Use this environment to pass in certificates for SSL deployments. >>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>> # environments must also be used. >>>> parameter_defaults: >>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>> # Type: boolean >>>> HorizonSecureCookies: True >>>> >>>> # Specifies the default CA cert to use if TLS is used for services in >>>> the public network. >>>> # Type: string >>>> PublicTLSCAFile: >>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>> >>>> # The content of the SSL certificate (without Key) in PEM format. >>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> >>>> SSLCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> # The content of an SSL intermediate CA certificate in PEM format. >>>> # Type: string >>>> SSLIntermediateCertificate: '' >>>> >>>> # The content of the SSL Key in PEM format. >>>> # Type: string >>>> SSLKey: | >>>> -----BEGIN PRIVATE KEY----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END PRIVATE KEY----- >>>> >>>> # ****************************************************** >>>> # Static parameters - these are values that must be >>>> # included in the environment but should not be changed. >>>> # ****************************************************** >>>> # The filepath of the certificate as it will be stored in the >>>> controller. 
>>>> # Type: string >>>> DeployedSSLCertificatePath: >>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>> >>>> # ********************* >>>> # End static parameters >>>> # ********************* >>>> >>>> inject-trust-anchor.yaml >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. >>>> # ******************************************************************* >>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>> # description: | >>>> # When using an SSL certificate signed by a CA that is not in the >>>> default >>>> # list of CAs, this environment allows adding a custom CA certificate >>>> to >>>> # the overcloud nodes. >>>> parameter_defaults: >>>> # The content of a CA's SSL certificate file in PEM format. This is >>>> evaluated on the client side. >>>> # Mandatory. This parameter must be set by the user. >>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> *** CERTIFICATE LINES TRIMMED *** >>>> -----END CERTIFICATE----- >>>> >>>> resource_registry: >>>> OS::TripleO::NodeTLSCAData: >>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>> >>>> >>>> >>>> >>>> The procedure to create these files followed the "Deploying with SSL" page of the TripleO 3.0.0 documentation (openstack.org). >>>> >>>> >>>> The idea is to deploy the overcloud with SSL enabled, i.e. *a self-signed, IP-based >>>> certificate, without DNS.* >>>> >>>> Any idea around this error would be of great help. >>>> >>>> -- >>>> skype: lokendrarathour >>>> >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... 
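The root cause in the traceback is that hostname verification compares the address the client dialed ('fd00:fd00:fd00:9900::2ef') against the names carried by the certificate, which was issued for 'undercloud.com'. For the intended self-signed, IP-based setup, the certificate has to carry the public VIP as an IP-type subjectAltName entry. A minimal sketch with OpenSSL 1.1.1+ (file names are placeholders; the address is the VIP taken from the error output):

```shell
# Issue a self-signed certificate whose SAN is the IPv6 public VIP, so
# clients connecting to https://[fd00:fd00:fd00:9900::2ef]:13000 can
# verify the hostname against the raw IP address.
openssl req -x509 -newkey rsa:4096 -nodes -days 365 \
  -keyout overcloud.key.pem -out overcloud.cert.pem \
  -subj "/CN=fd00:fd00:fd00:9900::2ef" \
  -addext "subjectAltName = IP:fd00:fd00:fd00:9900::2ef"

# Confirm the SAN is present before pasting the PEM blocks into enable-tls.yaml:
openssl x509 -in overcloud.cert.pem -noout -ext subjectAltName
```

The key would then go into SSLKey and the certificate into SSLCertificate / SSLRootCertificate in the environment files quoted above; note that an IP-based certificate has to be reissued whenever the VIP changes.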
URL: From fungi at yuggoth.org Sat Jul 9 13:26:36 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 9 Jul 2022 13:26:36 +0000 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II Message-ID: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> It became apparent in a Python community discussion[0] yesterday that lockfile has been designated as a "critical project" by PyPI, even though it was effectively abandoned in 2010. Because our release automation account is listed as one of its maintainers, I looked into whether we still need it for anything... For those who have been around here a while, you may recall that the OpenStack project temporarily assumed maintenance[1] of lockfile in late 2014 and uploaded a few new releases for it, as a stop-gap until we could replace our uses with oslo.concurrency. That work was completed[2] early in the Liberty development cycle. Unfortunately for us, that's not the end of the story. I looked yesterday expecting to see that we've not needed lockfile for 7 years, and was disappointed to discover it's still in our constraints list. Why? After much wailing and gnashing of teeth and manually installing multiple bisections of the requirements list, I narrowed it down to one dependency: ansible-runner. Apparently, ansible-runner currently depends[3] on python-daemon, which still has a dependency on lockfile[4]. Our uses of ansible-runner seem to be pretty much limited to TripleO repositories (hence tagging them in the subject), so it's possible they could find an alternative to it and solve this dilemma. Optionally, we could try to help the ansible-runner or python-daemon maintainers with new implementations of the problem dependencies as a way out. Whatever path we take, we're long overdue. The reasons we moved off lockfile ages ago are still there, and the risk to us has only continued to increase in the meantime. 
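The bisection step described above can be shortened: pip's installed-package metadata records dependency edges in both directions, so the chain from ansible-runner down to lockfile can be read off directly. A sketch, assuming the packages are installed in the environment being inspected:

```shell
# "Requires:" lists forward edges and "Required-by:" lists reverse edges,
# so these queries expose the ansible-runner -> python-daemon -> lockfile chain.
python -m pip show ansible-runner | grep -i '^Requires'
python -m pip show python-daemon  | grep -i '^Requires'
python -m pip show lockfile       | grep -i '^Required-by'
```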
I'm open to suggestions, but we really ought to make sure we have it out of our constraints list by the Zed release. [0] https://discuss.python.org/t/17219 [1] https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html [2] https://review.openstack.org/151224 [3] https://github.com/ansible/ansible-runner/issues/379 [4] https://pagure.io/python-daemon/issue/42 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From kdhall at binghamton.edu Sat Jul 9 21:28:32 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sat, 9 Jul 2022 17:28:32 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga Message-ID: Hello, I've been told that Openstack now uses journald rather than rsyslog, but the Yoga docs still show a log aggregation host. Although I still prefer rsyslog, what I really prefer is to have logs for all hosts in the cluster collected in one place. How do I configure this for a fresh installation? Is it reasonable to assign this duty to an infrastructure host? Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Sat Jul 9 21:40:31 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sat, 9 Jul 2022 17:40:31 -0400 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? Message-ID: Hello. I'm preparing to do my first deployment (of Yoga) on real hardware. The documentation regarding br-vlan was hard for me to understand. Could I get a clarification on what to do with this? Note: my intended use case is as an academic/instructional environment. I'm thinking along the lines of treating each student as a separate tenant that would be preconfigured with templated set of VMs appropriate to the course content. 
Any other thoughts regarding this scenario would also be greatly appreciated. Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Sat Jul 9 23:36:04 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Sat, 9 Jul 2022 18:36:04 -0500 Subject: [event] OpenInfradays Mexico 2022 (Virtual) In-Reply-To: References: Message-ID: hey hey community!!!!! don't forget to share your knowledge with LATAM community -- https://openinfradays.mx 6:34 CFP will close on Monday next week, but send us an email and we can talk about waiting or extending the date On Tue, Jul 5, 2022 at 12:38 AM Alvaro Soto wrote: > Hello Community, > the CFP will close in 6 days, don?t forget to submit your proposals, we > only need the title and abstracts, video talk needs to be submitted later > on. > Remember that this is a virtual event and a great opportunity to share and > spread knowledge across the LATAM region. > > https://events.linuxfoundation.org/about/community/?_sft_lfevent-country=mx > https://openinfradays.mx/ > > Cheers! > > --- > Alvaro Soto > > Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you. > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > > On Thu, Jun 23, 2022, 6:13 PM Alvaro Soto wrote: > >> You're all invited to participate in the CFP for OID-MX22 >> https://openinfradays.mx >> >> Let me know if you have any questions. >> >> --- >> Alvaro Soto Escobar >> >> Note: My work hours may not be your work hours. Please do not feel the >> need to respond during a time that is not convenient for you. >> ---------------------------------------------------------- >> Great people talk about ideas, >> ordinary people talk about things, >> small people talk... 
about other people. >> > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From tjoen at dds.nl Sun Jul 10 05:25:15 2022 From: tjoen at dds.nl (tjoen) Date: Sun, 10 Jul 2022 07:25:15 +0200 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> On 7/9/22 23:40, Dave Hall wrote: > I'm preparing to do my first deployment (of Yoga) on real hardware. The > documentation regarding br-vlan was hard for me to understand. Could I > get a clarification on what to do with this? I think in Yoga br-vlan has been replaced by a real bridge. > Note: my intended use case is as an academic/instructional environment. > I'm thinking along the lines of treating each student as a separate tenant > that would be preconfigured with templated set of VMs appropriate to the > course content. In my case I am trying to get a new version running (login to Cirros) on an LFS system. I got Yoga working on py3.9. Currently I am migrating to py3.10 for Zed. From noonedeadpunk at gmail.com Sun Jul 10 06:57:52 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 08:57:52 +0200 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID: Hi Dave, The intended use case for br-vlan is when you want or need to provide VLAN networks in the environment. As an example, we use VLAN networks to bring in customer-owned public networks, as we need to pass the VLAN from the gateway to the compute nodes, and we are not able to set up VXLAN on the gateway due to the hardware that is used there. 
At the same time, in many environments you might not need VLANs at all, as VXLAN is what will be used by default to provide tenant networks. On Sat, 9 Jul 2022 at 23:44, Dave Hall wrote: > Hello. > > I'm preparing to do my first deployment (of Yoga) on real hardware. The > documentation regarding br-vlan was hard for me to understand. Could I > get a clarification on what to do with this? > > Note: my intended use case is as an academic/instructional environment. > I'm thinking along the lines of treating each student as a separate tenant > that would be preconfigured with templated set of VMs appropriate to the > course content. > > Any other thoughts regarding this scenario would also be greatly > appreciated. > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sun Jul 10 10:02:03 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 12:02:03 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID: Hey, Yes, indeed, we do store all logs with journald nowadays and rsyslog is not being used. These roles and documentation are our technical debt; we should have deprecated them back in Victoria. Journald for containers is bind-mounted, and you can check all container logs on the host. At the same time, there are plenty of tools to convert journald to any format of your taste, including rsyslog. That said, I would discourage using rsyslog, as with it you lose a lot of important metadata and the logs are hard to parse properly. If you're using any central logging tool, like ELK or Graylog, there are ways to forward the journal to these as well. 
We also have roles for elk or graylog in our ops repo https://opendev.org/openstack/openstack-ansible-ops Though we don't provide support for them, thus don't guarantee they're working as expected and some effort might be needed to update them. ??, 9 ???. 2022 ?., 23:33 Dave Hall : > Hello, > > I've been told that Openstack now uses journald rather than rsyslog, but > the Yoga docs still show a log aggregation host. Although I still prefer > rsyslog, what I really prefer is to have logs for all hosts in the cluster > collected in one place. How do I configure this for a fresh installation? > Is it reasonable to assign this duty to an infrastructure host? > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Sun Jul 10 13:42:50 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sun, 10 Jul 2022 09:42:50 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID: Two further questions: After looking a bit further, journald seems to have the ability to forward to an journald aggregation target. Is there an option in OpenStack-Ansible to configure this in all of the deployed containers? Are there any relevant services deployed in a typical OSA deployment that don't log to journald? Thanks. -Dave On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov wrote: > Hey, > > Yes, indeed, we do store all logs with journald nowadays and rsyslog is > not being used. These roles and documentation is our technical debt and we > should have deprecated that back in Victoria. > > Journald for containers is bind mounted and you can check for all > container logs on host. At the same time there plenty of tools to convert > journald to any format of your taste including rsyslog. 
To have that said, > I would discourage using rsyslog as with that you loose tons of important > metadata and it's hard to parse them properly. If you're using any central > logging tool, like elk or graylog, there re ways to forward journal to > these as well. We also have roles for elk or graylog in our ops repo > https://opendev.org/openstack/openstack-ansible-ops > Though we don't provide support for them, thus don't guarantee they're > working as expected and some effort might be needed to update them. > > ??, 9 ???. 2022 ?., 23:33 Dave Hall : > >> Hello, >> >> I've been told that Openstack now uses journald rather than rsyslog, but >> the Yoga docs still show a log aggregation host. Although I still prefer >> rsyslog, what I really prefer is to have logs for all hosts in the cluster >> collected in one place. How do I configure this for a fresh installation? >> Is it reasonable to assign this duty to an infrastructure host? >> >> Thanks. >> >> -Dave >> >> -- >> Dave Hall >> Binghamton University >> kdhall at binghamton.edu >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Sun Jul 10 13:50:56 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 15:50:56 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID: Yes, we have a role openstack.osa.journald_remote that should be shipped and installed during bootstrap: https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/journald_remote I think as of today only ceph can not log into journald. ??, 10 ???. 2022 ?., 15:43 Dave Hall : > Two further questions: > > After looking a bit further, journald seems to have the ability to forward > to an journald aggregation target. Is there an option in OpenStack-Ansible > to configure this in all of the deployed containers? 
> > Are there any relevant services deployed in a typical OSA deployment that > don't log to journald? > > Thanks. > > -Dave > > On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov > wrote: > >> Hey, >> >> Yes, indeed, we do store all logs with journald nowadays and rsyslog is >> not being used. These roles and documentation is our technical debt and we >> should have deprecated that back in Victoria. >> >> Journald for containers is bind mounted and you can check for all >> container logs on host. At the same time there plenty of tools to convert >> journald to any format of your taste including rsyslog. To have that said, >> I would discourage using rsyslog as with that you loose tons of important >> metadata and it's hard to parse them properly. If you're using any central >> logging tool, like elk or graylog, there re ways to forward journal to >> these as well. We also have roles for elk or graylog in our ops repo >> https://opendev.org/openstack/openstack-ansible-ops >> Though we don't provide support for them, thus don't guarantee they're >> working as expected and some effort might be needed to update them. >> >> ??, 9 ???. 2022 ?., 23:33 Dave Hall : >> >>> Hello, >>> >>> I've been told that Openstack now uses journald rather than rsyslog, but >>> the Yoga docs still show a log aggregation host. Although I still prefer >>> rsyslog, what I really prefer is to have logs for all hosts in the cluster >>> collected in one place. How do I configure this for a fresh installation? >>> Is it reasonable to assign this duty to an infrastructure host? >>> >>> Thanks. >>> >>> -Dave >>> >>> -- >>> Dave Hall >>> Binghamton University >>> kdhall at binghamton.edu >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
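For reference, what the openstack.osa.journald_remote role mentioned above sets up is essentially systemd's own upload/receive pair: systemd-journal-upload on each sender pointing at a systemd-journal-remote listener on the aggregation host. A hand-written equivalent, as a sketch (the aggregator host name is hypothetical, and the role's actual variables may differ):

```ini
# /etc/systemd/journal-upload.conf on each sending host or container.
# Requires the systemd-journal-remote package; activate with:
#   systemctl enable --now systemd-journal-upload
[Upload]
URL=http://log-aggregator.example.internal:19532
```

On the receiving side, systemd-journal-remote listens on port 19532 (configured in /etc/systemd/journal-remote.conf, e.g. SplitMode=host) and writes the received journals under /var/log/journal/remote/, where they remain queryable with journalctl.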
URL: From noonedeadpunk at gmail.com Sun Jul 10 13:59:57 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 10 Jul 2022 15:59:57 +0200 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID: Btw journald-remote on itself has quite a few long-lasting bugs, and it's development seems quite absent as of today. Most significant one is regarding rotation of logs on the remote server, check https://github.com/systemd/systemd/issues/5242 ??, 10 ???. 2022 ?., 15:43 Dave Hall : > Two further questions: > > After looking a bit further, journald seems to have the ability to forward > to an journald aggregation target. Is there an option in OpenStack-Ansible > to configure this in all of the deployed containers? > > Are there any relevant services deployed in a typical OSA deployment that > don't log to journald? > > Thanks. > > -Dave > > On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov > wrote: > >> Hey, >> >> Yes, indeed, we do store all logs with journald nowadays and rsyslog is >> not being used. These roles and documentation is our technical debt and we >> should have deprecated that back in Victoria. >> >> Journald for containers is bind mounted and you can check for all >> container logs on host. At the same time there plenty of tools to convert >> journald to any format of your taste including rsyslog. To have that said, >> I would discourage using rsyslog as with that you loose tons of important >> metadata and it's hard to parse them properly. If you're using any central >> logging tool, like elk or graylog, there re ways to forward journal to >> these as well. We also have roles for elk or graylog in our ops repo >> https://opendev.org/openstack/openstack-ansible-ops >> Though we don't provide support for them, thus don't guarantee they're >> working as expected and some effort might be needed to update them. >> >> ??, 9 ???. 
2022 ?., 23:33 Dave Hall : >> >>> Hello, >>> >>> I've been told that Openstack now uses journald rather than rsyslog, but >>> the Yoga docs still show a log aggregation host. Although I still prefer >>> rsyslog, what I really prefer is to have logs for all hosts in the cluster >>> collected in one place. How do I configure this for a fresh installation? >>> Is it reasonable to assign this duty to an infrastructure host? >>> >>> Thanks. >>> >>> -Dave >>> >>> -- >>> Dave Hall >>> Binghamton University >>> kdhall at binghamton.edu >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From neil at shrug.pw Sun Jul 10 15:28:15 2022 From: neil at shrug.pw (Neil Hanlon) Date: Sun, 10 Jul 2022 11:28:15 -0400 Subject: [OpenStack-Ansible] Log Aggregation in Yoga In-Reply-To: References: Message-ID: I had been poking at some updated documentation for OSA around remote journaling/centralized logging that I have been meaning to put in for review which may be useful here. I'll try and get to tidying it up in the next couple weeks. --Neil On Sun, Jul 10, 2022, 10:04 Dmitriy Rabotyagov wrote: > Btw journald-remote on itself has quite a few long-lasting bugs, and it's > development seems quite absent as of today. Most significant one is > regarding rotation of logs on the remote server, check > https://github.com/systemd/systemd/issues/5242 > > ??, 10 ???. 2022 ?., 15:43 Dave Hall : > >> Two further questions: >> >> After looking a bit further, journald seems to have the ability to >> forward to an journald aggregation target. Is there an option in >> OpenStack-Ansible to configure this in all of the deployed containers? >> >> Are there any relevant services deployed in a typical OSA deployment that >> don't log to journald? >> >> Thanks. >> >> -Dave >> >> On Sun, Jul 10, 2022, 6:02 AM Dmitriy Rabotyagov >> wrote: >> >>> Hey, >>> >>> Yes, indeed, we do store all logs with journald nowadays and rsyslog is >>> not being used. 
These roles and their documentation are our technical debt and we >>> should have deprecated them back in Victoria. >>> >>> Journald for containers is bind mounted and you can check all >>> container logs on the host. At the same time there are plenty of tools to convert >>> journald to any format of your taste, including rsyslog. That said, >>> I would discourage using rsyslog, as with it you lose tons of important >>> metadata and it's hard to parse properly. If you're using any central >>> logging tool, like ELK or Graylog, there are ways to forward the journal to >>> these as well. We also have roles for ELK and Graylog in our ops repo >>> https://opendev.org/openstack/openstack-ansible-ops >>> Though we don't provide support for them, and thus don't guarantee they're >>> working as expected; some effort might be needed to update them. >>> >>> On Sat, 9 Jul 2022 at 23:33, Dave Hall wrote: >>> >>>> Hello, >>>> >>>> I've been told that OpenStack now uses journald rather than rsyslog, >>>> but the Yoga docs still show a log aggregation host. Although I still >>>> prefer rsyslog, what I really prefer is to have logs for all hosts in the >>>> cluster collected in one place. How do I configure this for a fresh >>>> installation? Is it reasonable to assign this duty to an infrastructure >>>> host? >>>> >>>> Thanks. >>>> >>>> -Dave >>>> >>>> -- >>>> Dave Hall >>>> Binghamton University >>>> kdhall at binghamton.edu >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Sun Jul 10 17:39:18 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Sun, 10 Jul 2022 23:09:18 +0530 Subject: Podman issue when building custom horizon container Message-ID: Hi, I am following the https://access.redhat.com/documentation/zh-cn/red_hat_openstack_platform/16.0/html/introduction_to_the_openstack_dashboard/dashboard-customization guide to build the custom container image.
but when trying to push i am getting the following error: Error: writing blob: initiating layer upload to /v2/tripleomaster/openstack-horizon/blobs/uploads/ in 172.25.201.68:8787: StatusCode: 404, ... [root at hkg2director httpd]# tail -f image_serve_error.log [Fri Jul 08 12:58:09.788817 2022] [core:error] [pid 5033:tid 140483665843968] [client 172.25.163.196:33866] AH00126: Invalid URI in request GET /././.. HTTP/1.1 [Sun Jul 10 21:42:59.744102 2022] [negotiation:error] [pid 33922:tid 140484739585792] (2)No such file or directory: [client 172.25.201.106:58678] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 21:43:01.280405 2022] [negotiation:error] [pid 5035:tid 140483917494016] (2)No such file or directory: [client 172.25.201.97:45222] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 21:43:02.024126 2022] [negotiation:error] [pid 33922:tid 140484043319040] (2)No such file or directory: [client 172.25.201.103:34508] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 21:43:16.852186 2022] [negotiation:error] [pid 33922:tid 140484076889856] (2)No such file or directory: [client 172.25.201.105:53542] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 22:10:05.220181 2022] [negotiation:error] [pid 33922:tid 140485133846272] (2)No such file or directory: [client 172.25.201.106:56804] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 22:10:06.679326 2022] [negotiation:error] [pid 5035:tid 140485108668160] (2)No such file or directory: [client 172.25.201.103:54208] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map 
[Sun Jul 10 22:10:07.026650 2022] [negotiation:error] [pid 5035:tid 140483917494016] (2)No such file or directory: [client 172.25.201.97:35056] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Sun Jul 10 22:10:22.600075 2022] [negotiation:error] [pid 33922:tid 140484060104448] (2)No such file or directory: [client 172.25.201.105:45446] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-8.type-map [Mon Jul 11 01:23:45.354025 2022] [negotiation:error] [pid 129344:tid 139984250070784] (2)No such file or directory: [client 172.25.201.68:59514] AH00683: cannot access type map file: /var/lib/image-serve/v2/tripleomaster/openstack-horizon/manifests/0-9.type-map [stack at hkg2director horizon-themes]$ sudo podman push 172.25.201.68:8787/tripleomaster/openstack-horizon:0-9 WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf". WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf". WARN[0000] Failed to decode the keys ["network.network_backend"] from "/usr/share/containers/containers.conf". 
Getting image source signatures Copying blob 80c0be683ac9 skipped: already exists Copying blob bab1c7b6a899 [--------------------------------------] 8.0b / 59.5KiB Copying blob f9f09cbae066 [--------------------------------------] 8.0b / 59.5KiB Copying blob b7b591e3443f [--------------------------------------] 8.0b / 20.0KiB Copying blob bb83a400dc7e [--------------------------------------] 8.0b / 7.0KiB Copying blob 1ca9ec783ad6 [--------------------------------------] 8.0b / 7.0KiB Copying blob 3a6b7f2864e6 [--------------------------------------] 8.0b / 7.0KiB Copying blob ffd77c9907b7 [--------------------------------------] 8.0b / 9.0KiB Copying blob 6ae06b1bf643 [--------------------------------------] 8.0b / 7.5KiB Copying blob bc3dd4002908 [--------------------------------------] 8.0b / 7.5KiB Copying blob 8b974d9968c1 [--------------------------------------] 8.0b / 14.0KiB Copying blob 30f87a7de8b5 [--------------------------------------] 8.0b / 6.0KiB Copying blob aa02c13deb86 [--------------------------------------] 0.0b / 4.0KiB Copying blob 89b853af1aa2 [--------------------------------------] 8.0b / 14.0KiB Copying blob 3ee1ead6db5f [--------------------------------------] 8.0b / 59.5KiB Copying blob fd1baa2a1cd0 [--------------------------------------] 8.0b / 59.5KiB Copying blob 33afb269824d [--------------------------------------] 8.0b / 7.0KiB Copying blob f1117feaa844 [--------------------------------------] 8.0b / 7.0KiB Copying blob b36e5fbb3eab [--------------------------------------] 8.0b / 7.0KiB Copying blob 5d6773201dfb [--------------------------------------] 8.0b / 7.5KiB Copying blob f9ae49c1e307 [--------------------------------------] 8.0b / 7.5KiB Copying blob 64e624a7f305 [--------------------------------------] 8.0b / 6.0KiB Copying blob 2fa93f4f2a45 [--------------------------------------] 8.0b / 128.6MiB Copying blob ccf04fbd6e19 [--------------------------------------] 8.0b / 201.0MiB Copying blob 8a36fc7d2feb 
[--------------------------------------] 8.0b / 281.3MiB Error: writing blob: initiating layer upload to /v2/tripleomaster/openstack-horizon/blobs/uploads/ in 172.25.201.68:8787: StatusCode: 404, ... Can someone please guide me to fix this issue? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gael.therond at bitswalk.com Sun Jul 10 23:28:36 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Mon, 11 Jul 2022 01:28:36 +0200 Subject: [IRONIC] - Various questions around network features. Message-ID: Hi everyone, I'm currently working with Ironic again and it's amazing! However, during our demo session with our users a few questions arose. We're currently deploying nodes using a private VLAN that can't be reached from outside of the OpenStack network fabric (VLAN 101 - 192.168.101.0/24), and everything is fine with this provisioning network: our ToR switches all know about it and the other control-plane VLANs, such as the internal APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact the internal Ironic APIs. (When you declare a port as a trunk allowing all VLANs on an Aruba switch, it seems to automatically analyse the CIDR your host tries to reach from your VLAN and route everything to the corresponding VLAN that matches the destination IP.) So now, I still have a few small issues: 1) When I spawn a Nova instance on an Ironic host that is set to use a flat network (from Horizon, as a user), why does the Nova wizard still ask for a Neutron network if it's not set on the provisioned host by the IPA ramdisk right after the whole-disk image copy? Is that some missing development in Horizon, or did I miss something? 2) In a flat network layout deployment using the direct deploy scenario for images, am I still supposed to create an Ironic provisioning network in Neutron?
From my understanding (and indeed my tests) we don't, as any host booting on the provisioning VLAN will pick up an IP and initiate the bootp sequence, since the dnsmasq is just set to do that and provide the IPA ramdisk; but it's a bit confusing, as much documentation explicitly requires this network to exist in Neutron. 3) My whole OpenStack network setup is using Open vSwitch and VXLAN tunnels on top of a spine/leaf architecture using Aruba CX8360 switches (for both spines and leaves); am I required to use either the networking-generic-switch driver or a vendor Neutron driver? If so, how will this driver be able to instruct the switch to assign the host port the correct Open vSwitch VLAN id and register the correct VXLAN to Open vSwitch from this port? I mean, OK, Neutron knows the VXLAN and Open vSwitch the tunnel VLAN id/interface, but what is the glue for all of that? 4) I've successfully used OpenStack cloud-oriented CentOS and Debian images or snapshots of VMs to provision my hosts, which is an awesome feature, but I'm wondering if there is a way to let those hosts' cloud-init instances request the Neutron metadata endpoint? I was a bit surprised by the Ironic networking part, as I was expecting the IPA ramdisk to at least be able to set up the host OS with the appropriate network configuration file for whole-disk images that do not use encryption, by injecting that information from the Neutron API into the host disk while mounted (right after the image dd). All in all I really like the Ironic approach to the bare-metal provisioning process, and I'm pretty sure that I'm just missing a bit of understanding of the networking part, but it's really the most confusing part of it to me, as I feel like there is a missing link between Neutron and the host HW or the switches. Thanks a lot to anyone who takes the time to explain this to me :-) -------------- next part -------------- An HTML attachment was scrubbed...
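The metadata question (4) above usually comes down to using a config drive: bare-metal nodes typically cannot reach the Neutron metadata service at 169.254.169.254, so the instance metadata is instead attached to the node as a small disk that cloud-init reads at first boot. A minimal sketch, in which every resource name is an illustrative placeholder rather than something taken from this thread:

```shell
# Boot a bare-metal node with a config drive so cloud-init can read its
# metadata locally instead of querying the Neutron metadata service.
# All names (image, flavor, network, server) are hypothetical examples.
openstack server create \
  --image centos-whole-disk \
  --flavor baremetal-general \
  --network tenant-net \
  --config-drive true \
  bm-node-01
```

Ironic writes the config drive onto the node at deploy time, so cloud-init finds it on first boot even when no metadata service is reachable from the bare-metal network.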
URL: From iwienand at redhat.com Mon Jul 11 02:43:36 2022 From: iwienand at redhat.com (Ian Wienand) Date: Mon, 11 Jul 2022 12:43:36 +1000 Subject: [service-announce] Updating Zuul's Default Ansible Version to Ansible v5 In-Reply-To: <8f869fba-10b8-488c-8f58-065115822555@www.fastmail.com> References: <8f869fba-10b8-488c-8f58-065115822555@www.fastmail.com> Message-ID: On Wed, Jun 15, 2022 at 12:11:00PM -0700, Clark Boylan wrote: > The OpenDev team will be updating the default Ansible version in our > Zuul tenants from Ansible 2.9 to Ansible 5 on June 30, 2022. Zuul > itself will eventually update its default, but making the change in > our tenant configs allows us to control exactly when this happens. Note this has been merged with https://review.opendev.org/c/openstack/project-config/+/849120 Just for visibility I've cc'd this to openstack-discuss; but please subscribe to service-announce [1] if you're interested in such OpenDev infra updates. Thanks, -i [1] https://lists.opendev.org/cgi-bin/mailman/listinfo/service-announce From jonathan.rosser at rd.bbc.co.uk Mon Jul 11 07:28:35 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 11 Jul 2022 08:28:35 +0100 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> References: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> Message-ID: On 10/07/2022 06:25, tjoen wrote: > On 7/9/22 23:40, Dave Hall wrote: >> I'm preparing to do my first deployment (of Yoga) on real hardware. The >> documentation regarding br-vlan was hard for me to understand. Could I >> get a clarification on what to do with this? > > I think in Yoga br-vlan has been replaced by a real bridge With openstack deployed with Openstack-Ansible, br-vlan remains unchanged in the Yoga release. Jonathan.
From gibi at redhat.com Mon Jul 11 07:33:04 2022 From: gibi at redhat.com (Balazs Gibizer) Date: Mon, 11 Jul 2022 09:33:04 +0200 Subject: [nova] Review guide for PCI tracking for Placement patches In-Reply-To: References: Message-ID: <4BIUER.4Z8ORYASOYO42@redhat.com> On Tue, Jun 21 2022 at 02:04:01 PM +02:00:00, Balazs Gibizer wrote: > Hi Nova, > > The first batch of patches are up for review for the PCI tracking for > Placement feature. These mostly covers two aspects of the spec[1]: > 1) renaming [pci]passthrough_whitelist to [pci]device_spec > 2) pci inventory reporting to placement, excluding existing PCI > allocation healing in placement > > This covers the first 4 sub chapters of Proposed Change chapter of > the spec[1] up until "PCI alias configuration". I noted intentional > deviations from the spec in the spec review [2] and I will push a > follow up to the spec at some point fixing those. > > I tried to do it in small steps hence the long list of commits[3]: > > #2) pci inventory reporting to placement, excluding existing PCI > allocation healing in placement > 5827d56310 Stop if tracking is disable after it was enabled before > a4b5788858 Support [pci]device_spec reconfiguration > 10642c787a Reject devname based device_spec config > b0ad05fb69 Ignore PCI devs with physical_network tag > f5a34ee441 Reject mixed VF rc and trait config > c60b26014f Reject PCI dependent device config > 5cf7325221 Extend device_spec with resource_class and traits > eff0df6a98 Basics for PCI Placement reporting > #1) renaming [pci]passthrough_whitelist to [pci]device_spec > adfe34080a Rename whitelist in tests > ea955a0c15 Rename exception.PciConfigInvalidWhitelist to > PciConfigInvalidSpec > 55770e4c14 Rename [pci]passthrough_whitelist to device_spec > > There is a side track branching out from "adfe34080a Rename whitelist > in tests" to clean up the device spec handling[4]: > > 514500b5a4 Move __str__ to the PciAddressSpec base class > 3a6198c8fb Fix type annotation of 
pci.Whitelist class > f70adbb613 Remove unused PF checking from get_function_by_ifname > b7eef53b1d Clean up mapping input to address spec types > 93bbd67101 Poison /sys access via various calls in test > 467ef91a86 Remove dead code from PhysicalPciAddress > 233212d30f Fix PciAddressSpec descendants to call super.__init__ > ad5bd46f46 Unparent PciDeviceSpec from PciAddressSpec > cef0d2de4c Extra tests for remote managed dev spec > 2fa2825afb Add more test coverage for devname base dev spec > adfe34080a Rename whitelist in tests > > This is not a mandatory part of the feature but I think they improve > the code in hand and even fixing some small bugs. > > > I will continue with adding allocation healing for existing PCI > allocations. > > Any feedback is highly appreciated. Pinging this thread as I would like to ask for at least a high level review round to see that the implementation direction is OK before I produce the next bunch of commits of the series. > Cheers, > gibi > > [1] > https://specs.openstack.org/openstack/nova-specs/specs/zed/approved/pci-device-tracking-in-placement.html > [2] https://review.opendev.org/c/openstack/nova-specs/+/791047 > [3] > https://review.opendev.org/q/topic:bp/pci-device-tracking-in-placement > [4] https://review.opendev.org/q/topic:bp/pci-device-spec-cleanup > > From jonathan.rosser at rd.bbc.co.uk Mon Jul 11 07:34:19 2022 From: jonathan.rosser at rd.bbc.co.uk (Jonathan Rosser) Date: Mon, 11 Jul 2022 08:34:19 +0100 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: References: Message-ID: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk> If you choose to use vxlan for your tenant networks (the default in OSA) you would probably be using a vlan for the external provider network. This would default to br-vlan, but alternatively can be any interface of your choice. 
With the default configuration br-vlan would need to be present on all of your controller (or dedicated network) nodes and carry the tagged external vlan from your upstream switches. Jonathan. On 10/07/2022 07:57, Dmitriy Rabotyagov wrote: > Hi Dave, > > The intended use case for br-vlan is when you want or need to provide vlan > networks in the environment. > > As an example, we use vlan networks to bring in customer-owned public > networks, as we need to pass the vlan from the gateway to the compute > nodes, and we are not able to set up vxlan on the gateway due to the hardware > that is used there. > > At the same time, in many environments you might not need to use vlans > at all, as vxlan is what will be used by default to provide tenant > networks. > > On Sat, 9 Jul 2022 at 23:44, Dave Hall wrote: > > Hello. > > I'm preparing to do my first deployment (of Yoga) on > real hardware. The documentation regarding br-vlan was hard for > me to understand. Could I get a clarification on what to do with > this? > > Note: my intended use case is as an academic/instructional > environment. I'm thinking along the lines of treating each > student as a separate tenant that would be preconfigured with a > templated set of VMs appropriate to the course content. > > Any other thoughts regarding this scenario would also be greatly > appreciated. > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > -------------- next part -------------- An HTML attachment was scrubbed...
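The br-vlan wiring described above is expressed in OpenStack-Ansible through a provider network entry in /etc/openstack_deploy/openstack_user_config.yml. A minimal sketch, assuming a Linux Bridge agent; the container interface name and VLAN range are illustrative placeholders, not values from this thread:

```yaml
global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-vlan"
        container_type: "veth"
        container_interface: "eth11"   # placeholder interface inside the containers
        type: "vlan"
        range: "101:200"               # placeholder VLAN range trunked from the ToR
        net_name: "physnet1"
        group_binds:
          - neutron_linuxbridge_agent
```

The net_name value becomes the physical network label that Neutron vlan provider networks reference when they are created.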
URL: From cjeanner at redhat.com Mon Jul 11 07:39:08 2022 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 11 Jul 2022 09:39:08 +0200 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> Message-ID: Hello there, Tripleo can't really drop the dependency on ansible-runner, since it's the official and supported way to run ansible from within python. We could of course replace it by subprocess.Popen and the whole family, but we'll lose all the facilities, especially the ones involved in the Ansible Execution Environment, which is kind of the "future" for running ansible (for the better and the worst). Since ansible-runner has an open issue for that (you even linked it), and at least one open pull-request to correct this issue, I'm not really sure we're needing to do anything but follow that open issue and ensure we get the right version of the package once they merge any of the proposal to remove it. Would that be OK? Though, of course, it won't allow to set a deadline on a specific date... Cheers, C. On 7/9/22 15:26, Jeremy Stanley wrote: > It became apparent in a Python community discussion[0] yesterday > that lockfile has been designated as a "critical project" by PyPI, > even though it was effectively abandoned in 2010. Because our > release automation account is listed as one of its maintainers, I > looked into whether we still need it for anything... > > For those who have been around here a while, you may recall that the > OpenStack project temporarily assumed maintenance[1] of lockfile in > late 2014 and uploaded a few new releases for it, as a stop-gap > until we could replace our uses with oslo.concurrency. That work was > completed[2] early in the Liberty development cycle. > > Unfortunately for us, that's not the end of the story. 
I looked > yesterday expecting to see that we've not needed lockfile for 7 > years, and was disappointed to discover it's still in our > constraints list. Why? After much wailing and gnashing of teeth and > manually installing multiple bisections of the requirements list, I > narrowed it down to one dependency: ansible-runner. > > Apparently, ansible-runner currently depends[3] on python-daemon, > which still has a dependency on lockfile[4]. Our uses of > ansible-runner seem to be pretty much limited to TripleO > repositories (hence tagging them in the subject), so it's possible > they could find an alternative to it and solve this dilemma. > Optionally, we could try to help the ansible-runner or python-daemon > maintainers with new implementations of the problem dependencies as > a way out. > > Whatever path we take, we're long overdue. The reasons we moved off > lockfile ages ago are still there, and the risk to us has only > continued to increase in the meantime. I'm open to suggestions, but > we really ought to make sure we have it out of our constraints list > by the Zed release. > > [0] https://discuss.python.org/t/17219 > [1] https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html > [2] https://review.openstack.org/151224 > [3] https://github.com/ansible/ansible-runner/issues/379 > [4] https://pagure.io/python-daemon/issue/42 -- C?dric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From lucasagomes at gmail.com Mon Jul 11 09:13:16 2022 From: lucasagomes at gmail.com (Lucas Alvares Gomes) Date: Mon, 11 Jul 2022 10:13:16 +0100 Subject: [neutron] Bug Deputy Report July 04 - 11 Message-ID: Hi, This is the Neutron bug report from July 4th to 11th. Quieter week than usual. 
High: * https://bugs.launchpad.net/neutron/+bug/1980967 - "get_hypervisor_hostname helper function is failing silently" Assigned to: Miro Tomaska * https://bugs.launchpad.net/neutron/+bug/1981077 - "Remove import of 'imp' module" Unassigned * https://bugs.launchpad.net/neutron/+bug/1981113 - "OVN metadata agent can be slow with large amount of subnets" Assigned to: Miro Tomaska Medium: * https://bugs.launchpad.net/neutron/+bug/1980671 - "Neutron-dynamic-routing: missing transaction wrapper" Assigned to: Dr. Jens Harbott Cheers, Lucas -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Mon Jul 11 09:59:02 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Mon, 11 Jul 2022 11:59:02 +0200 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> Message-ID: We could, and maybe should, ask people working on ansible to give that issue/PR more attention. I see the PR is untriaged for some time now... On Mon, Jul 11, 2022 at 9:50 AM C?dric Jeanneret wrote: > Hello there, > > Tripleo can't really drop the dependency on ansible-runner, since it's > the official and supported way to run ansible from within python. We > could of course replace it by subprocess.Popen and the whole family, but > we'll lose all the facilities, especially the ones involved in the > Ansible Execution Environment, which is kind of the "future" for running > ansible (for the better and the worst). > > Since ansible-runner has an open issue for that (you even linked it), > and at least one open pull-request to correct this issue, I'm not really > sure we're needing to do anything but follow that open issue and ensure > we get the right version of the package once they merge any of the > proposal to remove it. > > Would that be OK? Though, of course, it won't allow to set a deadline on > a specific date... > > Cheers, > > C. 
> > On 7/9/22 15:26, Jeremy Stanley wrote: > > It became apparent in a Python community discussion[0] yesterday > > that lockfile has been designated as a "critical project" by PyPI, > > even though it was effectively abandoned in 2010. Because our > > release automation account is listed as one of its maintainers, I > > looked into whether we still need it for anything... > > > > For those who have been around here a while, you may recall that the > > OpenStack project temporarily assumed maintenance[1] of lockfile in > > late 2014 and uploaded a few new releases for it, as a stop-gap > > until we could replace our uses with oslo.concurrency. That work was > > completed[2] early in the Liberty development cycle. > > > > Unfortunately for us, that's not the end of the story. I looked > > yesterday expecting to see that we've not needed lockfile for 7 > > years, and was disappointed to discover it's still in our > > constraints list. Why? After much wailing and gnashing of teeth and > > manually installing multiple bisections of the requirements list, I > > narrowed it down to one dependency: ansible-runner. > > > > Apparently, ansible-runner currently depends[3] on python-daemon, > > which still has a dependency on lockfile[4]. Our uses of > > ansible-runner seem to be pretty much limited to TripleO > > repositories (hence tagging them in the subject), so it's possible > > they could find an alternative to it and solve this dilemma. > > Optionally, we could try to help the ansible-runner or python-daemon > > maintainers with new implementations of the problem dependencies as > > a way out. > > > > Whatever path we take, we're long overdue. The reasons we moved off > > lockfile ages ago are still there, and the risk to us has only > > continued to increase in the meantime. I'm open to suggestions, but > > we really ought to make sure we have it out of our constraints list > > by the Zed release. 
> > > > [0] https://discuss.python.org/t/17219 > > [1] > https://lists.openstack.org/pipermail/openstack-dev/2014-June/038387.html > > [2] https://review.openstack.org/151224 > > [3] https://github.com/ansible/ansible-runner/issues/379 > > [4] https://pagure.io/python-daemon/issue/42 > > -- > C?dric Jeanneret (He/Him/His) > Sr. Software Engineer - OpenStack Platform > Deployment Framework TC > Red Hat EMEA > https://www.redhat.com/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From development at manuel-bentele.de Mon Jul 11 10:18:38 2022 From: development at manuel-bentele.de (Manuel Bentele) Date: Mon, 11 Jul 2022 12:18:38 +0200 Subject: [all][dev] Where are cross-repository blueprints and specs located? Message-ID: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> Hi all, I have two general developer questions: * Where are the blueprints and specs located that address changes across the code base? * How to deal with changes that have dependencies on multiple repositories (where a specific merge and release order needs to be satisfied)? For example: An enumeration member has to be added and merged in repository A before it can be used in repository B and C. If this order is not satisfied, a DevStack setup may break. Regards, Manuel From smooney at redhat.com Mon Jul 11 11:13:35 2022 From: smooney at redhat.com (Sean Mooney) Date: Mon, 11 Jul 2022 12:13:35 +0100 Subject: [all][dev] Where are cross-repository blueprints and specs located? In-Reply-To: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> Message-ID: <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com> On Mon, 2022-07-11 at 12:18 +0200, Manuel Bentele wrote: > Hi all, > > I have two general developer questions: > > * Where are the blueprints and specs located that address changes > across the code base? 
In general we create "sibling specs" in each project when we have cross-project work. It's rare that more than 2 or 3 projects take part in any one feature or change, so we don't have a single repo for specs across all repos. The closest thing we have to that is a TC-selected cross-project goal, but even then, while the goal document might be held locally, we would still track any work on that goal within each project. > * How to deal with changes that have dependencies on multiple > repositories (where a specific merge and release order needs to be > satisfied)? > Zuul (or the CI/gating system) supports cross-repo dependencies and speculative merging for testing via "Depends-On:" lines in the commit message. That will prevent a patch on project A from merging until the patch it depends on in project B is merged; however, for testing, if you are using the devstack jobs it will locally merge the patch to project B when testing the patch to project A. This is generally referred to as co-gating. To enable this, the jobs have to declare the other project as a required-project, which will result in Zuul preparing a version of the git repo with all dependencies merged, which the jobs can then use instead of the current top of the master branch when testing. The simple tox-based unit tests do not support using deps from git; they install from PyPI, so unit/functional tests will not pass until a change in a lib is merged, but the integration jobs such as the devstack ones will work. In general, however, outside of constants and data structures that are part of the lib API, if project A depends on project B, project A should mock all calls to B in its unit tests and provide a test fixture for B in its functional tests where that is reasonable, if they want to mitigate the dependency issues. > For example: An enumeration member has to be added and > merged in repository A before it can be used in repository B and C. > If this order is not satisfied, a DevStack setup may break.
Yes, which is a good thing: we want devstack to break in this case, since the set of code you are deploying is broken. But from a CI perspective, Depends-On prevents this from happening. For local development you can override the branches used and can tell devstack to deploy specific patch revisions from Gerrit or specific git SHAs/branches by using the _repo and _branch vars that you can define in the local.conf. Look at the stackrc file for examples. > > Regards, > Manuel > > From development at manuel-bentele.de Mon Jul 11 12:13:13 2022 From: development at manuel-bentele.de (Manuel Bentele) Date: Mon, 11 Jul 2022 14:13:13 +0200 Subject: [all][dev] Where are cross-repository blueprints and specs located? In-Reply-To: <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com> References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com> Message-ID: <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de> Hi Sean, Thanks for your quick and detailed answer. It's good to know that such situations are solved with "sibling specs". Also, the "Depends-On:" hint was very informative for me, especially that this statement can be evaluated by the CI/CD chain to prevent it from breaking. Cheers, Manuel On 7/11/22 13:13, Sean Mooney wrote: > On Mon, 2022-07-11 at 12:18 +0200, Manuel Bentele wrote: >> Hi all, >> >> I have two general developer questions: >> >> * Where are the blueprints and specs located that address changes >> across the code base? > In general we create "sibling specs" in each project when we have > cross-project work. It's rare that more than 2 or 3 projects take part > in any one feature or change, so we don't have a single repo for specs across all > repos. > > The closest thing we have to that is a TC-selected cross-project goal, > but even then, while the goal document might be held locally, we would still > track any work on that goal within each project.
>> * How to deal with changes that have dependencies on multiple >> repositories (where a specific merge and release order needs to be >> satisfied)? >> > Zuul (or the CI/gating system) supports cross-repo dependencies and speculative merging > for testing via "Depends-On:" lines in the commit message. > > That will prevent a patch on project A from merging until the patch it depends on in project > B is merged; however, for testing, if you are using the devstack jobs it will locally merge > the patch to project B when testing the patch to project A. > > This is generally referred to as co-gating. > > To enable this, the jobs have to declare the other project as a required-project, > which will result in Zuul preparing a version of the git repo with all dependencies merged, > which the jobs can then use instead of the current top of the master branch when testing. > > The simple tox-based unit tests do not support using deps from git; they install > from PyPI, so unit/functional tests will not pass until a change in a lib is merged, > but the integration jobs such as the devstack ones will work. > > In general, however, outside of constants and data structures that are part of the > lib API, if project A depends on project B, project A should mock all calls to B in > its unit tests and provide a test fixture for B in its functional tests where that > is reasonable, if they want to mitigate the dependency issues. >> For example: An enumeration member has to be added and >> merged in repository A before it can be used in repository B and C. >> If this order is not satisfied, a DevStack setup may break. > Yes, which is a good thing: we want devstack to break in this case, since the > set of code you are deploying is broken. But from a CI perspective, Depends-On > prevents this from happening.
> > for local development you can override the branches used and can tell devstack > to deploy specific patch revisions from gerrit or specific git SHAs/branches > via the _repo and _branch vars that you can define in > the local.conf. > look at the stackrc file for examples. > >> Regards, >> Manuel -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jul 11 12:25:05 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 11 Jul 2022 12:25:05 +0000 Subject: [all][dev] Where are cross-repository blueprints and specs located? In-Reply-To: <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de> References: <32fcbfa6-8adf-cf7a-fc4d-a7667fbbfe6f@manuel-bentele.de> <52879fafe8d91f9699439a12ded0affbb7cb2feb.camel@redhat.com> <6ef35f20-f3f5-d5e4-3176-2d14ca377386@manuel-bentele.de> Message-ID: <20220711122504.zxdlh4aeq2s6kdpg@yuggoth.org> On 2022-07-11 14:13:13 +0200 (+0200), Manuel Bentele wrote: > Thanks for your quick and detailed answer. It's good to know that such > situations are solved with "sibling specs". Also the "Depends-On:" hint was > very informative for me, especially that this statement can be evaluated by > the CI/CD chain to prevent it from breaking. [...] For more details on this feature, see Zuul's documentation: https://zuul-ci.org/docs/zuul/latest/gating.html#cross-project-dependencies -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tjoen at dds.nl Mon Jul 11 16:31:09 2022 From: tjoen at dds.nl (tjoen) Date: Mon, 11 Jul 2022 18:31:09 +0200 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? 
In-Reply-To: References: <54268eb5-86ac-a907-3e57-f1c2c0869a8b@dds.nl> Message-ID: On 7/11/22 09:28, Jonathan Rosser wrote: > On 10/07/2022 06:25, tjoen wrote: >> On 7/9/22 23:40, Dave Hall wrote: >>> I'm preparing to do my first deployment (of Yoga) on real hardware. The >>> documentation regarding br-vlan was hard for me to understand. Could I >>> get a clarification on what to do with this? >> >> I think in Yoga br-vlan has been replaced by a real bridge > > With openstack deployed with Openstack-Ansible, br-vlan remains > unchanged in the Yoga release. I have no Ansible. Maybe that explains my situation From zigo at debian.org Mon Jul 11 16:33:37 2022 From: zigo at debian.org (Thomas Goirand) Date: Mon, 11 Jul 2022 18:33:37 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> Message-ID: <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> On 6/16/22 19:53, Stephen Finucane wrote: > On Thu, 2022-06-16 at 17:13 +0100, Stephen Finucane wrote: >> On Wed, 2022-06-15 at 01:04 +0100, Sean Mooney wrote: >>> On Wed, 2022-06-15 at 00:58 +0100, Sean Mooney wrote: >>>> On Tue, 2022-06-14 at 18:49 -0500, Ghanshyam Mann wrote: >>>>> ---- On Tue, 14 Jun 2022 17:47:59 -0500 Thomas Goirand wrote ---- >>>>> > On 6/13/22 00:10, Thomas Goirand wrote: >>>>> > > Hi, >>>>> > > >>>>> > > A few DDs are pushing me to upgrade jsonschema in Debian Unstable. >>>>> > > However, OpenStack global requirements are still stuck at 3.2.0. Is >>>>> > > there any reason for it, or should we attempt to upgrade to 4.6.0? 
>>>>> > > >>>>> > > I'd really appreciate it if someone (other than me) was driving this... >>>>> > > >>>>> > > Cheers, >>>>> > > >>>>> > > Thomas Goirand (zigo) >>>>> > > >>>>> > >>>>> > FYI, Nova fails with it: >>>>> > https://ci.debian.net/data/autopkgtest/unstable/amd64/n/nova/22676760/log.gz >>>>> > >>>>> > Can someone from the Nova team investigate? >>>>> >>>>> The Nova failures are due to the error message changes in the new jsonschema version (this happens regularly; they change the messages in most releases). >>>>> I remember we faced this type of issue previously as well, and we updated the >>>>> nova tests not to assert on the error message, but it seems like there are a few left which are failing with >>>>> jsonschema 4.6.0. >>>>> >>>>> Along with fixing these error message failures and using the latest jsonschema: in nova we use >>>>> Draft4Validator from jsonschema, and the latest validator is Draft7Validator, so we should check >>>>> what the backward-incompatible changes in Draft7Validator are and bump this too? 
>>>> >>>> well i think the reason this is on 3.2.0 is nothing to do with nova; >>>> i bumped it manually in https://review.opendev.org/c/openstack/requirements/+/845859/ >>>> >>>> The conflict is caused by: >>>> The user requested jsonschema===4.0.1 >>>> tripleo-common 16.4.0 depends on jsonschema>=3.2.0 >>>> rsd-lib 1.2.0 depends on jsonschema>=2.6.0 >>>> taskflow 4.7.0 depends on jsonschema>=3.2.0 >>>> zvmcloudconnector 1.4.1 depends on jsonschema>=2.3.0 >>>> os-net-config 15.2.0 depends on jsonschema>=3.2.0 >>>> task-core 0.2.1 depends on jsonschema>=3.2.0 >>>> python-zaqarclient 2.3.0 depends on jsonschema>=2.6.0 >>>> warlock 1.3.3 depends on jsonschema<4 and >=0.7 >>>> >>>> https://zuul.opendev.org/t/openstack/build/06ed295bb8244c16b48e2698c1049be9 >>>> >>>> it looks like warlock is clamping it to less than 4, which is why we are stuck on 3.2.0 >>> glance client seems to be the only real user of this >>> https://codesearch.opendev.org/?q=warlock&i=nope&literal=nope&files=&excludeFiles=&repos= >>> perhaps we could just remove the dependency? >> >> I've proposed vendoring this dependency in glanceclient [1]. I've also proposed >> a change to fix this in warlock [2] but given the lack of activity there, I >> doubt it'll merge anytime soon so the former sounds like a better option. > > My efforts to collect *all* the projects continue. Just cut 2.0.0 of warlock, so > we should see this make its way through the requirements machinery in the next > few days. I'll abandon the glanceclient change now. > > Stephen Hi Stephen, I hope you don't mind me pinging and bumping this thread. Thanks a lot for this work. Any more progress here? I'm being pressed by the Debian community to update jsonschema in Unstable, because 3.2.0 is breaking other software (at least 2 packages). If I do, I know things will break in OpenStack. So this *MUST* be fixed for Zed... 
Cheers, Thomas Goirand (zigo) From johnsomor at gmail.com Mon Jul 11 16:39:48 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Mon, 11 Jul 2022 09:39:48 -0700 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: References: Message-ID: +1 from me. Takashi has been doing some great work. Michael On Thu, Jun 30, 2022 at 6:44 AM Herve Beraud wrote: > > Hello everybody, > > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member of the oslo core team. > > During the last months Takashi has been a significant contributor to the oslo projects. > > Obviously we think he'd make a good addition to the core team. If there are no objections, I'll make that happen in a week. > > Thanks. > -- > Hervé Beraud > Senior Software Engineer at Red Hat > irc: hberaud > https://github.com/4383/ > https://twitter.com/4383hberaud > From kdhall at binghamton.edu Mon Jul 11 16:52:51 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Mon, 11 Jul 2022 12:52:51 -0400 Subject: [Openstack-Ansible] br-vlan configuration and intended usage? In-Reply-To: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk> References: <76fdd631-f5dc-5f45-7dc0-b1e375060857@rd.bbc.co.uk> Message-ID: OK. I think for my plan I would have students (as tenants) access their private subnets via an IP address on the native VLAN on my switch - probably via some 10.x.x.x IP address. I'm using a bonded 10G NIC with bridges on VLANs for mgmt, storage, and vxlan. It sounds like I should set up a bridge on the native VLAN for br-vlan, right? My apologies, but I'll admit that I haven't quite comprehended the nuances of OpenStack networking yet, especially regarding external access. I'm sure it will soon be as obvious to me as it is to all of you. (I'll also admit to a fear of having an error in my initial configuration that ends up being hard to correct. But, hey, Ansible. Right?) 
-Dave -- Dave Hall Binghamton University kdhall at binghamton.edu 607-760-2328 (Cell) 607-777-4641 (Office) On Mon, Jul 11, 2022 at 3:34 AM Jonathan Rosser < jonathan.rosser at rd.bbc.co.uk> wrote: > If you choose to use vxlan for your tenant networks (the default in OSA) > you would probably be using a vlan for the external provider network. > > This would default to br-vlan, but alternatively it can be any interface of > your choice. With the default configuration br-vlan would need to be > present on all of your controller (or dedicated network) nodes and carry > the tagged external vlan from your upstream switches. > > Jonathan. > On 10/07/2022 07:57, Dmitriy Rabotyagov wrote: > > Hi Dave, > > The intended use-case for br-vlan is when you want or need to provide vlan > networks in the environment. > > As an example, we use vlan networks to bring in customer-owned public > networks, as we need to pass the vlan from the gateway to the compute nodes, > and we are not able to set vxlan on the gateway due to the hardware that is > used there. > > At the same time, in many environments you might not need to use vlans at > all, as vxlan is what will be used by default to provide tenant networks. > > On Sat, 9 Jul 2022 at 23:44, Dave Hall wrote: > >> Hello. >> >> I'm preparing to do my first deployment (of Yoga) on real hardware. The >> documentation regarding br-vlan was hard for me to understand. Could I >> get a clarification on what to do with this? >> >> Note: my intended use case is as an academic/instructional environment. >> I'm thinking along the lines of treating each student as a separate tenant >> that would be preconfigured with a templated set of VMs appropriate to the >> course content. >> >> Any other thoughts regarding this scenario would also be greatly >> appreciated. >> >> Thanks. >> >> -Dave >> >> -- >> Dave Hall >> Binghamton University >> kdhall at binghamton.edu >> >> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rafaelweingartner at gmail.com Mon Jul 11 18:26:27 2022 From: rafaelweingartner at gmail.com (=?UTF-8?Q?Rafael_Weing=C3=A4rtner?=) Date: Mon, 11 Jul 2022 15:26:27 -0300 Subject: [CLOUDKITTY] Missed CloudKitty meeting today Message-ID: Hello guys, I would like to apologize for missing the CloudKitty meeting today. I was concentrating on some work, and my alarm for the meeting did not ring. If you need something, just let me know. Again, sorry for the inconvenience; see you guys at our next meeting. -- Rafael Weingärtner -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jul 11 18:36:28 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 11 Jul 2022 13:36:28 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 14 July 2022 at 1500 UTC Message-ID: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com> Hello Everyone, The Technical Committee's next weekly meeting is scheduled for 14 July 2022, at 1500 UTC. If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, 13 July at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From ozzzo at yahoo.com Mon Jul 11 20:10:53 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Mon, 11 Jul 2022 20:10:53 +0000 (UTC) Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> Message-ID: <479542002.91408.1657570253503@mail.yahoo.com> Unfortunately I was not able to get permission to be Adjutant PTL. They didn't say no, but the decision makers are too busy to address the issue. 
As I settle into this new position, I am realizing that I don't have time to do it anyway, so I will have to regretfully agree to placing Adjutant on the "inactive" list. If circumstances change, I will ask about resurrecting the project. Albert On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote: ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote --- > Hi Braden, > > Please let us know about the status of your company's permission to maintain the project. > As we are in Zed cycle development and there is no one to maintain/lead this project, we > need to start thinking about the next steps mentioned in the leaderless project etherpad > Hi Braden, We have not heard back from you on whether you can help in maintaining Adjutant. As it has had no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project list - https://review.opendev.org/c/openstack/governance/+/849153/1 -gmann > - https://etherpad.opendev.org/p/zed-leaderless > > -gmann > > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start full time. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote: > > > > Hi, > > After the last PTL elections [1] the Adjutant project doesn't have any PTL. It also didn't have a PTL in the Yoga cycle. > > So this is a call for maintainers for Adjutant. If you are using it or are interested in it, and if you are willing to help maintain this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. 
We can discuss possibilities: making someone a PTL of the project, or moving this project to the Distributed Project Leadership [2] model. > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From gagehugo at gmail.com Mon Jul 11 21:50:48 2022 From: gagehugo at gmail.com (Gage Hugo) Date: Mon, 11 Jul 2022 16:50:48 -0500 Subject: [openstack-helm] No Meeting Tomorrow Message-ID: Hey team, Since there's nothing on the agenda, tomorrow's meeting is cancelled. We will plan on meeting again same time next week. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Mon Jul 11 22:13:14 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 11 Jul 2022 15:13:14 -0700 Subject: [IRONIC] - Various questions around network features. In-Reply-To: References: Message-ID: Greetings! Hopefully these answers help! On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND wrote: > > Hi everyone, I'm currently working back again with Ironic and it's amazing! > > However, during our demo session to our users, a few questions arose. > > We're currently deploying nodes using a private vlan that can't be reached from outside of the Openstack network fabric (vlan 101 - 192.168.101.0/24), and everything is fine with this provisioning network, as our ToR switches all know about it and the other control plane VLANs, such as the internal APIs VLAN, which allows the IPA ramdisk to correctly and seamlessly contact the internal IRONIC APIs. Nice, I've had my lab configured like this in the past. 
> > (When you declare a port as a trunk allowing all vlans on an aruba switch, it seems it automatically analyses the CIDR your host tries to reach from your VLAN and routes everything to the corresponding VLAN that matches the destination IP.) > Ugh, that... could be fun :\ > So now, I still have a few tiny issues: > > 1°/- When I spawn a nova instance on an ironic host that is set to use a flat network (from horizon, as a user), why does the nova wizard still ask for a neutron network if it's not set on the provisioned host by the IPA ramdisk right after the whole disk image copy? Is that some missing development in horizon or did I miss something? Horizon just is not aware... and you can actually have entirely different DHCP pools on the same flat network, so that neutron network is intended for the instance's addressing to utilize. Ironic just asks for an allocation from the provisioning network, which can and *should* be a different network than the tenant network. > > 2°/- In a flat network layout deployment using the direct deploy scenario for images, am I still supposed to create an ironic provisioning network in neutron? > > From my understanding (and actually my tests) we don't, as any host booting on the provisioning vlan will pick up an IP and initiate the bootp sequence, as the dnsmasq is just set to do that and provide the IPA ramdisk, but it's a bit confusing as much documentation explicitly requires this network to exist in neutron. Yes. Direct is shorthand for "Copy it over the network and write it directly to disk". It still needs an IP address on the provisioning network (think subnet instead of distinct L2 broadcast domain). When you ask nova for an instance, it sends over what the machine should use as a "VIF" (neutron port); however, that is never actually bound configuration-wise into neutron until after the deployment completes. It *could* be that your neutron config is such that it just works anyway, but I suspect upstream contributors would be a bit confused if you reported an issue and had no provisioning network defined. 
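For reference, a dedicated provisioning network of the kind described above is usually created in neutron along these lines. This is only a sketch: the network name, physical network label, VLAN segment and addressing are hypothetical values loosely matching the thread's example, not a definitive recipe.

```
# Sketch: a dedicated ironic provisioning network/subnet in neutron.
# "physnet1", the VLAN segment and the CIDR are example values.
openstack network create provisioning \
    --provider-network-type vlan \
    --provider-physical-network physnet1 \
    --provider-segment 101
openstack subnet create provisioning-subnet \
    --network provisioning \
    --subnet-range 192.168.101.0/24 \
    --allocation-pool start=192.168.101.100,end=192.168.101.200
```

The resulting network UUID would then typically be referenced as the provisioning (cleaning) network in ironic's configuration.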
It *could* be that your neutron config is such that it just works anyway, but I suspect upstream contributors would be a bit confused if you reported an issue and had no provisioning network defined. > > 3?/- My whole Openstack network setup is using Openvswitch and vxlan tunnels on top of a spine/leaf architecture using aruba CX8360 switches (for both spine and leafs), am I required to use either the networking-generic-switch driver or a vendor neutron driver ? If that?s right, how will this driver be able to instruct the switch to assign the host port the correct openvswitch vlan id and register the correct vxlan to openvswitch from this port? I mean, ok neutron know the vxlan and openvswitch the tunnel vlan id/interface but what is the glue of all that? If your happy with flat networks, no. If you want tenant isolation networking wise, yes. NGS and Baremetal Port aware/enabled Neutron ML2 drivers take the port level local link configuration (well, Ironic includes the port information (local link connection, physical network, and some other details) to Neutron with the port binding request. Those ML2 drivers, then either request the switch configuration be updated, or take locally configured credentials to modify port configuration in Neutron, and logs into the switch to toggle the access port's configuration which the baremetal node is attached to. Generally, they are not vxlan network aware, and at least with networking-generic-switch vlan ID numbers are expected and allocated via neutron. Sort of like the software is logging into the switch and running something along the lines of "conf t;int gi0/21;switchport mode access;switchport access vlan 391 ; wri mem" > > 4?/- I?ve successfully used openstack cloud oriented CentOS and debian images or snapshot of VMs to provision my hosts, this is an awesome feature, but I?m wondering if there is a way to let those host cloud-init instance to request for neutron metadata endpoint? 
> Generally yes, you *can* use network attached metadata with neutron *as long as* your switches know to direct the traffic for the metadata IP to the Neutron metadata service(s). We know of operators who ahve done it without issues, but often that additional switch configured route is not always the best hting. Generally we recommend enabling and using configuration drives, so the metadata is able to be picked up by cloud-init. > I was a bit surprised about the ironic networking part as I was expecting the IPA ramdisk to at least be able to set the host os with the appropriate network configuration file for whole disk images that do not use encryption by injecting those information from the neutron api into the host disk while mounted (right after the image dd). > IPA has no knowledge of how to modify the host OS in this regard. modifying the host OS has generally been something the ironic community has avoided since it is not exactly cloudy to have to do so. Generally most clouds are running with DHCP, so as long as that is enabled and configured, things should generally "just work". Hopefully that provides a little more context. Nothing prevents you from writing your own hardware manager that does exactly this, for what it is worth. > All in all I really like the ironic approach of the baremetal provisioning process, and I?m pretty sure that I?m just missing a bit of understanding of the networking part but it?s really the most confusing part of it to me as I feel like if there is a missing link in between neutron and the host HW or the switches. > Thanks! It is definitely one of the more complex parts given there are many moving parts, and everyone wants (or needs) to have their networking configured just a little differently. Hopefully I've kind of put some of the details out there, if you need more information, please feel free to reach out, and also please feel free to ask questions in #openstack-ironic on irc.oftc.net. 
> Thanks a lot anyone that will take time to explain me this :-) :) From stephenfin at redhat.com Tue Jul 12 12:14:03 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Jul 2022 13:14:03 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> Message-ID: <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > On 6/16/22 19:53, Stephen Finucane wrote: > > On Thu, 2022-06-16 at 17:13 +0100, Stephen Finucane wrote: > > > On Wed, 2022-06-15 at 01:04 +0100, Sean Mooney wrote: > > > > On Wed, 2022-06-15 at 00:58 +0100, Sean Mooney wrote: > > > > > On Tue, 2022-06-14 at 18:49 -0500, Ghanshyam Mann wrote: > > > > > > ---- On Tue, 14 Jun 2022 17:47:59 -0500 Thomas Goirand wrote ---- > > > > > > > On 6/13/22 00:10, Thomas Goirand wrote: > > > > > > > > Hi, > > > > > > > > > > > > > > > > A few DDs are pushing me to upgrade jsonschema in Debian Unstable. > > > > > > > > However, OpenStack global requirements are still stuck at 3.2.0. Is > > > > > > > > there any reason for it, or should we attempt to upgrade to 4.6.0? > > > > > > > > > > > > > > > > I'd really appreciate if someone (else than me) was driving this... 
> > > > > > > > > > > > > > > > Cheers, > > > > > > > > > > > > > > > > Thomas Goirand (zigo) > > > > > > > > > > > > > > > > > > > > > > FYI, Nova fails with it: > > > > > > > https://ci.debian.net/data/autopkgtest/unstable/amd64/n/nova/22676760/log.gz > > > > > > > > > > > > > > Can someone from the Nova team investigate? > > > > > > > > > > > > Nova failures are due to the error message change (it happens regularly and they change them in most versions) > > > > > > in jsonschema new version. I remember we faced this type of issue previously also and we updated > > > > > > nova tests not to assert the error message but seems like there are a few left which is failing with > > > > > > jsonschema 4.6.0. > > > > > > > > > > > > Along with these error message failures fixes and using the latest jsonschema, in nova, we use > > > > > > Draft4Validator from jsonschema and the latest validator is Draft7Validator so we should check > > > > > > what are backward-incompatible changes in Draft7Validator and bump this too? 
> > > > > > > > > > well i think the reason this is on 3.2.0 iis nothing to do with nova > > > > > i bumpt it manually in https://review.opendev.org/c/openstack/requirements/+/845859/ > > > > > > > > > > The conflict is caused by: > > > > > The user requested jsonschema===4.0.1 > > > > > tripleo-common 16.4.0 depends on jsonschema>=3.2.0 > > > > > rsd-lib 1.2.0 depends on jsonschema>=2.6.0 > > > > > taskflow 4.7.0 depends on jsonschema>=3.2.0 > > > > > zvmcloudconnector 1.4.1 depends on jsonschema>=2.3.0 > > > > > os-net-config 15.2.0 depends on jsonschema>=3.2.0 > > > > > task-core 0.2.1 depends on jsonschema>=3.2.0 > > > > > python-zaqarclient 2.3.0 depends on jsonschema>=2.6.0 > > > > > warlock 1.3.3 depends on jsonschema<4 and >=0.7 > > > > > > > > > > https://zuul.opendev.org/t/openstack/build/06ed295bb8244c16b48e2698c1049be9 > > > > > > > > > > it looks like warlock is clamping it ot less then 4 which is why we are stokc on 3.2.0 > > > > glance client seams to be the only real user of this > > > > https://codesearch.opendev.org/?q=warlock&i=nope&literal=nope&files=&excludeFiles=&repos= > > > > perhaps we could jsut remove the dependcy? > > > > > > I've proposed vendoring this dependency in glanceclient [1]. I've also proposed > > > a change to fix this in warlock [2] but given the lack of activity there, I > > > doubt it'll merge anytime soon so the former sounds like a better option. > > > > My efforts to collect *all* the projects continues. Just cut 2.0.0 of warlock so > > we should see this make it's way through the requirements machinery in the next > > few days. I'll abandon the glanceclient change now. > > > > Stephen > > Hi Stephen, > > I hope you don't mind I ping and up this thread. > > Thanks a lot for this work. Any more progress here? We've uncapped warlock in openstack/requirements [1]. We just need the glance folks to remove their own cap now [2] so that we can raise the version in upper constraint. 
Stephen [1] https://review.opendev.org/c/openstack/requirements/+/849284 [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > I'm being pressed by the Debian community to update jsonschema in > Unstable, because 3.2.0 is breaking other software (at least 2 > packages). If I do, I know things will break in OpenStack. So this > *MUST* be fixed for Zed... > > Cheers, > > Thomas Goirand (zigo) > From stephenfin at redhat.com Tue Jul 12 15:50:12 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Jul 2022 16:50:12 +0100 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: References: Message-ID: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> On Thu, 2022-06-30 at 15:39 +0200, Herve Beraud wrote: > Hello everybody, > > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member of > the oslo core team. > > During the last months Takashi has been a significant contributor to the oslo > projects. > > Obviously we think he'd make a good addition to the core team. If there are no > objections, I'll make that happen in a week. > > Thanks. +1 from me. It would be great to have tkajinam onboard. Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jul 12 15:50:30 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 12 Jul 2022 16:50:30 +0100 Subject: Propose to add Tobias Urdin as Tooz core reviewer In-Reply-To: References: Message-ID: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com> On Thu, 2022-06-30 at 15:43 +0200, Herve Beraud wrote: > Hello everybody, > > It is my pleasure to propose Tobias Urdin (tobias-urdin) as a new member of > the Tooz project core team. > > During the last months Tobias has been a significant contributor to the Tooz > project. > > Obviously we think he'd make a good addition to the core team. If there are no > objections, I'll make that happen in a week. > > Thanks. 
+1 from me! Stephen -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Tue Jul 12 16:48:19 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 12 Jul 2022 22:18:19 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Shephard/Swogat, I tried changing the settings as suggested, and it looks like it has failed at step 4 with the error: :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:24:47.736198 | 2.21s 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity internal endpoint 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."} 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 Checking the endpoint list further, I see that only one endpoint for keystone is getting created. 
DeprecationWarning +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity | True | admin | http://30.30.30.173:35357 | | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ it looks like something related to SSL; we have also verified that the GUI login screen shows that the certificates are applied. exploring more in the logs; meanwhile, any suggestions or known observations would be of great help. thanks again for the support. Best Regards, Lokendra On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan wrote: > I had faced a similar kind of issue; for an ip-based setup you need to > specify the domain name as the ip that you are going to use. this error is > showing up because the ssl is ip-based but the fqdn seems to be > undercloud.com or overcloud.example.com. > I think for the undercloud you can change the undercloud.conf. > > And will it work if we specify the clouddomain parameter as the IP address for > the overcloud? because it seems he has not specified the clouddomain parameter, > and overcloud.example.com is the default domain for the overcloud. > > On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, > wrote: >> What is the domain name you have specified in the undercloud.conf file? >> And what is the fqdn name used for the generation of the SSL cert? 
>> >> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >> wrote: >> >>> Hi Team, >>> We were trying to install overcloud with SSL enabled for which the UC is >>> installed, but OC install is getting failed at step 4: >>> >>> ERROR >>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>> exact error", "rc": 1} >>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>> 600, in urlopen\n chunked=chunked)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>> in _make_request\n self._validate_conn(conn)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>> in _validate_conn\n conn.connect()\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>> connect\n _match_hostname(cert, self.assert_hostname or >>> server_hostname)\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>> handling of the above exception, another exception occurred:\n\nTraceback >>> (most recent call last):\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>> send\n timeout=timeout\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>> increment\n raise MaxRetryError(_pool, url, error or >>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>> in _send_request\n resp = 
self.session.request(method, url, **kwargs)\n >>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>> in request\n resp = self.send(prep, **send_kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>> send\n r = adapter.send(request, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 138, in _do_create_plugin\n authenticated=False)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 610, in get_discovery\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>> in get_discovery\n disc = Discover(session, url, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>> in __init__\n authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>> in get_version_data\n resp = session.get(url, headers=headers, >>> authenticated=authenticated)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>> in get\n return self.request(url, 'GET', **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>> request\n resp = send(**kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>> in _send_request\n raise >>> 
exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>> retries exceeded with url: / (Caused by >>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>> exception, another exception occurred:\n\nTraceback (most recent call >>> last):\n File \"\", line 102, in \n File \"\", line >>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>> mod_name, mod_spec, pkg_name, script_name)\n File >>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>> run_globals)\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 185, in \n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 181, in main\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>> line 407, in __call__\n File >>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>> line 141, in run\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 517, in search_services\n services = 
self.list_services()\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>> File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>> line 32, in _identity_client\n 'identity', min_version=2, >>> max_version='3.latest')\n File >>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>> **kwargs)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 271, in get_endpoint_data\n service_catalog = >>> self.get_access(session).service_catalog\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 206, in get_auth_ref\n self._plugin = >>> self._do_create_plugin(session)\n File >>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>> versioned identity endpoints when attempting to authenticate. Please check >>> that your auth_url is correct. 
>>> SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s
>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s
>>>
>>> PLAY RECAP *********************************************************************
>>> localhost               : ok=0   changed=0   unreachable=0   failed=0   skipped=2   rescued=0   ignored=0
>>> overcloud-controller-0  : ok=437 changed=104 unreachable=0   failed=0   skipped=214 rescued=0   ignored=0
>>> overcloud-controller-1  : ok=436 changed=101 unreachable=0   failed=0   skipped=214 rescued=0   ignored=0
>>> overcloud-controller-2  : ok=431 changed=101 unreachable=0   failed=0   skipped=214 rescued=0   ignored=0
>>> overcloud-novacompute-0 : ok=345 changed=83  unreachable=0   failed=0   skipped=198 rescued=0   ignored=0
>>> undercloud              : ok=28  changed=7   unreachable=0   failed=1   skipped=3   rescued=0   ignored=0
>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>>
>>> in the deploy.sh:
>>>
>>> openstack overcloud deploy --templates \
>>> -r /home/stack/templates/roles_data.yaml \
>>> --networks-file /home/stack/templates/custom_network_data.yaml \
>>> --vip-file /home/stack/templates/custom_vip_data.yaml \
>>> --baremetal-deployment
/home/stack/templates/overcloud-baremetal-deploy.yaml \
>>> --network-config \
>>> -e /home/stack/templates/environment.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \
>>> -e /home/stack/templates/ironic-config.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>>> -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>>> -e /home/stack/containers-prepare-parameter.yaml
>>>
>>> Additional lines (highlighted in yellow in the original message) were passed with modifications:
>>> tls-endpoints-public-ip.yaml:
>>> Passed as is in the defaults.
>>> enable-tls.yaml:
>>>
>>> # *******************************************************************
>>> # This file was created automatically by the sample environment
>>> # generator. Developers should use `tox -e genconfig` to update it.
>>> # Users are recommended to make changes to a copy of the file instead
>>> # of the original, if any customizations are needed.
>>> # *******************************************************************
>>> # title: Enable SSL on OpenStack Public Endpoints
>>> # description: |
>>> #   Use this environment to pass in certificates for SSL deployments.
>>> #   For these values to take effect, one of the tls-endpoints-*.yaml
>>> #   environments must also be used.
>>> parameter_defaults:
>>>   # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>>   # Type: boolean
>>>   HorizonSecureCookies: True
>>>
>>>   # Specifies the default CA cert to use if TLS is used for services in the public network.
>>>   # Type: string
>>>   PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>>
>>>   # The content of the SSL certificate (without Key) in PEM format.
>>>   # Type: string
>>>   SSLRootCertificate: |
>>>     -----BEGIN CERTIFICATE-----
>>>     ----*** CERTICATELINES TRIMMED **
>>>     -----END CERTIFICATE-----
>>>
>>>   SSLCertificate: |
>>>     -----BEGIN CERTIFICATE-----
>>>     ----*** CERTICATELINES TRIMMED **
>>>     -----END CERTIFICATE-----
>>>
>>>   # The content of an SSL intermediate CA certificate in PEM format.
>>>   # Type: string
>>>   SSLIntermediateCertificate: ''
>>>
>>>   # The content of the SSL Key in PEM format.
>>>   # Type: string
>>>   SSLKey: |
>>>     -----BEGIN PRIVATE KEY-----
>>>     ----*** CERTICATELINES TRIMMED **
>>>     -----END PRIVATE KEY-----
>>>
>>>   # ******************************************************
>>>   # Static parameters - these are values that must be
>>>   # included in the environment but should not be changed.
>>>   # ******************************************************
>>>   # The filepath of the certificate as it will be stored in the controller.
>>>   # Type: string
>>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>>
>>>   # *********************
>>>   # End static parameters
>>>   # *********************
>>>
>>> inject-trust-anchor.yaml
>>>
>>> # *******************************************************************
>>> # This file was created automatically by the sample environment
>>> # generator. Developers should use `tox -e genconfig` to update it.
>>> # Users are recommended to make changes to a copy of the file instead
>>> # of the original, if any customizations are needed.
>>> # *******************************************************************
>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>> # description: |
>>> #   When using an SSL certificate signed by a CA that is not in the default
>>> #   list of CAs, this environment allows adding a custom CA certificate to
>>> #   the overcloud nodes.
>>> parameter_defaults:
>>>   # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
>>>   # Mandatory. This parameter must be set by the user.
>>>   # Type: string
>>>   SSLRootCertificate: |
>>>     -----BEGIN CERTIFICATE-----
>>>     ----*** CERTICATELINES TRIMMED **
>>>     -----END CERTIFICATE-----
>>>
>>> resource_registry:
>>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>>
>>> The procedure to create such files was followed using:
>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>
>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed IP-based certificate, without DNS*.
>>>
>>> Any idea around this error would be of great help.
>>>
>>> --
>>> skype: lokendrarathour

From swogatpradhan22 at gmail.com Tue Jul 12 19:22:18 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Wed, 13 Jul 2022 00:52:18 +0530 Subject: CRITICAL rally [-] Unhandled error: KeyError: 'openstack' Edit | Openstack wallaby tripleo | centos 8 stream Message-ID:

Hi,
I am using openstack wallaby in tripleo architecture. I googled around and found I can use openstack-rally for testing the openstack deployment, and that it will be able to generate a report as well.
So I tried installing openstack-rally using "yum install openstack-rally" and "pip3 install openstack-rally", and finally cloned the git repo and ran "python3 setup.py install", but no matter what I do I am getting the error 'Unhandled error: KeyError: 'openstack''.

(overcloud) [root at hkg2director ~]# rally deployment create --fromenv --name=existing
+--------------------------------------+----------------------------+----------+------------------+--------+
| uuid                                 | created_at                 | name     | status           | active |
+--------------------------------------+----------------------------+----------+------------------+--------+
| 484aae52-a690-4163-828b-16adcaa0d8fb | 2022-06-07T05:48:39.039296 | existing | deploy->finished |        |
+--------------------------------------+----------------------------+----------+------------------+--------+
Using deployment: 484aae52-a690-4163-828b-16adcaa0d8fb
(overcloud) [root at hkg2director ~]# rally deployment show 484aae52-a690-4163-828b-16adcaa0d8fb
Command failed, please check log for more info
2022-06-07 13:48:58.651 482053 CRITICAL rally [-] Unhandled error: KeyError: 'openstack'
2022-06-07 13:48:58.651 482053 ERROR rally Traceback (most recent call last):
2022-06-07 13:48:58.651 482053 ERROR rally   File "/bin/rally", line 10, in
2022-06-07 13:48:58.651 482053 ERROR rally     sys.exit(main())
2022-06-07 13:48:58.651 482053 ERROR rally   File "/usr/local/lib/python3.6/site-packages/rally/cli/main.py", line 40, in main
2022-06-07 13:48:58.651 482053 ERROR rally     return cliutils.run(sys.argv, categories)
2022-06-07 13:48:58.651 482053 ERROR rally   File "/usr/local/lib/python3.6/site-packages/rally/cli/cliutils.py", line 669, in run
2022-06-07 13:48:58.651 482053 ERROR rally     ret = fn(*fn_args, **fn_kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally   File "/usr/local/lib/python3.6/site-packages/rally/cli/envutils.py", line 142, in inner
2022-06-07 13:48:58.651 482053 ERROR rally     return func(*args, **kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally   File
"/usr/local/lib/python3.6/site-packages/rally/plugins/__init__.py", line 59, in wrapper
2022-06-07 13:48:58.651 482053 ERROR rally     return func(*args, **kwargs)
2022-06-07 13:48:58.651 482053 ERROR rally   File "/usr/local/lib/python3.6/site-packages/rally/cli/commands/deployment.py", line 205, in show
2022-06-07 13:48:58.651 482053 ERROR rally     creds = deployment["credentials"]["openstack"][0]
2022-06-07 13:48:58.651 482053 ERROR rally KeyError: 'openstack'
2022-06-07 13:48:58.651 482053 ERROR rally

Can someone please help me fix this issue, or suggest which tool to use to test and benchmark the openstack deployment? Also, is tempest available for wallaby? I checked the opendev and github repos; the last tags available are for victoria.

With regards,
Swogat Pradhan

From dale at catalystcloud.nz Tue Jul 12 21:18:57 2022 From: dale at catalystcloud.nz (Dale Smith) Date: Wed, 13 Jul 2022 09:18:57 +1200 Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <479542002.91408.1657570253503@mail.yahoo.com> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com> Message-ID: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz>

Hi gmann and Albert,

I'd like to put my hand up for PTL of Adjutant if you are unable, Albert.

Catalyst Cloud continues to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of the Adjutant codebase alongside Adrian Turjak in 2015/2016.

cheers,
Dale Smith

On 12/07/22 08:10, Albert Braden wrote:
> Unfortunately I was not able to get permission to be Adjutant PTL.
> They didn't say no, but the decision makers are too busy to address
> the issue.
As I settle into this new position, I am realizing that I
> don't have time to do it anyway, so I will have to regretfully agree
> to placing Adjutant on the "inactive" list. If circumstances change, I
> will ask about resurrecting the project.
>
> Albert
> On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote:
>
> ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote ---
> > Hi Braden,
> >
> > Please let us know about the status of your company's permission to
> maintain the project.
> > As we are in Zed cycle development and there is no one to
> maintain/lead this project we
> > need to start thinking about the next steps mentioned in the
> leaderless project etherpad
>
> Hi Braden,
>
> We have not heard back from you if you can help in maintaining the
> Adjutant.
>
> As it has no PTL and no patches for the last 250 days, I am adding it
> to the 'Inactive' project list
> - https://review.opendev.org/c/openstack/governance/+/849153/1
>
> -gmann
>
> > - https://etherpad.opendev.org/p/zed-leaderless
> >
> > -gmann
> >
> > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ----
> > > I'm still waiting for permission to work on
> Adjutant. My contract ends this month and I'm taking 2 months off
> before I start fulltime. I have hope that permission will be granted
> while I'm out. I expect that I will be able to start working on
> Adjutant in June.
> > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote:
> > >
> > > Hi,
> > >
> > > After last PTL elections [1] Adjutant project don't have any PTL.
> > > It also didn't had PTL in the Yoga cycle already.
> > > So this is call for maintainters for Adjutant.
If You are using
> it or interested in it, and if You are willing to help maintaining
> this project, please contact TC members through this mailing list or
> directly on the #openstack-tc channel @OFTC. We can talk possibilities
> to make someone a PTL of the project or going with this project to the
> Distributed Project Leadership [2] model.
> >
> > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
> > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat

-------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:

From gmann at ghanshyammann.com Wed Jul 13 00:27:45 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Tue, 12 Jul 2022 19:27:45 -0500 Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com> <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> Message-ID: <181f4f4636e.1179ff511475329.1733911049127188418@ghanshyammann.com>

---- On Tue, 12 Jul 2022 16:18:57 -0500 Dale Smith wrote ---
> Hi gmann and Albert,
> I'd like to put my hand up for PTL of Adjutant if you are unable, Albert.
> Catalyst Cloud continue to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of Adjutant codebase alongside Adrian Turjak in 2015/2016.
Thanks Dale, please propose a patch in governance to update the PTL info. Example: https://review.opendev.org/c/openstack/governance/+/807884 -gmann > > > > cheers, > > Dale Smith > > > > On 12/07/22 08:10, Albert Braden wrote: > Unfortunately I was not able to get permission to be Adjutant PTL. They didn't say no, but the decision makers are too busy to address the issue. As I settle into this new position, I am realizing that I don't have time to do it anyway, so I will have to regretfully agree to placing Adjutant on the "inactive" list. If circumstances change, I will ask about resurrecting the project. > > Albert > On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote: > > ---- On Fri, 22 Apr 2022 15:53:37 -0500 Ghanshyam Mann wrote --- > > Hi Braden, > > > > Please let us know about the status of your company's permission to maintain the project. > > As we are in Zed cycle development and there is no one to maintain/lead this project we > > need to start thinking about the next steps mentioned in the leaderless project etherpad > > > Hi Braden, > > We have not heard back from you if you can help in maintaining the Adjutant. > > As it has no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project > list > - https://review.opendev.org/c/openstack/governance/+/849153/1 > > -gmann > > > - https://etherpad.opendev.org/p/zed-leaderless > > > > -gmann > > > > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- > > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. > > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote: > > > > > > Hi, > > > > > > After last PTL elections [1] Adjutant project don't have any PTL. It also didn't had PTL in the Yoga cycle already. 
> > > So this is call for maintainters for Adjutant. If You are using it or interested in it, and if You are willing to help maintaining this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can talk possibilities to make someone a PTL of the project or going with this project to the Distributed Project Leadership [2] model.
> > >
> > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html
> > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html
> > >
> > > --
> > > Slawek Kaplonski
> > > Principal Software Engineer
> > > Red Hat

From park0kyung0won at dgist.ac.kr Wed Jul 13 05:45:33 2022 From: park0kyung0won at dgist.ac.kr (=?UTF-8?B?67CV6rK97JuQ?=) Date: Wed, 13 Jul 2022 14:45:33 +0900 (KST) Subject: Guide for Openstack installation with HA, OVN ? Message-ID: <321848520.267831.1657691133322.JavaMail.root@mailwas2> An HTML attachment was scrubbed... URL:

From alsotoes at gmail.com Wed Jul 13 06:17:03 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Wed, 13 Jul 2022 01:17:03 -0500 Subject: Guide for Openstack installation with HA, OVN ? In-Reply-To: <321848520.267831.1657691133322.JavaMail.root@mailwas2> References: <321848520.267831.1657691133322.JavaMail.root@mailwas2> Message-ID:

Any idea on what you want to use to deploy your cluster?

https://docs.openstack.org/openstack-ansible/latest/
https://wiki.openstack.org/wiki/TripleO

Cheers!

On Wed, Jul 13, 2022 at 12:53 AM 박경원 wrote:

> Hello
>
> I've tried minimal installation of openstack following official
> documentation
>
> Now I want to install openstack with
>
> 1. High availability configuration - keystone, placement, neutron, glance,
> cinder, ...
>
> 2. Open Virtual Network for neutron driver
>
> But it's hard to find documentation online
>
> Could you provide some links for the material?
>
> Thank you!
>
--
Alvaro Soto

*Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.*
----------------------------------------------------------
Great people talk about ideas, ordinary people talk about things, small people talk... about other people.

From manchandavishal143 at gmail.com Wed Jul 13 07:36:37 2022 From: manchandavishal143 at gmail.com (vishal manchanda) Date: Wed, 13 Jul 2022 13:06:37 +0530 Subject: [horizon] Cancelling Today's Weekly meeting Message-ID:

Hello Team,

As discussed in the last weekly meeting, I am on vacation this week, so there will be no horizon weekly meeting today. If anything urgent comes up, please reach out to the horizon core team.

Thanks & regards,
Vishal Manchanda

From dmellado at redhat.com Wed Jul 13 08:37:32 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 13 Jul 2022 10:37:32 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> Message-ID: <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com>

Well... they expired... Is this a matter of just contacting JetBrains? I assume coolsvap won't be able to do this any longer, but IMHO this may just be handled by someone from the foundation/TC.

mnaser maybe? xD

CC'ing you just in case you'd like to step up, otherwise I'll try contacting JetBrains on my own ;)

Thanks!

Daniel

On 8/7/22 13:43, Lajos Katona wrote:
> Hi,
> Thanks for asking, I have the same problem; my license also expired this
> week.
>
> Lajos Katona
>
> Daniel Mellado wrote (on Fri, 8 Jul 2022, 10:04):
>
> So... no news about this? Should we just assume that the licenses will
> no longer be renewed? Bummer...
> > On 7/7/22 13:08, Daniel Mellado wrote:
> > Just noticed that as well, thanks for bringing this up Eyal!
> >
> > On 7/7/22 12:04, Eyal B wrote:
> >> Hello,
> >>
> >> Will the licenses be renewed? They ended on July 5
> >>
> >> Eyal
> >>
> >> On Thu, Jul 8, 2021 at 10:52 AM Swapnil Kulkarni wrote:
> >>
> >>     Sorry for the typo, It'd be July 5, 2022
> >>
> >>     On Thu, Jul 8, 2021 at 12:34 PM Kobi Samoray wrote:
> >>
> >>         Hi Swapnil,
> >>
> >>         We're at July 2021 already, so they expire at the end of this
> >>         month?
> >>
> >>         *From: *Swapnil Kulkarni
> >>         *Date: *Tuesday, 6 July 2021 at 17:50
> >>         *To: *openstack-discuss at lists.openstack.org
> >>         *Subject: *[all] PyCharm Licenses Renewed till July 2021
> >>
> >>         Hello,
> >>
> >>         Happy to inform you the open source developer license for
> >>         Pycharm has been renewed for 1 additional year till July 2021.
> >>
> >>         Best Regards,
> >>         Swapnil Kulkarni
> >>         coolsvap at gmail dot com
> >>

From thierry at openinfra.dev Wed Jul 13 09:04:54 2022 From: thierry at openinfra.dev (Thierry Carrez) Date: Wed, 13 Jul 2022 11:04:54 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> Message-ID:

Daniel Mellado wrote:
> Well... they expired... Is this a matter of just contacting JetBrains? I
> assume coolsvap won't be able to do this any longer, but IMHO this may
> just be handled by someone from the foundation/TC.
>
> mnaser maybe?
xD > > CC'ing you just in case you'd like to step up, otherwise I'll try > contacting JetBrains on my own ;) I'd recommend that someone relying on JetBrains handles the relationship. This is why Swapnil was handling it before :) -- Thierry Carrez From senrique at redhat.com Wed Jul 13 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 13 Jul 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 07-13-2022 Message-ID: This is a bug report from 07-06-2022 to 07-13-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- High - https://bugs.launchpad.net/os-brick/+bug/1981455 "RBD disconnect fails with AttributeError for startswith." Fix proposed to master. Medium - https://bugs.launchpad.net/cinder/+bug/1981068 "Dell PowerStore - cinder cannot delete volumes." No patch proposed to master yet. - https://bugs.launchpad.net/cinder/+bug/1981420 "Dell PowerMax - error when creating synchronous volumes." Low - https://bugs.launchpad.net/cinder/+bug/1980870 "Dell PowerMax driver may deadlock moving volumes between SGs." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1981354 "Infinidat Cinder driver does not return all iSCSI portals for multipath storage." Fix proposed to master. Incomplete - https://bugs.launchpad.net/cinder/+bug/1981211 "[stable/yoga] Update attachment failed for attachment." Unassigned. Cheers, Sofia -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dmellado at redhat.com Wed Jul 13 11:21:34 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 13 Jul 2022 13:21:34 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: <123a6045-0ab2-e5d4-f200-53c6d5493137@redhat.com> Totally agree. In any case, if there's anyone who relies on JetBrains and would be able to take this over, please feel free to do so; I've sent an email to JetBrains asking about this. Initial reply below: ##- Please type your reply above this line -## Hello, Thanks for contacting JetBrains Community Support. Your request (4161236) has been received and is being reviewed by our staff. To add additional comments, reply to this email or follow the link below: https://community-support.jetbrains.com/hc/requests/4161236 Thanks! ;) On 13/7/22 11:04, Thierry Carrez wrote: > Daniel Mellado wrote: >> Well... they expired... Is this a matter of just contacting JetBrains? >> I assume coolsvap won't be able to do this any longer, but IMHO this >> may just be handled by someone from the foundation/TC. >> >> mnaser maybe? xD >> >> CC'ing you just in case you'd like to step up, otherwise I'll try >> contacting JetBrains on my own ;) > > I'd recommend that someone relying on JetBrains handles the > relationship.
This is why Swapnil was handling it before :) > From smooney at redhat.com Wed Jul 13 11:48:23 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 13 Jul 2022 12:48:23 +0100 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> On Wed, 2022-07-13 at 11:04 +0200, Thierry Carrez wrote: > Daniel Mellado wrote: > > Well... they expired... Is this a matter of just contacting JetBrains? I > > assume coolsvap won't be able to do this any longer, but IMHO this may > > just be handled by someone from the foundation/TC. > > > > mnaser maybe? xD > > > > CC'ing you just in case you'd like to step up, otherwise I'll try > > contacting JetBrains on my own ;) > > I'd recommend that someone relying on JetBrains handles the > relationship. This is why Swapnil was handling it before :) I believe you can still use the Community Edition without a license for open-source or personal work, by the way, and I'm not sure whether the current license covers only future updates or whether the software can only be used during the license period. If it is just for future updates, you can continue to use it while you reach out to JetBrains. I have used IntelliJ and PyCharm from time to time, but I normally use emacs/nano; on rare occasions having a full-fledged IDE has been handy for debugging, though in general my experience is that eventlet tends to break it outside of simple unit tests, but the gevent compat option can help.
Thierry, maybe the foundation could help get OpenDev added to https://www.jetbrains.com/community/opensource/#partner, but in general it's probably better for existing users to reach out to opensource at jetbrains.com > From fungi at yuggoth.org Wed Jul 13 12:20:21 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Jul 2022 12:20:21 +0000 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> Message-ID: <20220713122021.kscrifp3l5pe7od3@yuggoth.org> On 2022-07-13 12:48:23 +0100 (+0100), Sean Mooney wrote: [...] > thierry maybe the foundation could help get opendev added to > https://www.jetbrains.com/community/opensource/#partner [...] Maybe you meant OpenInfra (the foundation)? Given the OpenDev Collaboratory's strong stance in favor of purely free/libre open source developer tools, OpenDev partnering with a proprietary software vendor in order to get developers gratis access to closed-source tools would definitely send the wrong signal. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From lokendrarathour at gmail.com Wed Jul 13 12:13:55 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 13 Jul 2022 17:43:55 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Team, Any input on the case raised below?
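Both failures reported in this thread reduce to TLS hostname verification against an endpoint reached by IP literal, while the certificate only names a DNS hostname (`undercloud.com` / `overcloud.example.com`). For an IP-based deployment the certificate needs to carry the endpoint IP in a subjectAltName entry. A minimal sketch of the certificate side only (illustrative, not TripleO's own certificate workflow: the file names and CN are placeholders, the IPv6 address is the one from the reported errors, and `-addext` assumes OpenSSL 1.1.1 or newer):

```shell
# Issue a self-signed certificate whose subjectAltName carries the IP
# literal that clients will dial, so IP-based verification can succeed.
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout overcloud.key -out overcloud.crt -days 365 \
  -subj "/CN=overcloud-endpoint" \
  -addext "subjectAltName = IP:fd00:fd00:fd00:9900::81"

# Inspect which names the certificate actually contains; if only DNS
# names show up here, clients connecting by IP will raise the same
# CertificateError quoted in this thread.
openssl x509 -in overcloud.crt -noout -ext subjectAltName
```

The second command should list the address as an `IP Address:` entry; a certificate with only DNS SANs (or a bare CN) will not satisfy verification for clients that connect by IP.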
Thanks, Lokendra On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour wrote: > Hi Shephard/Swogat, > I tried changing the setting as suggested and it looks like it has failed > at step 4 with error: > > :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | > tripleo_keystone_resources : Create identity public endpoint | undercloud | > 0:24:47.736198 | 2.21s > 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | > TASK | Create identity internal endpoint > 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | > FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."} > 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 > > Checking further the endpoint list: > I see only one endpoint for keystone is getting created.
> > DeprecationWarning
> > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
> > | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                     |
> > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
> > | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone     | identity     | True    | admin     | http://30.30.30.173:35357               |
> > | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone     | identity     | True    | internal  | http://[fd00:fd00:fd00:2000::326]:5000  |
> > | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone     | identity     | True    | public    | https://[fd00:fd00:fd00:9900::81]:13000 |
> > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
> > it looks like something related to SSL; we have also verified that the GUI login screen shows that certificates are applied. > exploring the logs further; meanwhile, any suggestions or known observations would be of great help. > thanks again for the support. > > Best Regards, > Lokendra > > On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan > wrote: > >> I had faced a similar kind of issue: for an IP-based setup you need to >> specify the domain name as the IP that you are going to use. This error is >> showing up because the SSL is IP-based but the FQDNs seem to be >> undercloud.com or overcloud.example.com. >> I think for the undercloud you can change undercloud.conf. >> >> And will it work if we specify the CloudDomain parameter as the IP address >> for the overcloud? Because it seems he has not specified the CloudDomain >> parameter, and overcloud.example.com is the default domain for >> the overcloud. >> >> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >> wrote: >> >>> What is the domain name you have specified in the undercloud.conf file?
>>> And what is the fqdn name used for the generation of the SSL cert? >>> >>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, >>> wrote: >>> >>>> Hi Team, >>>> We were trying to install overcloud with SSL enabled for which the UC >>>> is installed, but OC install is getting failed at step 4: >>>> >>>> ERROR >>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries >>>> exceeded with url: / (Caused by SSLError(CertificateError(\"hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\",),))\n", >>>> "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the >>>> exact error", "rc": 1} >>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>> 600, in urlopen\n chunked=chunked)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>> in _make_request\n self._validate_conn(conn)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>> in _validate_conn\n conn.connect()\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>> connect\n _match_hostname(cert, self.assert_hostname or >>>> server_hostname)\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>> handling of the above exception, another exception occurred:\n\nTraceback >>>> (most recent call last):\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>> send\n timeout=timeout\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>> increment\n raise MaxRetryError(_pool, url, error or >>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>> in 
_send_request\n resp = self.session.request(method, url, **kwargs)\n >>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>> send\n r = adapter.send(request, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>> in get_discovery\n disc = Discover(session, url, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>> in __init__\n authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>> in get_version_data\n resp = session.get(url, headers=headers, >>>> authenticated=authenticated)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>> request\n resp = send(**kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>> in 
_send_request\n raise >>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>> exception, another exception occurred:\n\nTraceback (most recent call >>>> last):\n File \"\", line 102, in \n File \"\", line >>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>> run_globals)\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 185, in \n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 181, in main\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>> line 407, in __call__\n File >>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>> line 141, in run\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 517, in 
search_services\n services = self.list_services()\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>> File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>> line 32, in _identity_client\n 'identity', min_version=2, >>>> max_version='3.latest')\n File >>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>> **kwargs)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 271, in get_endpoint_data\n service_catalog = >>>> self.get_access(session).service_catalog\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 206, in get_auth_ref\n self._plugin = >>>> self._do_create_plugin(session)\n File >>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>> versioned identity endpoints when attempting to authenticate. 
Please check >>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>> retries exceeded with url: / (Caused by >>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.271914 | 2.47s >>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:01.273659 | 2.47s >>>> >>>> PLAY RECAP >>>> ********************************************************************* >>>> localhost : ok=0 changed=0 unreachable=0 >>>> failed=0 skipped=2 rescued=0 ignored=0 >>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>> failed=0 skipped=198 rescued=0 ignored=0 >>>> undercloud : ok=28 changed=7 unreachable=0 >>>> failed=1 skipped=3 rescued=0 ignored=0 >>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary >>>> Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>> >>>> >>>> in the deploy.sh: >>>> >>>> openstack overcloud deploy --templates \ >>>> -r /home/stack/templates/roles_data.yaml \ >>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>> --vip-file 
/home/stack/templates/custom_vip_data.yaml \ >>>> --baremetal-deployment >>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>> --network-config \ >>>> -e /home/stack/templates/environment.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>> \ >>>> -e /home/stack/templates/ironic-config.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>> \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>> -e >>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>> -e /home/stack/containers-prepare-parameter.yaml >>>> >>>> Addition lines as highlighted in yellow were passed with modifications: >>>> tls-endpoints-public-ip.yaml: >>>> Passed as is in the defaults. >>>> enable-tls.yaml: >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. 
>>>> # ******************************************************************* >>>> # title: Enable SSL on OpenStack Public Endpoints >>>> # description: | >>>> # Use this environment to pass in certificates for SSL deployments. >>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>> # environments must also be used. >>>> parameter_defaults: >>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>> # Type: boolean >>>> HorizonSecureCookies: True >>>> >>>> # Specifies the default CA cert to use if TLS is used for services in >>>> the public network. >>>> # Type: string >>>> PublicTLSCAFile: >>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>> >>>> # The content of the SSL certificate (without Key) in PEM format. >>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> >>>> SSLCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> # The content of an SSL intermediate CA certificate in PEM format. >>>> # Type: string >>>> SSLIntermediateCertificate: '' >>>> >>>> # The content of the SSL Key in PEM format. >>>> # Type: string >>>> SSLKey: | >>>> -----BEGIN PRIVATE KEY----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END PRIVATE KEY----- >>>> >>>> # ****************************************************** >>>> # Static parameters - these are values that must be >>>> # included in the environment but should not be changed. >>>> # ****************************************************** >>>> # The filepath of the certificate as it will be stored in the >>>> controller. 
>>>> # Type: string >>>> DeployedSSLCertificatePath: >>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>> >>>> # ********************* >>>> # End static parameters >>>> # ********************* >>>> >>>> inject-trust-anchor.yaml >>>> >>>> # ******************************************************************* >>>> # This file was created automatically by the sample environment >>>> # generator. Developers should use `tox -e genconfig` to update it. >>>> # Users are recommended to make changes to a copy of the file instead >>>> # of the original, if any customizations are needed. >>>> # ******************************************************************* >>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>> # description: | >>>> # When using an SSL certificate signed by a CA that is not in the >>>> default >>>> # list of CAs, this environment allows adding a custom CA certificate >>>> to >>>> # the overcloud nodes. >>>> parameter_defaults: >>>> # The content of a CA's SSL certificate file in PEM format. This is >>>> evaluated on the client side. >>>> # Mandatory. This parameter must be set by the user. >>>> # Type: string >>>> SSLRootCertificate: | >>>> -----BEGIN CERTIFICATE----- >>>> ----*** CERTICATELINES TRIMMED ** >>>> -----END CERTIFICATE----- >>>> >>>> resource_registry: >>>> OS::TripleO::NodeTLSCAData: >>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>> >>>> >>>> >>>> >>>> The procedure to create such files was followed using: >>>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>>> >>>> >>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed IP-based >>>> certificate, without DNS. * >>>> >>>> Any idea around this error would be of great help. >>>> >>>> -- >>>> skype: lokendrarathour >>>> >>>> >>>> > > > -- -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kdhall at binghamton.edu Wed Jul 13 12:55:07 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Wed, 13 Jul 2022 08:55:07 -0400 Subject: [OpenStack-Ansible][Neutron] Guide for Openstack installation with HA, OVN ? Message-ID: Hello. I would like to extend Park's question with a specific question of my own that could be used as an example of the 'first-time deployer' experience. My question is about Neutron deployment as it is specified in the various openstack_user_config.yml examples. openstack_user_config.yml.prod.example declares network-infra_hosts and network-agent_hosts, whereas openstack_user_config.yml.singlebond.example only declares network_hosts. For the novice trying to choose which example file to customize for our deployment, the following concerns arise: - If I use network_hosts, does that deploy both the infra and the agent on each network_host? - If I use network-infra_hosts and network-agent_hosts, but give both the same set of host names/IPs, will it deploy correctly and produce a working Neutron service? - If I successfully deploy a smaller cluster using network_hosts, and then grow my cluster, what criteria indicate the point at which I need to switch to infra_hosts and agent_hosts, and how many of each should I deploy? In reading through the Neutron documentation, I see some 'what' and some 'how', but very little 'why'. Perhaps a section titled 'Practical Deployment Considerations'? At this point I just want to know how to make Neutron deployment work right and how to be able to tell, after deployment, that it is working right. Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu On Wed, Jul 13, 2022 at 1:46 AM ??? wrote: > Hello > > > I've tried minimal installation of openstack following official > documentation > > Now I want to install openstack with > > > 1. High availability configuration - keystone, placement, neutron, glance, cinder, ... > > 2.
Open Virtual Network for neutron driver > > But it's hard to find documentation online > > Could you provide some links for the material? > > Thank you! > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jul 13 13:02:48 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 13 Jul 2022 14:02:48 +0100 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: <20220713122021.kscrifp3l5pe7od3@yuggoth.org> References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> <8c0a1bed6e4ed9b8550cfeefa756f313624ca231.camel@redhat.com> <20220713122021.kscrifp3l5pe7od3@yuggoth.org> Message-ID: On Wed, 2022-07-13 at 12:20 +0000, Jeremy Stanley wrote: > On 2022-07-13 12:48:23 +0100 (+0100), Sean Mooney wrote: > [...] > > thierry maybe the foundation could help get opendev added to > > https://www.jetbrains.com/community/opensource/#partner > [...] > > Maybe you meant OpenInfra (the foundation)? > > Given the OpenDev Collaboratory's strong stance in favor of purely > free/libre open source developer tools, OpenDev partnering with a > proprietary software vendor in order to get developers gratis access > to closed-source tools would definitely send the wrong signal. Sorry, yes, I meant OpenInfra, but I don't really think it's needed; that was just the only thing I saw that the foundation could do that users could not do themselves by reaching out to JetBrains directly for their personal licenses.
From dmellado at redhat.com Wed Jul 13 14:43:55 2022 From: dmellado at redhat.com (Daniel Mellado) Date: Wed, 13 Jul 2022 16:43:55 +0200 Subject: [all] PyCharm Licenses Renewed till July 2021 In-Reply-To: References: <88b78f27-f1e8-d896-26f7-5363b8a87687@redhat.com> <898acbf6-3e71-8a8b-3b31-10f5536630f7@redhat.com> <3f8be3b1-0000-b6b2-d451-1c6571d613b1@redhat.com> Message-ID: <03861c73-b7b4-d011-39e6-184f0e1587a0@redhat.com> Hi all, besides all the comments, which I really appreciate, I've gotten a response from JetBrains, and the licenses should be active again for one more year (2023). Any volunteer to handle the JetBrains relationship from the community side? Best! Daniel On 13/7/22 11:04, Thierry Carrez wrote: > Daniel Mellado wrote: >> Well... they expired... Is this a matter of just contacting JetBrains? >> I assume coolsvap won't be able to do this any longer, but IMHO this >> may just be handled by someone from the foundation/TC. >> >> mnaser maybe? xD >> >> CC'ing you just in case you'd like to step up, otherwise I'll try >> contacting JetBrains on my own ;) > > I'd recommend that someone relying on JetBrains handles the > relationship. This is why Swapnil was handling it before :) > From noonedeadpunk at gmail.com Wed Jul 13 14:58:42 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 13 Jul 2022 16:58:42 +0200 Subject: [OpenStack-Ansible][Neutron] Guide for Openstack installation with HA, OVN ? In-Reply-To: References: Message-ID: Hey, Well, the documentation and examples you're referring to mostly cover lxb (Linux bridge) or OVS deployments. We have more network scenarios described in the os_neutron documentation; the OVN-specific doc is here: https://docs.openstack.org/openstack-ansible-os_neutron/latest/app-ovn.html It also mentions commands for verifying the state of the deployment.
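To make the question about the group names concrete, the two layouts in openstack_user_config.yml look roughly like this (an illustrative sketch only: the host names and 172.29.236.x management addresses are made up, and the exact group-to-service mapping should be confirmed against the env.d/ definitions of the OSA branch in use):

```yaml
# Combined form (as in the singlebond example): one group carries both
# the neutron server/API side and the agents.
network_hosts:
  infra1:
    ip: 172.29.236.11

# Split form (as in the prod example): control plane and data plane are
# declared separately; small clusters often point both at the same hosts.
network-infra_hosts:
  infra1:
    ip: 172.29.236.11
network-agent_hosts:
  net1:
    ip: 172.29.236.21
```

After editing, the generated inventory can be checked (e.g. with Ansible's `--list-hosts`) to see which hosts each service group ends up on before actually running the playbooks.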
For the default OSA setup you can also check some diagrams here that might be helpful in understanding the deployment layout, and that actually answer the 'why' question: https://docs.openstack.org/openstack-ansible/latest/reference/architecture/container-networking.html https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html On Wed, 13 Jul 2022 at 15:01, Dave Hall wrote: > Hello. > > I would like to extend Park's question with a specific question of my own > that could be used as an example of the 'first-time deployer' experience. > > My question is about Neutron deployment as it is specified in the various > openstack_user_config.yml examples. > > openstack_user_config.yml.prod.example declares network-infra_hosts and > network-agent_hosts, whereas openstack_user_config.yml.singlebond.example > only declares network_hosts. > > For the novice trying to choose which example file to customize for our > deployment, the following concerns arise: > > - If I use network_hosts, does that deploy both the infra and the > agent on each network_host? > - If I use network-infra_hosts and network-agent_hosts, but give both > the same set of host names/IPs, will it deploy correctly and produce a > working Neutron service? > - If I successfully deploy a smaller cluster using network_hosts, and > then grow my cluster, what criteria indicate the point at which I need to > switch to infra_hosts and agent_hosts, and how many of each should I deploy? > > In reading through the Neutron documentation, I see some 'what' and some > 'how', but very little 'why'. Perhaps a section titled 'Practical > Deployment Considerations'? > > At this point I just want to know how to make Neutron deployment work > right and how to be able to tell, after deployment, that it is working > right. > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > > On Wed, Jul 13, 2022 at 1:46 AM ???
wrote: > >> Hello >> >> >> I've tried a minimal installation of OpenStack following the official >> documentation >> >> Now I want to install OpenStack with >> >> >> 1. High availability configuration - keystone, placement, neutron, >> glance, cinder, ... >> >> 2. Open Virtual Network for neutron driver >> >> >> But it's hard to find documentation online >> >> Could you provide some links for the material? >> >> >> Thank you! >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.castro.leon at cern.ch Wed Jul 13 15:05:26 2022 From: jose.castro.leon at cern.ch (Jose Castro Leon) Date: Wed, 13 Jul 2022 17:05:26 +0200 Subject: [infra] Tarballs are not accessible anymore Message-ID: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> Hi, I don't know if someone noticed already but the tarballs are not accessible anymore, is that expected? https://tarballs.opendev.org/openstack/ Cheers Jose Castro Leon CERN Cloud Infrastructure From noonedeadpunk at gmail.com Wed Jul 13 15:14:21 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 13 Jul 2022 17:14:21 +0200 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> Message-ID: The infra folks are already aware of the issue and are working on service recovery. The issue happened due to an incident at the hosting provider where the tarballs are located, so a bit of patience would be appreciated. On Wed, 13 Jul 2022 at 17:12, Jose Castro Leon : > > Hi, > I don't know if someone noticed already but the tarballs are not > accessible anymore, is that expected?
> > https://tarballs.opendev.org/openstack/ > > Cheers > > Jose Castro Leon > CERN Cloud Infrastructure > From cboylan at sapwetik.org Wed Jul 13 15:53:38 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 13 Jul 2022 08:53:38 -0700 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> Message-ID: <5d292115-bb05-4b58-9810-95d9b7d4e7b8@www.fastmail.com> On Wed, Jul 13, 2022, at 8:14 AM, Dmitriy Rabotyagov wrote: > Infra folks are already aware of the issue and working on the service > recovery. Issue happened due to an incident in a hosting provider > where tarballs are located. So a bit of patience would be appreciated. > > On Wed, 13 Jul 2022 at 17:12, Jose Castro Leon : >> >> Hi, >> I don't know if someone noticed already but the tarballs are not >> accessible anymore, is that expected? >> >> https://tarballs.opendev.org/openstack/ >> >> Cheers >> >> Jose Castro Leon >> CERN Cloud Infrastructure >> Tarballs appear to be accessible now. We are serving them from our openafs read-only replica on the second openafs fileserver. Note that any publishing to openafs is likely to fail until we get the read-write volumes online again. I would avoid making releases until given the all clear.
From zigo at debian.org Wed Jul 13 16:21:33 2022 From: zigo at debian.org (Thomas Goirand) Date: Wed, 13 Jul 2022 18:21:33 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: On 7/12/22 14:14, Stephen Finucane wrote: > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: >> Hi Stephen, >> >> I hope you don't mind I ping and up this thread. >> >> Thanks a lot for this work. Any more progress here? > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > folks to remove their own cap now [2] so that we can raise the version in upper > constraint. > > Stephen > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 Hi ! I see these 2 are now merged, so it's job (well) done, right? Cheers, Thomas Goirand (zigo) From jose.castro.leon at cern.ch Wed Jul 13 16:46:14 2022 From: jose.castro.leon at cern.ch (Jose Castro Leon) Date: Wed, 13 Jul 2022 18:46:14 +0200 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: <5d292115-bb05-4b58-9810-95d9b7d4e7b8@www.fastmail.com> Message-ID: <2addbbe3-12c1-4650-bb34-6af831e70513@email.android.com> An HTML attachment was scrubbed... 
URL: From andr.kurilin at gmail.com Wed Jul 13 17:53:10 2022 From: andr.kurilin at gmail.com (Andrey Kurilin) Date: Wed, 13 Jul 2022 20:53:10 +0300 Subject: CRITICAL rally [-] Unhandled error: KeyError: 'openstack' Edit | Openstack wallaby tripleo | centos 8 stream In-Reply-To: References: Message-ID: hi! Try to use `rally env create --from-sysenv --name existing` instead. On Tue, 12 Jul 2022 at 22:40, Swogat Pradhan : > Hi, > > I am using openstack wallaby in tripleo architecture. > I googled around and found I can use openstack-rally for testing the > openstack deployment and will be able to generate a report also. > So I tried installing openstack rally using "yum install openstack-rally", > "pip3 install openstack-rally" and finally cloned the git repo and ran > "python3 setup.py install", but no matter what I do I am getting the error > 'unhandled error : keyerror 'openstack'' > > (overcloud) [root at hkg2director ~]# rally deployment create --fromenv > --name=existing > > +--------------------------------------+----------------------------+----------+------------------+--------+ > | uuid | created_at | name | status | active | > > +--------------------------------------+----------------------------+----------+------------------+--------+ > | 484aae52-a690-4163-828b-16adcaa0d8fb | 2022-06-07T05:48:39.039296 | > existing | deploy->finished | | > > +--------------------------------------+----------------------------+----------+------------------+--------+ > Using deployment: 484aae52-a690-4163-828b-16adcaa0d8fb > (overcloud) [root at hkg2director ~]# rally deployment show > 484aae52-a690-4163-828b-16adcaa0d8fb > Command failed, please check log for more info > 2022-06-07 13:48:58.651 482053 CRITICAL rally [-] Unhandled error: > KeyError: 'openstack' > 2022-06-07 13:48:58.651 482053 ERROR rally Traceback (most recent call > last): > 2022-06-07 13:48:58.651 482053 ERROR rally File "/bin/rally", line 10, in > > 2022-06-07 13:48:58.651 482053 ERROR rally sys.exit(main())
> 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/main.py", line 40, in main > 2022-06-07 13:48:58.651 482053 ERROR rally return cliutils.run(sys.argv, > categories) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/cliutils.py", line 669, > in run > 2022-06-07 13:48:58.651 482053 ERROR rally ret = fn(*fn_args, **fn_kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/envutils.py", line 142, > in inner > 2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/plugins/__init__.py", line > 59, in wrapper > 2022-06-07 13:48:58.651 482053 ERROR rally return func(*args, **kwargs) > 2022-06-07 13:48:58.651 482053 ERROR rally File > "/usr/local/lib/python3.6/site-packages/rally/cli/commands/deployment.py", > line 205, in show > 2022-06-07 13:48:58.651 482053 ERROR rally creds = > deployment["credentials"]["openstack"][0] > 2022-06-07 13:48:58.651 482053 ERROR rally KeyError: 'openstack' > 2022-06-07 13:48:58.651 482053 ERROR rally > > can someone please help me in fixing this issue or give any suggestion on > which tool to use to test the openstack deployment and benchmark. > > Also is tempest available for wallaby?? i checked the opendev and github > repos last tags available are for victoria. > > > With regards, > > Swogat pradhan > -- Best regards, Andrey Kurilin. -------------- next part -------------- An HTML attachment was scrubbed... URL: From przemyslaw.basa at redge.com Wed Jul 13 14:06:33 2022 From: przemyslaw.basa at redge.com (Przemyslaw Basa) Date: Wed, 13 Jul 2022 16:06:33 +0200 Subject: [placement] running out of VCPU resource Message-ID: Hello I have a fresh Xena deployment. 
I'm able to spawn one VM per project per compute node; after that I get placement errors like Jul 13 15:32:27 g-os-controller-placement-container-a796c019 placement-api[1821]: 2022-07-13 15:32:27.683 1821 WARNING placement.objects.allocation [req-314a1352-457e-4166-8d8c-ef58e6d926ad 81c8738a7d4e46b3a0ae270eccf852c9 36e66b27e5144df5ba4a2270695fea34 - default default] Over capacity for VCPU on resource provider 16f620c0-8c6f-4984-8d58-e2c00d1b32da. Needed: 1, Used: 13318, Capacity: 256.0 The number in the Used field seemed strange; it looked to me more like the memory sum than the used VCPU count. root at os-install:~# openstack resource provider show 16f620c0-8c6f-4984-8d58-e2c00d1b32da --allocations -c allocations -f value {'b6da8a02-a96c-464e-a6c4-19c96c83dd44': {'resources': {'MEMORY_MB': 12288, 'VCPU': 4}}, '212798a3-6753-443d-8e7c-5c3be3f4ab54': {'resources': {'DISK_GB': 1, 'MEMORY_MB': 1024, 'VCPU': 1}}} I've done some digging in the sources and found the SQL (in placement/objects/allocation.py) that is supposed to generate these values.
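As a cross-check (a sketch only, with invented allocation rows whose per-class sums match the figures further down: 5 VCPU, 13312 MB of memory, 1 GB of disk), the same join shape replayed against an in-memory SQLite copy of the three tables returns the expected per-class usage, which suggests the query logic itself is sound:

```python
import sqlite3

# Minimal reproduction of the schema shape and the join used by
# placement/objects/allocation.py. The allocation rows are invented
# to match the per-class sums reported for resource provider 5.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE resource_providers (id INTEGER, uuid TEXT, generation INTEGER);
CREATE TABLE inventories (resource_provider_id INTEGER, resource_class_id INTEGER,
                          total INTEGER, reserved INTEGER, allocation_ratio REAL);
CREATE TABLE allocations (resource_provider_id INTEGER, resource_class_id INTEGER,
                          used INTEGER);
INSERT INTO resource_providers VALUES (5, '16f620c0-8c6f-4984-8d58-e2c00d1b32da', 37);
INSERT INTO inventories VALUES (5, 0, 128, 0, 2.0), (5, 1, 1031723, 2048, 1.0),
                               (5, 2, 901965, 2, 1.0);
INSERT INTO allocations VALUES (5, 0, 4), (5, 1, 12288),            -- first VM
                               (5, 0, 1), (5, 1, 1024), (5, 2, 1);  -- second VM
""")
rows = conn.execute("""
SELECT inv.resource_class_id, allocs.used
FROM resource_providers AS rp
JOIN inventories AS inv ON rp.id = inv.resource_provider_id
LEFT JOIN (SELECT resource_provider_id, resource_class_id, SUM(used) AS used
           FROM allocations
           WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5)
           GROUP BY resource_provider_id, resource_class_id) AS allocs
  ON inv.resource_provider_id = allocs.resource_provider_id
 AND inv.resource_class_id = allocs.resource_class_id
WHERE rp.id IN (5) AND inv.resource_class_id IN (0, 1, 2)
ORDER BY inv.resource_class_id
""").fetchall()
print(rows)  # [(0, 5), (1, 13312), (2, 1)] -- VCPU used is 5, as expected
```

Since the statement is logically correct, the MariaDB output below smells like a server-side problem; comparing query plans, or experimentally toggling `optimizer_switch='derived_merge=off'`, might be a reasonable next step (an untested suggestion, not verified against this MariaDB version).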
MariaDB [placement]> SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total, inv.reserved, inv.allocation_ratio, allocs.used FROM resource_providers AS rp JOIN inventories AS inv ON rp.id = inv.resource_provider_id LEFT JOIN ( SELECT resource_provider_id, resource_class_id, SUM(used) AS used FROM allocations WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5) GROUP BY resource_provider_id, resource_class_id ) AS allocs ON inv.resource_provider_id = allocs.resource_provider_id AND inv.resource_class_id = allocs.resource_class_id WHERE rp.id IN (5) AND inv.resource_class_id IN (0, 1, 2); +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | used | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 37 | 0 | 128 | 0 | 2 | 13318 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 37 | 1 | 1031723 | 2048 | 1 | NULL | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 37 | 2 | 901965 | 2 | 1 | NULL | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ Individual parts shows correct data, join messing up numbers: MariaDB [placement]> SELECT resource_provider_id, resource_class_id, SUM(used) AS used FROM allocations WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5) GROUP BY resource_provider_id, resource_class_id; +----------------------+-------------------+-------+ | resource_provider_id | resource_class_id | used | +----------------------+-------------------+-------+ | 5 | 0 | 5 | | 5 | 1 | 13312 | | 5 | 2 | 1 | +----------------------+-------------------+-------+ MariaDB [placement]> SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total, inv.reserved, 
inv.allocation_ratio, inv.resource_provider_id, inv.resource_class_id FROM resource_providers AS rp JOIN inventories AS inv ON rp.id = inv.resource_provider_id WHERE rp.id IN (5) AND inv.resource_class_id IN (0, 1, 2); +----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | resource_provider_id | resource_class_id | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 0 | 128 | 0 | 2 | 5 | 0 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 1 | 1031723 | 2048 | 1 | 5 | 1 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 38 | 2 | 901965 | 2 | 1 | 5 | 2 | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+----------------------+-------------------+ Query behaves differently when there is more than one resource_provider_id, it shows correct values then. Any tips how to fix this situation? I'm not brave enough to tinker with this query myself. Regards, Przemyslaw Basa From adam at adampankow.com Wed Jul 13 14:44:13 2022 From: adam at adampankow.com (Adam Pankow) Date: Wed, 13 Jul 2022 14:44:13 +0000 Subject: Changing Ubuntu Cloud Repo On Instance Message-ID: Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com" as their default repository. It seems that either this server does not efficiently route to an alternate mirror, or it itself is a mirror. This results in quite abysmal download speeds that I have seen. Would there be any downside to picking any other Ubuntu mirror, that is definitively more geographically close to me, but not explicitly labeled a Nova/Cloud mirror? i.e. 
would there be issues encountered, or features lost, by not using Ubuntu's designated Nova/Cloud repo? From vikarnatathe at gmail.com Wed Jul 13 16:30:26 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Wed, 13 Jul 2022 22:00:26 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Lokendra, Are you able to access all the tabs in the OpenStack dashboard without any error? If not, please retry generating the certificate. Also, share the openssl.cnf or server.cnf. On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour wrote: > Hi Team, > Any input on this case raised. > > Thanks, > Lokendra > > > On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Shephard/Swogat, >> I tried changing the setting as suggested and it looks like it has failed >> at step 4 with error: >> >> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >> tripleo_keystone_resources : Create identity public endpoint | undercloud | >> 0:24:47.736198 | 2.21s >> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >> TASK | Create identity internal endpoint >> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >> FATAL | Create identity internal endpoint | undercloud | error={"changed": >> false, "extra_data": {"data": null, "details": "The request you have made >> requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >> The request you have made requires authentication."} >> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >> >> >> Checking further the endpoint list: >> I see only one endpoint for keystone is gettin created. 
>> >> DeprecationWarning >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> | ID | Region | Service Name | Service >> Type | Enabled | Interface | URL | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity >> | True | admin | http://30.30.30.173:35357 | >> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity >> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity >> | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >> >> >> it looks like something related to the SSL, we have also verified that >> the GUI login screen shows that Certificates are applied. >> exploring more in logs, meanwhile any suggestions or know observation >> would be of great help. >> thanks again for the support. >> >> Best Regards, >> Lokendra >> >> >> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan >> wrote: >> >>> I had faced a similar kind of issue, for ip based setup you need to >>> specify the domain name as the ip that you are going to use, this error is >>> showing up because the ssl is ip based but the fqdns seems to be >>> undercloud.com or overcloud.example.com. >>> I think for undercloud you can change the undercloud.conf. >>> >>> And will it work if we specify clouddomain parameter to the IP address >>> for overcloud? because it seems he has not specified the clouddomain >>> parameter and overcloud.example.com is the default domain for >>> overcloud.example.com. 
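The SSL failure discussed in this thread comes down to standard TLS hostname verification: the client reaches the endpoint via an IPv6 literal, while the certificate only names an FQDN. A rough stdlib-only sketch of the rule follows (simplified: real verification distinguishes dNSName from iPAddress subjectAltName entries and handles wildcards, which this deliberately omits):

```python
import ipaddress

def cert_covers(san_entries, requested_host):
    """Simplified TLS hostname matching: the host taken from the URL must
    appear among the certificate's subjectAltName entries. An IP literal
    only ever matches an iPAddress SAN, never a CN or dNSName."""
    host = requested_host.strip("[]")
    try:
        ip = ipaddress.ip_address(host)  # raises ValueError for DNS names
        return str(ip) in san_entries
    except ValueError:
        return host.lower() in san_entries  # wildcard matching omitted

# Certificate issued for the FQDN only: the IPv6 endpoint fails verification.
print(cert_covers({"undercloud.com"}, "[fd00:fd00:fd00:9900::2ef]"))            # False
# Certificate carrying the IP itself in subjectAltName: verification passes.
print(cert_covers({"fd00:fd00:fd00:9900::2ef"}, "[fd00:fd00:fd00:9900::2ef]"))  # True
```

In practice this means an IP-based self-signed certificate needs each endpoint address as an IP-type subjectAltName entry (for example an `IP.1 = fd00:fd00:fd00:9900::2ef` line in the `alt_names` section of the openssl.cnf used to generate the cert, assuming that file layout), not just a matching CN.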
>>> >>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>> wrote: >>> >>>> What is the domain name you have specified in the undercloud.conf file? >>>> And what is the fqdn name used for the generation of the SSL cert? >>>> >>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Team, >>>>> We were trying to install overcloud with SSL enabled for which the UC >>>>> is installed, but OC install is getting failed at step 4: >>>>> >>>>> ERROR >>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": "MODULE >>>>> FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>> 600, in urlopen\n chunked=chunked)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>> in _make_request\n self._validate_conn(conn)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>> in _validate_conn\n conn.connect()\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>> server_hostname)\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>> (most recent call last):\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>> send\n timeout=timeout\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>> in get_discovery\n disc = Discover(session, url, >>>>> authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>> in __init__\n authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>> authenticated=authenticated)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>> request\n resp 
= send(**kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>> in _send_request\n raise >>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>> run_globals)\n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 185, in \n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 181, in main\n File >>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>> line 407, in __call__\n File >>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>> line 141, in run\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>> 517, in search_services\n services = self.list_services()\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>> File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>> max_version='3.latest')\n File >>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>> **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>> **kwargs)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 271, in get_endpoint_data\n service_catalog = >>>>> self.get_access(session).service_catalog\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 206, in get_auth_ref\n self._plugin = 
>>>>> self._do_create_plugin(session)\n File >>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>> retries exceeded with url: / (Caused by >>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:01.271914 | 2.47s >>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:01.273659 | 2.47s >>>>> >>>>> PLAY RECAP >>>>> ********************************************************************* >>>>> localhost : ok=0 changed=0 unreachable=0 >>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> Summary Information 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>> >>>>> >>>>> in the deploy.sh: >>>>> >>>>> openstack overcloud deploy --templates \ >>>>> -r /home/stack/templates/roles_data.yaml \ >>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>> --baremetal-deployment >>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>> --network-config \ >>>>> -e /home/stack/templates/environment.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>> \ >>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>> \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>> -e >>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>> >>>>> Addition lines as highlighted in yellow were passed with modifications: >>>>> tls-endpoints-public-ip.yaml: >>>>> Passed as is in the defaults. 
>>>>> enable-tls.yaml: >>>>> >>>>> # ******************************************************************* >>>>> # This file was created automatically by the sample environment >>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>> # Users are recommended to make changes to a copy of the file instead >>>>> # of the original, if any customizations are needed. >>>>> # ******************************************************************* >>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>> # description: | >>>>> # Use this environment to pass in certificates for SSL deployments. >>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>> # environments must also be used. >>>>> parameter_defaults: >>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>> # Type: boolean >>>>> HorizonSecureCookies: True >>>>> >>>>> # Specifies the default CA cert to use if TLS is used for services >>>>> in the public network. >>>>> # Type: string >>>>> PublicTLSCAFile: >>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>> >>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>> # Type: string >>>>> SSLRootCertificate: | >>>>> -----BEGIN CERTIFICATE----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END CERTIFICATE----- >>>>> >>>>> SSLCertificate: | >>>>> -----BEGIN CERTIFICATE----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END CERTIFICATE----- >>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>> # Type: string >>>>> SSLIntermediateCertificate: '' >>>>> >>>>> # The content of the SSL Key in PEM format. >>>>> # Type: string >>>>> SSLKey: | >>>>> -----BEGIN PRIVATE KEY----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END PRIVATE KEY----- >>>>> >>>>> # ****************************************************** >>>>> # Static parameters - these are values that must be >>>>> # included in the environment but should not be changed. 
>>>>> # ****************************************************** >>>>> # The filepath of the certificate as it will be stored in the >>>>> controller. >>>>> # Type: string >>>>> DeployedSSLCertificatePath: >>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>> >>>>> # ********************* >>>>> # End static parameters >>>>> # ********************* >>>>> >>>>> inject-trust-anchor.yaml >>>>> >>>>> # ******************************************************************* >>>>> # This file was created automatically by the sample environment >>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>> # Users are recommended to make changes to a copy of the file instead >>>>> # of the original, if any customizations are needed. >>>>> # ******************************************************************* >>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>> # description: | >>>>> # When using an SSL certificate signed by a CA that is not in the >>>>> default >>>>> # list of CAs, this environment allows adding a custom CA >>>>> certificate to >>>>> # the overcloud nodes. >>>>> parameter_defaults: >>>>> # The content of a CA's SSL certificate file in PEM format. This is >>>>> evaluated on the client side. >>>>> # Mandatory. This parameter must be set by the user. >>>>> # Type: string >>>>> SSLRootCertificate: | >>>>> -----BEGIN CERTIFICATE----- >>>>> ----*** CERTICATELINES TRIMMED ** >>>>> -----END CERTIFICATE----- >>>>> >>>>> resource_registry: >>>>> OS::TripleO::NodeTLSCAData: >>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>> >>>>> >>>>> >>>>> >>>>> The procedure to create such files was followed using: >>>>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>>>> >>>>> >>>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed >>>>> IP-based certificate, without DNS. * >>>>> >>>>> Any idea around this error would be of great help. 
>>>>> >>>>> -- >>>>> skype: lokendrarathour >>>>> >>>>> >>>>> >> >> >> > > -- > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lokendrarathour at gmail.com Wed Jul 13 17:32:42 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 13 Jul 2022 23:02:42 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Vikarna, Thanks for the inputs. I am not able to access any tabs in the GUI. [image: image.png] To re-state, we are failing at the time of deployment at step 4: PLAY [External deployment step 4] ********************************************** 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | TASK | External deployment step 4 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | OK | External deployment step 4 | undercloud -> localhost | result={ "changed": false, "msg": "Use --start-at-task 'External deployment step 4' to resume from this task" } [WARNING]: ('undercloud -> localhost', '525400ae-089b-870a-fab6-0000000000d7') missing from stats 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | INCLUDED | /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml | undercloud 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | TASK | Clean up legacy Cinder keystone catalog entries 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:24.204562 | 2.48s 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up
legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.122584 | 4.40s 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.124296 | 4.40s 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | TASK | Manage Keystone resources for OpenStack services 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | TIMING | Manage Keystone resources for OpenStack services | undercloud | 0:11:26.169842 | 0.03s 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | TASK | Gather variables for each operating system 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | TIMING | tripleo_keystone_resources : Gather variables for each operating system | undercloud | 0:11:26.253383 | 0.04s 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | TASK | Create Keystone Admin resources 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | TIMING | tripleo_keystone_resources : Create Keystone Admin resources | undercloud | 0:11:26.299608 | 0.03s 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | undercloud 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | TASK | Create default domain 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | OK | Create default domain | undercloud 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | TIMING | tripleo_keystone_resources : Create default domain | undercloud | 0:11:28.437360 | 2.09s 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | TASK | Create admin and service projects 2022-07-13 21:35:29.391295 | 
525400ae-089b-870a-fab6-0000000072ae | TIMING | tripleo_keystone_resources : Create admin and service projects | undercloud | 0:11:28.483468 | 0.03s 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | undercloud 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | TASK | Async creation of Keystone project 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=admin 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.238078 | 0.72s 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=service 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.586587 | 1.06s 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.587916 | 1.07s 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | TASK | Check Keystone project status 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | WAITING | Check Keystone project status | undercloud | 30 retries left 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=admin 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.260666 | 5.66s 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=service 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | TIMING | 
tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.494729 | 5.89s 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.498771 | 5.89s 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | TASK | Create admin role 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | OK | Create admin role | undercloud 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | TIMING | tripleo_keystone_resources : Create admin role | undercloud | 0:11:37.725949 | 2.20s 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | TASK | Create _member_ role 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | SKIPPED | Create _member_ role | undercloud 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | 0:11:37.783369 | 0.04s 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | TASK | Create admin user 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | CHANGED | Create admin user | undercloud 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | TIMING | tripleo_keystone_resources : Create admin user | undercloud | 0:11:41.145472 | 3.34s 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | TASK | Assign admin role to admin project for admin user 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | OK | Assign admin role to admin project for admin user | undercloud 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | TIMING | tripleo_keystone_resources : Assign admin role to admin project for admin user | undercloud | 0:11:44.288848 | 3.13s 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | TASK | Assign _member_ role to admin project for admin user 2022-07-13 21:35:45.252884 | 
525400ae-089b-870a-fab6-0000000072b3 | SKIPPED | Assign _member_ role to admin project for admin user | undercloud 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | TIMING | tripleo_keystone_resources : Assign _member_ role to admin project for admin user | undercloud | 0:11:44.346479 | 0.04s 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | TASK | Create identity service 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | OK | Create identity service | undercloud 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | TIMING | tripleo_keystone_resources : Create identity service | undercloud | 0:11:46.022362 | 1.66s 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | TASK | Create identity public endpoint 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | OK | Create identity public endpoint | undercloud 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:11:48.233349 | 2.19s 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | TASK | Create identity internal endpoint 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."} 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | TIMING | tripleo_keystone_resources : Create identity internal endpoint | undercloud | 0:11:50.660654 | 2.41s PLAY RECAP 
********************************************************************* localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 overcloud-controller-0 : ok=437 changed=103 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-1 : ok=435 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-controller-2 : ok=432 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 undercloud : ok=39 changed=7 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0 Also : (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf [req] default_bits = 2048 prompt = no default_md = sha256 distinguished_name = dn [dn] C=IN ST=UTTAR PRADESH L=NOIDA O=HSC OU=HSC emailAddress=demo at demo.com v3.ext: (undercloud) [stack at undercloud oc-cert]$ cat v3.ext authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] IP.1=fd00:fd00:fd00:9900::81 Using these files we create other certificates. Please check and let me know in case we need anything else. On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe wrote: > Hi Lokendra, > > Are you able to access all the tabs in the OpenStack dashboard without any > error? If not, please retry generating the certificate. Also, share the > openssl.cnf or server.cnf. > > On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour > wrote: > >> Hi Team, >> Any input on this case raised. 
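[Editorial aside: the server.csr.cnf and v3.ext files shown above are typically consumed by openssl roughly as sketched below. This is a hedged sketch, not the exact commands from the thread: the CA subject, output file names, and validity period are assumptions, following the TripleO "Deploying with SSL" flow of a local self-signed CA signing an IP-SAN endpoint certificate. The key point is that subjectAltName must carry the exact IPv6 public VIP used in the endpoint URLs.]

```shell
set -e

# The two files as posted in the thread (email address de-obfuscated).
cat > server.csr.cnf <<'EOF'
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
C=IN
ST=UTTAR PRADESH
L=NOIDA
O=HSC
OU=HSC
emailAddress=demo@demo.com
EOF

cat > v3.ext <<'EOF'
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1=fd00:fd00:fd00:9900::81
EOF

# Self-signed CA acting as the trust anchor (subject/CN are illustrative);
# this is the certificate injected via inject-trust-anchor.yaml.
openssl req -new -x509 -nodes -newkey rsa:2048 -keyout ca.key -out ca.crt \
    -days 365 -subj '/C=IN/O=HSC/CN=overcloud-ca'

# Server key + CSR from server.csr.cnf, then sign with the CA,
# attaching the IP-based subjectAltName from v3.ext.
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -config server.csr.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -days 365 -sha256 -extfile v3.ext -out server.crt

# Verify the SAN carries the IPv6 public VIP used by the endpoints.
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
```

server.crt and server.key would then be pasted into SSLCertificate/SSLKey in enable-tls.yaml, and ca.crt into SSLRootCertificate in inject-trust-anchor.yaml.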
>> >> Thanks, >> Lokendra >> >> >> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Shephard/Swogat, >>> I tried changing the setting as suggested and it looks like it has >>> failed at step 4 with error: >>> >>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>> 0:24:47.736198 | 2.21s >>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>> TASK | Create identity internal endpoint >>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>> false, "extra_data": {"data": null, "details": "The request you have made >>> requires authentication.", "response": >>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>> The request you have made requires authentication."} >>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>> >>> >>> Checking further the endpoint list: >>> I see only one endpoint for keystone is gettin created. 
>>> >>> DeprecationWarning >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> | ID | Region | Service Name | Service >>> Type | Enabled | Interface | URL | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity >>> | True | admin | http://30.30.30.173:35357 | >>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity >>> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity >>> | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>> >>> >>> it looks like something related to the SSL, we have also verified that >>> the GUI login screen shows that Certificates are applied. >>> exploring more in logs, meanwhile any suggestions or know observation >>> would be of great help. >>> thanks again for the support. >>> >>> Best Regards, >>> Lokendra >>> >>> >>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>> swogatpradhan22 at gmail.com> wrote: >>> >>>> I had faced a similar kind of issue, for ip based setup you need to >>>> specify the domain name as the ip that you are going to use, this error is >>>> showing up because the ssl is ip based but the fqdns seems to be >>>> undercloud.com or overcloud.example.com. >>>> I think for undercloud you can change the undercloud.conf. >>>> >>>> And will it work if we specify clouddomain parameter to the IP address >>>> for overcloud? because it seems he has not specified the clouddomain >>>> parameter and overcloud.example.com is the default domain for >>>> overcloud.example.com. 
>>>> >>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>>> wrote: >>>> >>>>> What is the domain name you have specified in the undercloud.conf file? >>>>> And what is the fqdn name used for the generation of the SSL cert? >>>>> >>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Team, >>>>>> We were trying to install overcloud with SSL enabled for which the UC >>>>>> is installed, but OC install is getting failed at step 4: >>>>>> >>>>>> ERROR >>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>> in _validate_conn\n conn.connect()\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>> server_hostname)\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>> (most recent call last):\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>> send\n timeout=timeout\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>> in get_discovery\n disc = Discover(session, url, >>>>>> authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>> in __init__\n authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>> authenticated=authenticated)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 
931, in >>>>>> request\n resp = send(**kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>> in _send_request\n raise >>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>> run_globals)\n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 185, in \n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 181, in main\n File >>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>> line 407, in __call__\n File >>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>> line 141, in run\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>> File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>> max_version='3.latest')\n File >>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>> **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>> **kwargs)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>> self.get_access(session).service_catalog\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 206, in 
get_auth_ref\n self._plugin = >>>>>> self._do_create_plugin(session)\n File >>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>> retries exceeded with url: / (Caused by >>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", "msg": >>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:01.271914 | 2.47s >>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:01.273659 | 2.47s >>>>>> >>>>>> PLAY RECAP >>>>>> ********************************************************************* >>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>> 2022-07-08 17:03:23.647270 | 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>> >>>>>> >>>>>> in the deploy.sh: >>>>>> >>>>>> openstack overcloud deploy --templates \ >>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>> --baremetal-deployment >>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>> --network-config \ >>>>>> -e /home/stack/templates/environment.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>> \ >>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>> \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>> -e >>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>> >>>>>> Addition lines as highlighted in yellow were passed with >>>>>> modifications: >>>>>> tls-endpoints-public-ip.yaml: >>>>>> Passed as is in the defaults. 
>>>>>> enable-tls.yaml: >>>>>> >>>>>> # ******************************************************************* >>>>>> # This file was created automatically by the sample environment >>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>> # of the original, if any customizations are needed. >>>>>> # ******************************************************************* >>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>> # description: | >>>>>> # Use this environment to pass in certificates for SSL deployments. >>>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>>> # environments must also be used. >>>>>> parameter_defaults: >>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>> # Type: boolean >>>>>> HorizonSecureCookies: True >>>>>> >>>>>> # Specifies the default CA cert to use if TLS is used for services >>>>>> in the public network. >>>>>> # Type: string >>>>>> PublicTLSCAFile: >>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>> >>>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>>> # Type: string >>>>>> SSLRootCertificate: | >>>>>> -----BEGIN CERTIFICATE----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END CERTIFICATE----- >>>>>> >>>>>> SSLCertificate: | >>>>>> -----BEGIN CERTIFICATE----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END CERTIFICATE----- >>>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>>> # Type: string >>>>>> SSLIntermediateCertificate: '' >>>>>> >>>>>> # The content of the SSL Key in PEM format. 
>>>>>> # Type: string >>>>>> SSLKey: | >>>>>> -----BEGIN PRIVATE KEY----- >>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>> -----END PRIVATE KEY----- >>>>>> >>>>>> # ****************************************************** >>>>>> # Static parameters - these are values that must be >>>>>> # included in the environment but should not be changed. >>>>>> # ****************************************************** >>>>>> # The filepath of the certificate as it will be stored in the >>>>>> controller. >>>>>> # Type: string >>>>>> DeployedSSLCertificatePath: >>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>> >>>>>> # ********************* >>>>>> # End static parameters >>>>>> # ********************* >>>>>> >>>>>> inject-trust-anchor.yaml >>>>>> >>>>>> # ******************************************************************* >>>>>> # This file was created automatically by the sample environment >>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>> # of the original, if any customizations are needed. >>>>>> # ******************************************************************* >>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>> # description: | >>>>>> # When using an SSL certificate signed by a CA that is not in the >>>>>> default >>>>>> # list of CAs, this environment allows adding a custom CA >>>>>> certificate to >>>>>> # the overcloud nodes. >>>>>> parameter_defaults: >>>>>> # The content of a CA's SSL certificate file in PEM format. This is >>>>>> evaluated on the client side. >>>>>> # Mandatory. This parameter must be set by the user. 
>>>>>> # Type: string >>>>>> SSLRootCertificate: | >>>>>> -----BEGIN CERTIFICATE----- >>>>>> ----*** CERTIFICATE LINES TRIMMED ** >>>>>> -----END CERTIFICATE----- >>>>>> >>>>>> resource_registry: >>>>>> OS::TripleO::NodeTLSCAData: >>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>> >>>>>> >>>>>> >>>>>> >>>>>> These files were created following the procedure in: >>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) >>>>>> >>>>>> >>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed, >>>>>> IP-based certificate, without DNS*. >>>>>> >>>>>> Any idea around this error would be of great help. >>>>>> >>>>>> -- >>>>>> skype: lokendrarathour >>>>>> >>>>>> >>>>>> >>> >>> >>> >> >> -- >> > -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From cboylan at sapwetik.org Wed Jul 13 19:41:37 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 13 Jul 2022 12:41:37 -0700 Subject: Changing Ubuntu Cloud Repo On Instance In-Reply-To: References: Message-ID: <493a1c2c-0aff-4704-ba74-7fcfdc2d900f@www.fastmail.com> On Wed, Jul 13, 2022, at 7:44 AM, Adam Pankow wrote: > Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com" > as their default repository. It seems that either this server does not > efficiently route to an alternate mirror, or it itself is a mirror. > This results in quite abysmal download speeds that I have seen. Would > there be any downside to picking any other Ubuntu mirror, that is > definitively more geographically close to me, but not explicitly > labeled a Nova/Cloud mirror? i.e. would there be issues encountered, or > features lost, by not using Ubuntu's designated Nova/Cloud repo? The setting of the repo location is likely to be baked into whatever image you grabbed. 
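For what it's worth, a hedged sketch of swapping that baked-in mirror on a running instance (the sed substitution is the whole trick; "mirror.example.net" is a placeholder for whichever mirror you pick, and the sample file stands in for /etc/apt/sources.list, which is what you would actually edit):

```shell
# Build a sample that mimics the stock Ubuntu cloud-image sources.list.
cat > sources.list.sample <<'EOF'
deb http://nova.clouds.archive.ubuntu.com/ubuntu/ focal main restricted universe
deb http://nova.clouds.archive.ubuntu.com/ubuntu/ focal-updates main restricted universe
EOF
# Swap the cloud mirror hostname for a closer one of your choosing.
# On a real instance the target would be /etc/apt/sources.list.
sed -i 's|nova\.clouds\.archive\.ubuntu\.com|mirror.example.net|g' sources.list.sample
cat sources.list.sample
```

cloud-init can also be told the mirror at first boot via user-data, which avoids editing images by hand; either way, follow with `apt-get update` so the new mirror takes effect.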
Running some DNS queries I suspect that *.clouds.archive.ubuntu.com is used to track requests from various cloud providers to provide insight into things like popularity of Ubuntu on different clouds. I wouldn't expect there to be any problems using a different mirror as the packages should all be kept roughly in sync. The biggest thing to keep in mind is probably reliability and adjusting if necessary. From gael.therond at bitswalk.com Wed Jul 13 20:06:57 2022 From: gael.therond at bitswalk.com (=?UTF-8?Q?Ga=C3=ABl_THEROND?=) Date: Wed, 13 Jul 2022 22:06:57 +0200 Subject: [IRONIC] - Various questions around network features. In-Reply-To: References: Message-ID: Hi Julia! Thanks a lot for those explanations :-) Most of it confirms my understanding; I now have a clearer point of view that will let me select our test users for the service. Regarding Aruba switches, those are pretty cool, even if, as you pointed out, this feature can actually lead you to some weird if not dangerous situations x) OK, noted about the Horizon issue; it can be a little bit tricky for our end users to understand, tbh, as they will expect the IP selected by neutron and displayed on the dashboard to be the one used by the node, even on a full flat network such as the provisioning network, but for now we will deal with it by explaining it to them. Regarding my point 2, yeah, I knew the purpose of direct deploy, I just spelled it out, I don't know why; my point was rather: at first, when I configured our ironic deployment, I hit a weird issue where, unless I set the pxe_filter option to noop instead of dnsmasq, deploying anything fails because the conductor doesn't correctly erase the "ignore" part of the string in the dhcp_host_filter file of dnsmasq. If I set this filter to noop, then obviously I don't need neutron to provide the ironic-provision-network anymore, as anyone plugged into ports with my VLAN101 set as the native VLAN will be able to get an IP from the PXE dnsmasq. 
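For reference, a rough sketch of the mechanism behind that dnsmasq filter (the directory and MAC address here are illustrative, not the real inspector paths): dnsmasq is pointed at a dhcp-hostsdir directory holding one dhcp-host file per known MAC, where a ",ignore" suffix tells dnsmasq not to answer that MAC, and the filter is expected to rewrite the entry without it once the node should be allowed to PXE-boot:

```shell
# Toy model of a dnsmasq dhcp-hostsdir managed by a PXE filter.
filter_dir=dhcp-hostsdir-demo
mkdir -p "$filter_dir"
# Blocked state: dnsmasq will refuse to answer DHCP for this MAC.
echo 'aa:bb:cc:dd:ee:ff,ignore' > "$filter_dir/aa-bb-cc-dd-ee-ff"
# Allowed state: the entry is rewritten without ",ignore".
echo 'aa:bb:cc:dd:ee:ff' > "$filter_dir/aa-bb-cc-dd-ee-ff"
cat "$filter_dir/aa-bb-cc-dd-ee-ff"
```

The failure described above would correspond to the ",ignore" suffix never being removed, so dnsmasq keeps refusing the node even though the conductor expects it to boot.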
I'm still having a hard time mapping why ironic needs both a dedicated PXE dnsmasq for introspection and then can use the neutron dnsmasq DHCP once you want to provision a host. Is that because neutron (kinda) lacks DHCP options support on its managed subnets? All in all, the multi-tenancy networking requirements are much clearer to me now, thanks to you! Le mar. 12 juil. 2022 à 00:13, Julia Kreger a écrit : > Greetings! Hopefully these answers help! > > On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND > wrote: > > > > Hi everyone, I'm currently working back again with Ironic and it's > amazing! > > > > However, during our demo session to our users a few questions arose. > > > > We're currently deploying nodes using a private VLAN that can't be > reached from outside of the OpenStack network fabric (VLAN 101 - > 192.168.101.0/24) and everything is fine with this provisioning network, > as our ToR switches all know about it and the other control plane VLANs, such as > the internal APIs VLAN, which allows the IPA ramdisk to correctly and > seamlessly contact the internal IRONIC APIs. > > Nice, I've had my lab configured like this in the past. > > > > > > (When you declare a port as a trunk allowing all VLANs on an Aruba switch, > it seems it automatically analyses the CIDR your host tries to reach from your > VLAN and routes everything to the corresponding VLAN that matches the > destination IP). > > > > Ugh, that... could be fun :\ > > > So now, I still have a few tiny issues: > > > > 1°/- When I spawn a nova instance on an ironic host that is set to use a > flat network (from Horizon as a user), why does the nova wizard still ask > for a neutron network if it's not set on the provisioned host by the IPA > ramdisk right after the whole disk image copy? Is that some missing > development in Horizon or did I miss something? > > Horizon just is not aware... 
and you can actually have entirely > different DHCP pools on the same flat network, so that neutron network > is intended for the instance's addressing to utilize. > > Ironic just asks for an allocation from a provisioning network, > which can and *should* be a different network than the tenant network. > > > > > 2°/- In a flat network layout deployment using the direct deploy scenario > for images, am I still supposed to create an ironic provisioning network in > neutron? > > > > From my understanding (and actually my tests) we don't, as any host > booting on the provisioning VLAN will pick up an IP and initiate the bootp > sequence, as the dnsmasq is just set up to do that and provide the IPA ramdisk, > but it's a bit confusing, as much documentation explicitly requires this > network to exist in neutron. > > Yes. Direct is shorthand for "Copy it over the network and write it > directly to disk". It still needs an IP address on the provisioning > network (think, subnet instead of distinct L2 broadcast domain). > > When you ask nova for an instance, it sends over what the machine > should use as a "VIF" (neutron port), however that is never actually > bound configuration-wise into neutron until after the deployment > completes. > > It *could* be that your neutron config is such that it just works > anyway, but I suspect upstream contributors would be a bit confused if > you reported an issue and had no provisioning network defined. > > > > > 3°/- My whole OpenStack network setup is using Open vSwitch and vxlan > tunnels on top of a spine/leaf architecture using Aruba CX8360 switches > (for both spine and leaf); am I required to use either the > networking-generic-switch driver or a vendor neutron driver? If so, > how will this driver be able to instruct the switch to assign the > host port the correct Open vSwitch VLAN ID and register the correct vxlan with > Open vSwitch for this port? 
I mean, ok neutron know the vxlan and > openvswitch the tunnel vlan id/interface but what is the glue of all that? > > If your happy with flat networks, no. > > If you want tenant isolation networking wise, yes. > > NGS and Baremetal Port aware/enabled Neutron ML2 drivers take the port > level local link configuration (well, Ironic includes the port > information (local link connection, physical network, and some other > details) to Neutron with the port binding request. > > Those ML2 drivers, then either request the switch configuration be > updated, or take locally configured credentials to modify port > configuration in Neutron, and logs into the switch to toggle the > access port's configuration which the baremetal node is attached to. > > Generally, they are not vxlan network aware, and at least with > networking-generic-switch vlan ID numbers are expected and allocated > via neutron. > > Sort of like the software is logging into the switch and running > something along the lines of "conf t;int gi0/21;switchport mode > access;switchport access vlan 391 ; wri mem" > > > > > 4?/- I?ve successfully used openstack cloud oriented CentOS and debian > images or snapshot of VMs to provision my hosts, this is an awesome > feature, but I?m wondering if there is a way to let those host cloud-init > instance to request for neutron metadata endpoint? > > > > Generally yes, you *can* use network attached metadata with neutron > *as long as* your switches know to direct the traffic for the metadata > IP to the Neutron metadata service(s). > > We know of operators who ahve done it without issues, but often that > additional switch configured route is not always the best hting. > Generally we recommend enabling and using configuration drives, so the > metadata is able to be picked up by cloud-init. 
> > > > I was a bit surprised about the ironic networking part as I was > expecting the IPA ramdisk to at least be able to set the host os with the > appropriate network configuration file for whole disk images that do not > use encryption by injecting those information from the neutron api into the > host disk while mounted (right after the image dd). > > > > IPA has no knowledge of how to modify the host OS in this regard. > modifying the host OS has generally been something the ironic > community has avoided since it is not exactly cloudy to have to do so. > Generally most clouds are running with DHCP, so as long as that is > enabled and configured, things should generally "just work". > > Hopefully that provides a little more context. Nothing prevents you > from writing your own hardware manager that does exactly this, for > what it is worth. > > > All in all I really like the ironic approach of the baremetal > provisioning process, and I?m pretty sure that I?m just missing a bit of > understanding of the networking part but it?s really the most confusing > part of it to me as I feel like if there is a missing link in between > neutron and the host HW or the switches. > > > > Thanks! It is definitely one of the more complex parts given there are > many moving parts, and everyone wants (or needs) to have their > networking configured just a little differently. > > Hopefully I've kind of put some of the details out there, if you need > more information, please feel free to reach out, and also please feel > free to ask questions in #openstack-ironic on irc.oftc.net. > > > Thanks a lot anyone that will take time to explain me this :-) > > :) > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From fungi at yuggoth.org Wed Jul 13 20:35:03 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 13 Jul 2022 20:35:03 +0000 Subject: [infra] Tarballs are not accessible anymore In-Reply-To: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> References: <02e43370-23bf-cc68-9d12-7dfcfb803c3d@cern.ch> Message-ID: <20220713203503.4dzsfmykc4inbwht@yuggoth.org> On 2022-07-13 17:05:26 +0200 (+0200), Jose Castro Leon wrote: > I don't know if someone noticed already but the tarballs are not accessible > anymore, is that expected? [...] Just to wrap this up (hopefully), the primary server was brought back into service by the provider at 18:08 UTC and we don't see evidence to indicate any of our volumes were in a degraded (non-redundant) state after that time. Writes should be back also as of roughly 19:20 UTC, so as far as I'm aware we're in the clear for the past hour or so. If you notice anything out of the ordinary (again) please do let us know! We always appreciate the info, since we don't really operate like a traditional service provider (our infrastructure collaborators don't "carry pagers" as it were). -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rlandy at redhat.com Wed Jul 13 20:39:34 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Wed, 13 Jul 2022 16:39:34 -0400 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs Message-ID: Hello All, We have a new gate blocker impacting tripleo-ci-centos-9-scenario010-standalone and ovn-provider master jobs. The deployment fails and the error in the nova_libvirt_init_secret/stdout.log log shows: Error: /etc/ceph/ceph.conf contained an empty fsid definition Check your ceph configuration Details of the failure are in https://bugs.launchpad.net/tripleo/+bug/1981634. 
Please hold rechecks until we can touch base with the Ceph team to discuss. Thank you, TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Jul 13 21:04:37 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 13 Jul 2022 14:04:37 -0700 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: References: Message-ID: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: > Hello All, > > We have a new gate blocker impacting > tripleo-ci-centos-9-scenario010-standalone and ovn-provider master > jobs. The deployment fails and the error in the > nova_libvirt_init_secret/stdout.log log shows: > > Error: /etc/ceph/ceph.conf contained an empty fsid definition > Check your ceph configuration This message appears to originate from: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 The fsid is in fact unset/empty in the file: https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf Seems that this template, https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, should set the value based on the tripleo_ceph_client_fsid variable value. > > Details of the failure are in https://bugs.launchpad.net/tripleo/+bug/1981634. > > Please hold rechecks until we can touch base with the Ceph team to discuss. 
> > Thank you, > > TripleO CI From gfidente at redhat.com Wed Jul 13 21:31:41 2022 From: gfidente at redhat.com (Giulio Fidente) Date: Wed, 13 Jul 2022 23:31:41 +0200 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> Message-ID: <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> On 7/13/22 23:04, Clark Boylan wrote: > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: >> Hello All, >> >> We have a new gate blocker impacting >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master >> jobs. The deployment fails and the error in the >> nova_libvirt_init_secret/stdout.log log shows: >> >> Error: /etc/ceph/ceph.conf contained an empty fsid definition >> Check your ceph configuration > > This message appears to originate from: https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 > > The fsid is in fact unset/empty in the file: https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf > > Seems that this template, https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, should set the value based on the tripleo_ceph_client_fsid variable value. hi Clark, that's my understanding as well that variable appears to be set by the client role [1] as expected; I am unclear why it doesn't show up in the template do we know if this is happening only for scenario010 or for the other scenarios deploying ceph? (001 and 004) 1. 
https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml -- Giulio Fidente GPG KEY: 08D733BA From melwittt at gmail.com Wed Jul 13 23:16:58 2022 From: melwittt at gmail.com (melanie witt) Date: Wed, 13 Jul 2022 16:16:58 -0700 Subject: [nova][ops] seeking input about local/ephemeral disk encryption feature naming Message-ID: Hi everyone, A potential issue regarding naming has come up during review of the ephemeral storage encryption feature [1][2] patch series [3] and we're looking for input before moving forward with any naming/terminology changes across the specs and the entire patch series. The concern that has been raised is around use of the term "ephemeral" for the name of this feature including traits, extra specs, and image properties [4]. For context, the objective of this feature is to provide users with the ability to specify that all local disks for the instance be encrypted. This includes the root disk and any other local disks. The initial concern is around use of the word "ephemeral" for the root disk. My general interpretation of the word "ephemeral" for storage in nova has been that it means attached storage that only persists for the lifetime of the instance and is destroyed if and when the instance is destroyed. This is in contrast to attached cinder volumes which can persist after instance deletion. But should "ephemeral" ever be used to describe a root disk? Is it incorrect and/or ambiguous to refer to it as such? This is part of what is being discussed in [4]. During discussion, I also realized there is a separate gap in the above interpretation of "ephemeral" in nova. 
When cinder volumes are attached to an instance, their persistence after the instance is deleted depends on whether the 'delete_on_termination' attribute is set to true in the request payload when the instance is created [5] or when attaching a volume to the instance [6] or updating a volume attached to the instance [7]. This means that in the currently proposed patches, if a user specifies hw:ephemeral_encryption in the extra_specs, for example, and they also have a volume with delete_on_termination=True attached, only the root disk will be encrypted via the extra spec -- the volume would not be encrypted. Encryption of the volume has to be requested in cinder. Could this mislead a user into thinking both the root disk and cinder volume are encrypted when only the root disk is? Because of the above issues, we are considering whether we should change the terminology used in this feature at this stage. Some ideas include "local encryption", "local disk encryption", "disk encryption". IMHO "disk_encryption" is ambiguous in its own way because an attached cinder volume also has a disk. Changing the naming will be a non-trivial amount of work, so we wanted to get additional input before going ahead with such a change. Another thing noted in a comment on another patch in the series [8] is that the os-traits for this feature have already been merged [9]. If we decide to change the naming, should we go ahead and use these traits as-is and have them not match the naming in nova or should we deprecate them and add new traits that match the new name and use those? I hope this makes sense and your input would be much appreciated. 
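For concreteness, this is how the naming under discussion would surface to operators (a configuration fragment for illustration only: the extra spec name is the one cited above, the image property follows nova's usual hw_* convention, and the flavor/image names are placeholders):

```shell
# Proposed flavor extra spec, as currently named in the series:
openstack flavor set m1.encrypted --property hw:ephemeral_encryption=true
# Matching image property form (hypothetical here, shown for symmetry):
openstack image set my-image --property hw_ephemeral_encryption=true
```

Under the current proposal this would cover the root and other local disks only; an attached cinder volume, even one with delete_on_termination=True, would still need its encryption requested through cinder.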
Cheers, -melwitt [1] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-storage-encryption.html [2] https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-encryption-libvirt.html [3] https://review.opendev.org/q/topic:specs%252Fyoga%252Fapproved%252Fephemeral-encryption-libvirt [4] https://review.opendev.org/c/openstack/nova/+/764486/10/nova/api/validation/extra_specs/hw.py#516 [5] https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server [6] https://docs.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail [7] https://docs.openstack.org/api-ref/compute/?expanded=update-a-volume-attachment-detail [8] https://review.opendev.org/c/openstack/nova/+/760456/10/nova/scheduler/request_filter.py#425 [9] https://github.com/openstack/os-traits/blob/f64d50e4dd2f21558fb73dd4b59cd1d4b121b707/os_traits/compute/ephemeral.py From gmann at ghanshyammann.com Thu Jul 14 04:17:15 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Wed, 13 Jul 2022 23:17:15 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 14 July 2022 at 1500 UTC In-Reply-To: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com> References: <181ee8c6a9e.b95207cb391423.5781004865867855521@ghanshyammann.com> Message-ID: <181faecda9d.df712f3146717.261372769673175857@ghanshyammann.com> Hello Everyone, Below is the agenda for today's TC IRC meeting, scheduled at 1500 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for tomorrow's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * RBAC feedback in ops meetup ** https://etherpad.opendev.org/p/rbac-zed-ptg#L171 ** https://review.opendev.org/c/openstack/governance/+/847418 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Mon, 11 Jul 2022 13:36:28 -0500 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 14 July 2022, at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, 13 July at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > > From ykarel at redhat.com Thu Jul 14 05:53:16 2022 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 14 Jul 2022 11:23:16 +0530 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> Message-ID: On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente wrote: > On 7/13/22 23:04, Clark Boylan wrote: > > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: > >> Hello All, > >> > >> We have a new gate blocker impacting > >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master > >> jobs. 
The deployment fails and the error in the > >> nova_libvirt_init_secret/stdout.log log shows: > >> > >> Error: /etc/ceph/ceph.conf contained an empty fsid definition > >> Check your ceph configuration > > > > This message appears to originate from: > https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 > > > > The fsid is in fact unset/empty in the file: > https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf > > > > Seems that this template, > https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, > should set the value based on the tripleo_ceph_client_fsid variable value. > > hi Clark, that's my understanding as well > > that variable appears to be set by the client role [1] as expected; I am > unclear why it doesn't show up in the template > > do we know if this is happening only for scenario010 or for the other > scenarios deploying ceph? (001 and 004) > > Scenario001 and 004 are green[2]. [2] https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 1. > > https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml > > -- > Giulio Fidente > GPG KEY: 08D733BA > > > Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ykarel at redhat.com Thu Jul 14 06:22:23 2022 From: ykarel at redhat.com (Yatin Karel) Date: Thu, 14 Jul 2022 11:52:23 +0530 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> Message-ID: On Thu, Jul 14, 2022 at 11:23 AM Yatin Karel wrote: > > > On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente > wrote: > >> On 7/13/22 23:04, Clark Boylan wrote: >> > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: >> >> Hello All, >> >> >> >> We have a new gate blocker impacting >> >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master >> >> jobs. The deployment fails and the error in the >> >> nova_libvirt_init_secret/stdout.log log shows: >> >> >> >> Error: /etc/ceph/ceph.conf contained an empty fsid definition >> >> Check your ceph configuration >> > >> > This message appears to originate from: >> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 >> > >> > The fsid is in fact unset/empty in the file: >> https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf >> > >> > Seems that this template, >> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, >> should set the value based on the tripleo_ceph_client_fsid variable value. >> >> hi Clark, that's my understanding as well >> >> that variable appears to be set by the client role [1] as expected; I am >> unclear why it doesn't show up in the template >> >> do we know if this is happening only for scenario010 or for the other >> scenarios deploying ceph? (001 and 004) >> >> Scenario001 and 004 are green[2]. 
> > [2] > https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 > > Sorry, I misread: scenario004 is also impacted; it also failed in the test patch of the original patch https://review.opendev.org/c/openstack/tripleo-heat-templates/+/849580/. For now it's being reverted to unblock the gate: https://review.opendev.org/c/openstack/tripleo-ansible/+/849732 > 1. >> >> https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml >> >> -- >> Giulio Fidente >> GPG KEY: 08D733BA >> >> >> Regards > Yatin Karel > Regards Yatin Karel -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Thu Jul 14 06:38:00 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 14 Jul 2022 12:08:00 +0530 Subject: [cinder] Extending Driver Merge Deadline Message-ID: Hello Argonauts, As discussed in yesterday's Cinder meeting[1], given the number of drivers proposed for the Zed cycle (8 new drivers[2]) and the limited review bandwidth (cores are working on development tasks), we've decided to extend the driver merge deadline from R-12 to R-10, i.e. from 15th July to 29th July. R-10 is also the Manila driver merge deadline[3]. [1] https://meetings.opendev.org/meetings/cinder/2022/cinder.2022-07-13-14.00.log.html#l-129 [2] https://etherpad.opendev.org/p/cinder-zed-new-drivers [3] https://releases.openstack.org/zed/schedule.html#z-manila-new-driver-deadline Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephenfin at redhat.com Thu Jul 14 09:02:37 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jul 2022 10:02:37 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > On 7/12/22 14:14, Stephen Finucane wrote: > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > Hi Stephen, > > > > > > I hope you don't mind I ping and up this thread. > > > > > > Thanks a lot for this work. Any more progress here? > > > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > > folks to remove their own cap now [2] so that we can raise the version in upper > > constraint. > > > > Stephen > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > Hi ! > > I see these 2 are now merged, so it's job (well) done, right? I'd assume so, yes. We just need to wait for the machinery to do its job and bump the upper constraint now. 
Stephen > > Cheers, > > Thomas Goirand (zigo) > From rlandy at redhat.com Thu Jul 14 10:41:43 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Thu, 14 Jul 2022 06:41:43 -0400 Subject: [tripleo] new gate blocker: tripleo-ci-centos-9-scenario010-standalone and ovn master jobs In-Reply-To: References: <4e1bec39-8f30-4ab9-b342-250ff772c0b9@www.fastmail.com> <316b8394-b68c-7b6b-f0ac-63b129b64be8@redhat.com> Message-ID: On Thu, Jul 14, 2022 at 2:28 AM Yatin Karel wrote: > On Thu, Jul 14, 2022 at 11:23 AM Yatin Karel wrote: > >> >> >> On Thu, Jul 14, 2022 at 3:20 AM Giulio Fidente >> wrote: >> >>> On 7/13/22 23:04, Clark Boylan wrote: >>> > On Wed, Jul 13, 2022, at 1:39 PM, Ronelle Landy wrote: >>> >> Hello All, >>> >> >>> >> We have a new gate blocker impacting >>> >> tripleo-ci-centos-9-scenario010-standalone and ovn-provider master >>> >> jobs. The deployment fails and the error in the >>> >> nova_libvirt_init_secret/stdout.log log shows: >>> >> >>> >> Error: /etc/ceph/ceph.conf contained an empty fsid definition >>> >> Check your ceph configuration >>> > >>> > This message appears to originate from: >>> https://opendev.org/openstack/tripleo-heat-templates/src/branch/master/container_config_scripts/nova_libvirt_init_secret.sh#L22-L27 >>> > >>> > The fsid is in fact unset/empty in the file: >>> https://a9f1aef221b9e8d1cf76-922433d163de5a07cac84d974d42345f.ssl.cf1.rackcdn.com/849688/2/check/tripleo-ci-centos-9-scenario010-ovn-provider-standalone/e93e550/logs/undercloud/etc/ceph/ceph.conf >>> > >>> > Seems that this template, >>> https://opendev.org/openstack/tripleo-ansible/src/branch/master/tripleo_ansible/roles/tripleo_ceph_client/templates/ceph_conf.j2, >>> should set the value based on the tripleo_ceph_client_fsid variable value. 
>>> >>> hi Clark, that's my understanding as well >>> >>> that variable appears to be set by the client role [1] as expected; I am >>> unclear why it doesn't show up in the template >>> >>> do we know if this is happening only for scenario010 or for the other >>> scenarios deploying ceph? (001 and 004) >>> >>> Scenario001 and 004 are green[2]. >> >> [2] >> https://zuul.openstack.org/builds?job_name=tripleo-ci-centos-9-scenario001-standalone&job_name=tripleo-ci-centos-9-scenario004-standalone&skip=0 >> >> Sorry, I misread: scenario004 is also impacted; it also failed > in the test patch of the original patch > https://review.opendev.org/c/openstack/tripleo-heat-templates/+/849580/. > For now it's being reverted to unblock the gate > https://review.opendev.org/c/openstack/tripleo-ansible/+/849732 > The revert is merged - and the gate should be cleared now. Thank you > 1. >>> >>> https://4cec70d2de8d73a9678a-a966260cdcfda0650aae15fc442adef2.ssl.cf1.rackcdn.com/776942/1/check/tripleo-ci-centos-9-scenario010-standalone/8ceddd7/logs/undercloud/home/zuul/tripleo-deploy/standalone-ansible-lakkrdon/cephadm/ceph_client.yml >>> >>> -- >>> Giulio Fidente >>> GPG KEY: 08D733BA >>> >>> >>> Regards >> Yatin Karel >> > > Regards > Yatin Karel > -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Thu Jul 14 12:51:30 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jul 2022 13:51:30 +0100 Subject: Changing Ubuntu Cloud Repo On Instance In-Reply-To: References: Message-ID: <07b119fe6c2478512a5c6e7248b791bd695ba8b2.camel@redhat.com> On Wed, 2022-07-13 at 14:44 +0000, Adam Pankow wrote: > Ubuntu instance images seem to utilize "nova.clouds.archive.ubuntu.com" as their default repository. It seems that either this server does not efficiently route to an alternate mirror, or it itself is a mirror. This results in quite abysmal download speeds that I have seen. 
Would there be any downside to picking any other Ubuntu mirror, that is definitively more geographically close to me, but not explicitly labeled a Nova/Cloud mirror? i.e. would there be issues encountered, or features lost, by not using Ubuntu's designated Nova/Cloud repo? > Not that I'm aware of. I think Canonical are probably using this FQDN to track which installs are cloud-based and which are native. I highly doubt changing it would have a negative effect on your users' experience, and using a geographically colocated mirror would likely improve it. To my knowledge this is not anything that Nova or OpenStack ever discussed with the Ubuntu community, and it is not done at our request, so I would just test it and deploy what works best for you and your users. If you have the storage capacity and want to conserve external bandwidth, you might even consider hosting your own caching proxy/mirror to use as the default in the OpenStack cloud itself; that is quite common for CI environments. From zigo at debian.org Thu Jul 14 13:38:34 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 14 Jul 2022 15:38:34 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. Message-ID: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> Hi everyone! I know the usual answer: "we don't do that", or "this is unsupported in current version", however, as always, that's the way things are: OpenStack isn't alone living in Debian, and Debian Unstable now has Python 3.11, which isn't going to change simply because the OpenStack community decides that "it's not supported". So it'd be nice if we could support it. I'm quite sure that, as usual, the upload of a new Python interpreter version will break my world. I'll try to submit patches when I can, but I also expect help if possible. The challenge is: Debian Bookworm will be shipping Zed and Python 3.11, most likely... 
Cheers, Thomas Goirand (zigo) From zigo at debian.org Thu Jul 14 13:51:39 2022 From: zigo at debian.org (Thomas Goirand) Date: Thu, 14 Jul 2022 15:51:39 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> Message-ID: <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> On 7/14/22 11:02, Stephen Finucane wrote: > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: >> On 7/12/22 14:14, Stephen Finucane wrote: >>> On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: >>>> Hi Stephen, >>>> >>>> I hope you don't mind I ping and up this thread. >>>> >>>> Thanks a lot for this work. Any more progress here? >>> >>> We've uncapped warlock in openstack/requirements [1]. We just need the glance >>> folks to remove their own cap now [2] so that we can raise the version in upper >>> constraint. >>> >>> Stephen >>> >>> [1] https://review.opendev.org/c/openstack/requirements/+/849284 >>> [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 >> >> Hi ! >> >> I see these 2 are now merged, so it's job (well) done, right? > > I'd assume so, yes. We just need to wait for the machinery to do its job and > bump the upper constraint now. > > Stephen Hi Stephen, I uploaded a patched version of warlock to Unstable with the test fixed for the new jsonschema. 
However, when looking at the python-jsonschema pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other OpenStack projects: https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema This includes: - designate - ironic - nova - sahara I'll try to see what I can do to fix these; maybe some of the failures are unrelated (I haven't investigated yet). Cheers, Thomas Goirand (zigo) From smooney at redhat.com Thu Jul 14 14:01:14 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 14 Jul 2022 15:01:14 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> Message-ID: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> On Thu, 2022-07-14 at 15:38 +0200, Thomas Goirand wrote: > Hi everyone! > > I know the usual answer: "we don't do that", or "this is unsupported in > current version", however, as always, that's the way things are: > OpenStack isn't alone living in Debian, and Debian Unstable now has > Python 3.11, which isn't going to change simply because the OpenStack > community decides that "it's not supported". > > So it'd be nice if we could support it. I'm quite sure that, as usual, > the upload of a new Python interpreter version will break my world. I'll > try to submit patches when I can, but I also expect help if possible. > > The challenge is: Debian Bookworm will be shipping Zed and Python 3.11, > most likely... Thanks for the heads up. Last time I tried using Bookworm I had to force 3.9 via a virtual env to fully deploy a working OpenStack. I know that you and the eventlet folks have already resolved the issue I hit, so 3.10 should work now, but I suspect eventlet will be the most fragile part of getting 3.11 to work. 
In terms of the official testing runtime you are correct that "this is unsupported in the current version", since we determine the supported/tested interpreters for a release at the start of the cycle, and currently we only test 3.8 and 3.9, with experimental support for 3.10. I'm not against reviewing/merging patches that are needed for 3.11 as long as we do not break 3.8. For next cycle I'm hoping we will drop Ubuntu 20.04 in favor of Ubuntu 22.04, so we can raise our minimum version to 3.9 and formally add 3.10 and 3.11 support to our testing runtimes if those are available to test with in LTS releases such as Debian 11 or Ubuntu 22.04; making a start on that in Zed on a best-effort basis I think makes sense. The only thing is that if we have to choose between 3.8 support and 3.11, we need to ensure we maintain the agreed runtime support, but I doubt we will need to make that choice. Do we currently have 3.11 available in any of the CI images? I believe we have a 22.04 image available; is it installable there, or do we have Debian Bookworm images we can use to add a non-voting tox py311 job to the relevant project repos? > > Cheers, > > Thomas Goirand (zigo) > From fungi at yuggoth.org Thu Jul 14 14:30:49 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jul 2022 14:30:49 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: <20220714143048.gxznifh7oeaaqldi@yuggoth.org> On 2022-07-14 15:01:14 +0100 (+0100), Sean Mooney wrote: [...] > do we currently have 3.11 available in any of the CI images? I > believe we have a 22.04 image available; is it installable there, or do > we have Debian Bookworm images we can use to add a non-voting tox > py311 job to the relevant project repos? Not to my knowledge, no. 
Ubuntu inherits most of their packages from Debian, which has only just added a Python 3.11 pre-release, so it will take time to end up even in Ubuntu under development (Ubuntu Kinetic which is slated to become 22.10 still only has python3.10 packages for the moment). It's probable they'll backport a python3.11 package to Jammy once available, though there's no guarantee, and based on historical backports it probably won't be until upstream 3.11.1 is tagged at the very earliest. Keep in mind that what Debian has at the moment is a package of Python 3.11.0b4, since 3.11.0 isn't even scheduled for an upstream release until October (two days before we're planning to release OpenStack Zed). Further, it's not even in Debian bookworm yet, and it's hard to predict how soon it will be able to transition out of unstable either. Let's be clear, what's being asked here is that OpenStack not just test against the newest available Python release, but in fact to continually test against pre-releases of the next Python while it's still being developed. While I understand that this would be nice, I hardly think it's a reasonable thing to expect. We have a hard enough time just keeping up with actual releases of Python which are current at the time we start a development cycle. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From rdhasman at redhat.com Thu Jul 14 14:47:00 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Thu, 14 Jul 2022 20:17:00 +0530 Subject: [cinder] Festival of XS reviews Message-ID: Hello Argonauts, We will be having our monthly festival of XS reviews tomorrow i.e. 15th July (Friday) from 1400-1600 UTC. 
Following are some additional details: Date: 15th July, 2022 Time: 1400-1600 UTC Meeting link: https://meetpad.opendev.org/cinder-festival-of-reviews etherpad: https://etherpad.opendev.org/p/cinder-festival-of-reviews Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... URL: From vikarnatathe at gmail.com Thu Jul 14 03:19:11 2022 From: vikarnatathe at gmail.com (Vikarna Tathe) Date: Thu, 14 Jul 2022 08:49:11 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Lokendra, The CN field is missing. Can you add that and generate the certificate again: CN=ipaddress. Also add dns.1=ipaddress under alt_names as a precaution. Vikarna On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, wrote: > Hi Vikarna, > Thanks for the inputs. > I am not able to access any tabs in the GUI. > [image: image.png] > > To re-state, we are failing at the time of deployment at step 4: > > > PLAY [External deployment step 4] > ********************************************** > 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | > TASK | External deployment step 4 > 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | > OK | External deployment step 4 | undercloud -> localhost | result={ > "changed": false, > "msg": "Use --start-at-task 'External deployment step 4' to resume > from this task" > } > [WARNING]: ('undercloud -> localhost', > '525400ae-089b-870a-fab6-0000000000d7') > missing from stats > 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | > TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s > 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | > INCLUDED | > /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml > | undercloud > 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | > TASK | Clean up legacy Cinder keystone catalog entries
2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | > OK | Clean up legacy Cinder keystone catalog entries | undercloud | > item={'service_name': 'cinderv2', 'service_type': 'volumev2'} > 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:24.204562 | 2.48s > 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | > OK | Clean up legacy Cinder keystone catalog entries | undercloud | > item={'service_name': 'cinderv3', 'service_type': 'volume'} > 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:26.122584 | 4.40s > 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | > TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | > 0:11:26.124296 | 4.40s > 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | > TASK | Manage Keystone resources for OpenStack services > 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | > TIMING | Manage Keystone resources for OpenStack services | undercloud | > 0:11:26.169842 | 0.03s > 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | > TASK | Gather variables for each operating system > 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | > TIMING | tripleo_keystone_resources : Gather variables for each operating > system | undercloud | 0:11:26.253383 | 0.04s > 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | > TASK | Create Keystone Admin resources > 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | > TIMING | tripleo_keystone_resources : Create Keystone Admin resources | > undercloud | 0:11:26.299608 | 0.03s > 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | > INCLUDED | > /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | > undercloud > 2022-07-13 
21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | > TASK | Create default domain > 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | > OK | Create default domain | undercloud > 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | > TIMING | tripleo_keystone_resources : Create default domain | undercloud | > 0:11:28.437360 | 2.09s > 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | > TASK | Create admin and service projects > 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | > TIMING | tripleo_keystone_resources : Create admin and service projects | > undercloud | 0:11:28.483468 | 0.03s > 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | > INCLUDED | > /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | > undercloud > 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | > TASK | Async creation of Keystone project > 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | > CHANGED | Async creation of Keystone project | undercloud | item=admin > 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.238078 | 0.72s > 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | > CHANGED | Async creation of Keystone project | undercloud | item=service > 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.586587 | 1.06s > 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | > TIMING | tripleo_keystone_resources : Async creation of Keystone project | > undercloud | 0:11:29.587916 | 1.07s > 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | > TASK | Check Keystone project status > 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | > WAITING | Check 
Keystone project status | undercloud | 30 retries left > 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | > OK | Check Keystone project status | undercloud | item=admin > 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.260666 | 5.66s > 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | > OK | Check Keystone project status | undercloud | item=service > 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.494729 | 5.89s > 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | > TIMING | tripleo_keystone_resources : Check Keystone project status | > undercloud | 0:11:35.498771 | 5.89s > 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | > TASK | Create admin role > 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | > OK | Create admin role | undercloud > 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | > TIMING | tripleo_keystone_resources : Create admin role | undercloud | > 0:11:37.725949 | 2.20s > 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | > TASK | Create _member_ role > 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | > SKIPPED | Create _member_ role | undercloud > 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | > TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | > 0:11:37.783369 | 0.04s > 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | > TASK | Create admin user > 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | > CHANGED | Create admin user | undercloud > 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | > TIMING | tripleo_keystone_resources : Create admin user | undercloud | > 0:11:41.145472 | 
3.34s > 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | > TASK | Assign admin role to admin project for admin user > 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | > OK | Assign admin role to admin project for admin user | undercloud > 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | > TIMING | tripleo_keystone_resources : Assign admin role to admin project > for admin user | undercloud | 0:11:44.288848 | 3.13s > 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | > TASK | Assign _member_ role to admin project for admin user > 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | > SKIPPED | Assign _member_ role to admin project for admin user | undercloud > 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | > TIMING | tripleo_keystone_resources : Assign _member_ role to admin project > for admin user | undercloud | 0:11:44.346479 | 0.04s > 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | > TASK | Create identity service > 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | > OK | Create identity service | undercloud > 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | > TIMING | tripleo_keystone_resources : Create identity service | undercloud > | 0:11:46.022362 | 1.66s > 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | > TASK | Create identity public endpoint > 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | > OK | Create identity public endpoint | undercloud > 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | > TIMING | tripleo_keystone_resources : Create identity public endpoint | > undercloud | 0:11:48.233349 | 2.19s > 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | > TASK | Create identity internal endpoint > 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | > FATAL | Create identity internal endpoint | 
undercloud | error={"changed": > false, "extra_data": {"data": null, "details": "The request you have made > requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list > services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, > The request you have made requires authentication."} > 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | > TIMING | tripleo_keystone_resources : Create identity internal endpoint | > undercloud | 0:11:50.660654 | 2.41s > > PLAY RECAP > ********************************************************************* > localhost : ok=1 changed=0 unreachable=0 > failed=0 skipped=2 rescued=0 ignored=0 > overcloud-controller-0 : ok=437 changed=103 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-1 : ok=435 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-2 : ok=432 changed=101 unreachable=0 > failed=0 skipped=214 rescued=0 ignored=0 > overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 > failed=0 skipped=198 rescued=0 ignored=0 > undercloud : ok=39 changed=7 unreachable=0 > failed=1 skipped=6 rescued=0 ignored=0 > > Also : > (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf > [req] > default_bits = 2048 > prompt = no > default_md = sha256 > distinguished_name = dn > [dn] > C=IN > ST=UTTAR PRADESH > L=NOIDA > O=HSC > OU=HSC > emailAddress=demo at demo.com > > v3.ext: > (undercloud) [stack at undercloud oc-cert]$ cat v3.ext > authorityKeyIdentifier=keyid,issuer > basicConstraints=CA:FALSE > keyUsage = digitalSignature, nonRepudiation, keyEncipherment, > dataEncipherment > subjectAltName = @alt_names > [alt_names] > IP.1=fd00:fd00:fd00:9900::81 > > Using these files we create other certificates. > Please check and let me know in case we need anything else. 
> > > On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe > wrote: > >> Hi Lokendra, >> >> Are you able to access all the tabs in the OpenStack dashboard without >> any error? If not, please retry generating the certificate. Also, share the >> openssl.cnf or server.cnf. >> >> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour >> wrote: >> >>> Hi Team, >>> Any input on this case raised. >>> >>> Thanks, >>> Lokendra >>> >>> >>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Shephard/Swogat, >>>> I tried changing the setting as suggested and it looks like it has >>>> failed at step 4 with error: >>>> >>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>> 0:24:47.736198 | 2.21s >>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>> TASK | Create identity internal endpoint >>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>>> false, "extra_data": {"data": null, "details": "The request you have made >>>> requires authentication.", "response": >>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>> The request you have made requires authentication."} >>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>> >>>> >>>> Checking further the endpoint list: >>>> I see only one endpoint for keystone is gettin created. 
>>>> >>>> DeprecationWarning >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> | ID | Region | Service Name | Service >>>> Type | Enabled | Interface | URL | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>> identity | True | admin | http://30.30.30.173:35357 >>>> | >>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>> | >>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>> | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>> >>>> >>>> it looks like something related to the SSL, we have also verified that >>>> the GUI login screen shows that Certificates are applied. >>>> exploring more in logs, meanwhile any suggestions or know observation >>>> would be of great help. >>>> thanks again for the support. >>>> >>>> Best Regards, >>>> Lokendra >>>> >>>> >>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>> swogatpradhan22 at gmail.com> wrote: >>>> >>>>> I had faced a similar kind of issue, for ip based setup you need to >>>>> specify the domain name as the ip that you are going to use, this error is >>>>> showing up because the ssl is ip based but the fqdns seems to be >>>>> undercloud.com or overcloud.example.com. >>>>> I think for undercloud you can change the undercloud.conf. >>>>> >>>>> And will it work if we specify clouddomain parameter to the IP address >>>>> for overcloud? 
because it seems he has not specified the clouddomain >>>>> parameter and overcloud.example.com is the default domain for >>>>> overcloud.example.com. >>>>> >>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, >>>>> wrote: >>>>> >>>>>> What is the domain name you have specified in the undercloud.conf >>>>>> file? >>>>>> And what is the fqdn name used for the generation of the SSL cert? >>>>>> >>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Team, >>>>>>> We were trying to install overcloud with SSL enabled for which the >>>>>>> UC is installed, but OC install is getting failed at step 4: >>>>>>> >>>>>>> ERROR >>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>> server_hostname)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>> (most recent call last):\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>> send\n timeout=timeout\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File 
>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>> authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>> authenticated=authenticated)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>> request\n resp = send(**kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>> in _send_request\n raise >>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>> run_globals)\n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 185, in \n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 181, in main\n File >>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>> line 407, in __call__\n File >>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>> line 141, in run\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>> File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>> max_version='3.latest')\n File >>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>> **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>> **kwargs)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>> self.get_access(session).service_catalog\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>> self._do_create_plugin(session)\n File >>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>> retries exceeded with url: / (Caused by >>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:01.271914 | 2.47s >>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:01.273659 | 2.47s >>>>>>> >>>>>>> PLAY RECAP >>>>>>> ********************************************************************* >>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>> failed=0 skipped=198 rescued=0 ignored=0 
>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>> >>>>>>> >>>>>>> in the deploy.sh: >>>>>>> >>>>>>> openstack overcloud deploy --templates \ >>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>> --baremetal-deployment >>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>> --network-config \ >>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>> \ >>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>> \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>> -e >>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>> -e 
/home/stack/containers-prepare-parameter.yaml >>>>>>> >>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>> modifications: >>>>>>> tls-endpoints-public-ip.yaml: >>>>>>> Passed as is in the defaults. >>>>>>> enable-tls.yaml: >>>>>>> >>>>>>> # ******************************************************************* >>>>>>> # This file was created automatically by the sample environment >>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>>> # of the original, if any customizations are needed. >>>>>>> # ******************************************************************* >>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>> # description: | >>>>>>> # Use this environment to pass in certificates for SSL deployments. >>>>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>>>> # environments must also be used. >>>>>>> parameter_defaults: >>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>> # Type: boolean >>>>>>> HorizonSecureCookies: True >>>>>>> >>>>>>> # Specifies the default CA cert to use if TLS is used for services >>>>>>> in the public network. >>>>>>> # Type: string >>>>>>> PublicTLSCAFile: >>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>> >>>>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>>>> # Type: string >>>>>>> SSLRootCertificate: | >>>>>>> -----BEGIN CERTIFICATE----- >>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>> -----END CERTIFICATE----- >>>>>>> >>>>>>> SSLCertificate: | >>>>>>> -----BEGIN CERTIFICATE----- >>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>> -----END CERTIFICATE----- >>>>>>> # The content of an SSL intermediate CA certificate in PEM format. >>>>>>> # Type: string >>>>>>> SSLIntermediateCertificate: '' >>>>>>> >>>>>>> # The content of the SSL Key in PEM format. 
>>>>>>> # Type: string >>>>>>> SSLKey: | >>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>> -----END PRIVATE KEY----- >>>>>>> >>>>>>> # ****************************************************** >>>>>>> # Static parameters - these are values that must be >>>>>>> # included in the environment but should not be changed. >>>>>>> # ****************************************************** >>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>> controller. >>>>>>> # Type: string >>>>>>> DeployedSSLCertificatePath: >>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>> >>>>>>> # ********************* >>>>>>> # End static parameters >>>>>>> # ********************* >>>>>>> >>>>>>> inject-trust-anchor.yaml >>>>>>> >>>>>>> # ******************************************************************* >>>>>>> # This file was created automatically by the sample environment >>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>> # Users are recommended to make changes to a copy of the file instead >>>>>>> # of the original, if any customizations are needed. >>>>>>> # ******************************************************************* >>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>> # description: | >>>>>>> # When using an SSL certificate signed by a CA that is not in the >>>>>>> default >>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>> certificate to >>>>>>> # the overcloud nodes. >>>>>>> parameter_defaults: >>>>>>> # The content of a CA's SSL certificate file in PEM format. This >>>>>>> is evaluated on the client side. >>>>>>> # Mandatory. This parameter must be set by the user. 
>>>>>>> # Type: string >>>>>>> SSLRootCertificate: | >>>>>>> -----BEGIN CERTIFICATE----- >>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>> -----END CERTIFICATE----- >>>>>>> >>>>>>> resource_registry: >>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>>> The procedure to create such files was followed using: >>>>>>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>>>>>> >>>>>>> >>>>>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed >>>>>>> IP-based certificate, without DNS. * >>>>>>> >>>>>>> Any idea around this error would be of great help. >>>>>>> >>>>>>> -- >>>>>>> skype: lokendrarathour >>>>>>> >>>>>>> >>>>>>> >>>> >>>> >>>> >>> >>> -- >>> >> > > -- > ~ Lokendra > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From renliang at uniontech.com Thu Jul 14 06:48:14 2022 From: renliang at uniontech.com (=?utf-8?B?5Lu75Lqu?=) Date: Thu, 14 Jul 2022 14:48:14 +0800 Subject: [skyline]A problem with skyline packaging RPM Message-ID: Hello, a series of packages for skyline are provided at https://pypi.org/user/99cloud/, but only the package of skyline-apiserver. skyline-apiserver requires dependencies of other packages when building. For example, skyline-config,skyline-log,skyline-policy-manager. These packages do not provide the corresponding source packages, we want to package these packages into rpm. Please provide source packages or other better ways to build. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stephenfin at redhat.com Thu Jul 14 14:54:58 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Thu, 14 Jul 2022 15:54:58 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> Message-ID: <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> On Thu, 2022-07-14 at 15:51 +0200, Thomas Goirand wrote: > On 7/14/22 11:02, Stephen Finucane wrote: > > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > > > On 7/12/22 14:14, Stephen Finucane wrote: > > > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > > > Hi Stephen, > > > > > > > > > > I hope you don't mind I ping and up this thread. > > > > > > > > > > Thanks a lot for this work. Any more progress here? > > > > > > > > We've uncapped warlock in openstack/requirements [1]. We just need the glance > > > > folks to remove their own cap now [2] so that we can raise the version in upper > > > > constraint. > > > > > > > > Stephen > > > > > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > > > [2] https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > > > > > Hi ! > > > > > > I see these 2 are now merged, so it's job (well) done, right? > > > > I'd assume so, yes. We just need to wait for the machinery to do its job and > > bump the upper constraint now. 
> > > > Stephen > > Hi Stephen, > > I uploaded a patched version of warlock to Unstable with the test fixed > for the new jsonschema. However, when looking at the python-jsonschema > pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other > OpenStack projects: > > https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema > > This includes: > - designate > - ironic > - nova The nova fix was trivial enough: https://review.opendev.org/c/openstack/nova/+/849867 I'll have to let someone else fix the other projects. We'll see these flagged as soon as a patch to bump the upper-constraint of jsonschema hits openstack/requirements (which will happen once the python-glanceclient upper constraint bump to same merges). Stephen > - sahara > > I'll try to see what I can do to fix these, maybe some of the failures > are unrelated (I haven't investigated yet). > > Cheers, > > Thomas Goirand (zigo) > From katonalala at gmail.com Thu Jul 14 15:05:36 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 14 Jul 2022 17:05:36 +0200 Subject: [neutron] Drivers meeting - Friday 14.7.2022 - cancelled Message-ID: Hi Neutron Drivers! Due to the lack of agenda, let's cancel tomorrow's drivers meeting. See You on the meeting next week. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtantsur at redhat.com Thu Jul 14 16:09:09 2022 From: dtantsur at redhat.com (Dmitry Tantsur) Date: Thu, 14 Jul 2022 18:09:09 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> Message-ID: Ironic was not too bad either: https://review.opendev.org/c/openstack/ironic/+/849882 Similar for Nova: https://review.opendev.org/c/openstack/nova/+/849881 On Thu, Jul 14, 2022 at 5:08 PM Stephen Finucane wrote: > On Thu, 2022-07-14 at 15:51 +0200, Thomas Goirand wrote: > > On 7/14/22 11:02, Stephen Finucane wrote: > > > On Wed, 2022-07-13 at 18:21 +0200, Thomas Goirand wrote: > > > > On 7/12/22 14:14, Stephen Finucane wrote: > > > > > On Mon, 2022-07-11 at 18:33 +0200, Thomas Goirand wrote: > > > > > > Hi Stephen, > > > > > > > > > > > > I hope you don't mind I ping and up this thread. > > > > > > > > > > > > Thanks a lot for this work. Any more progress here? > > > > > > > > > > We've uncapped warlock in openstack/requirements [1]. We just need > the glance > > > > > folks to remove their own cap now [2] so that we can raise the > version in upper > > > > > constraint. > > > > > > > > > > Stephen > > > > > > > > > > [1] https://review.opendev.org/c/openstack/requirements/+/849284 > > > > > [2] > https://review.opendev.org/c/openstack/python-glanceclient/+/849285 > > > > > > > > Hi ! 
> > > > > > > > I see these 2 are now merged, so it's job (well) done, right? > > > > > > I'd assume so, yes. We just need to wait for the machinery to do its > job and > > > bump the upper constraint now. > > > > > > Stephen > > > > Hi Stephen, > > > > I uploaded a patched version of warlock to Unstable with the test fixed > > for the new jsonschema. However, when looking at the python-jsonschema > > pseudo-excuse, I can see that version 4.6.0 is breaking a bunch of other > > OpenStack projects: > > > > > https://release.debian.org/britney/pseudo-excuses-experimental.html#python-jsonschema > > > > This includes: > > - designate > > - ironic > > - nova > > The nova fix was trivial enough: > > https://review.opendev.org/c/openstack/nova/+/849867 > > I'll have to let someone else fix the other projects. We'll see these > flagged as > soon as a patch to bump the upper-constraint of jsonschema hits > openstack/requirements (which will happen once the python-glanceclient > upper > constraint bump to same merges). > > Stephen > > > - sahara > > > > I'll try to see what I can do to fix these, maybe some of the failures > > are unrelated (I haven't investigated yet). > > > > Cheers, > > > > Thomas Goirand (zigo) > > > > > -- Red Hat GmbH , Registered seat: Werner von Siemens Ring 14, D-85630 Grasbrunn, Germany Commercial register: Amtsgericht Muenchen/Munich, HRB 153243,Managing Directors: Ryan Barnhart, Charles Cachera, Michael O'Neill, Amy Ross -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdemaced at redhat.com Thu Jul 14 16:31:54 2022 From: mdemaced at redhat.com (Maysa De Macedo Souza) Date: Thu, 14 Jul 2022 18:31:54 +0200 Subject: [kuryr] Proposal to clean up Kuryr core reviewers Message-ID: Hello, We went through the list of current Kuryr core reviewers on the last PTG session and we noticed a couple of people that are not active Kuryr contributors anymore. 
I would to propose removing the following contributors from the Kuryr core team: - Irena Berezovsky - Gal Sagie - Liping Mao I take this opportunity to thank all of them for their contributions to the Kuryr project. I will wait one week for any feedback before proceeding with the removal. Thank you, Maysa Macedo -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Thu Jul 14 16:45:31 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Thu, 14 Jul 2022 09:45:31 -0700 Subject: [IRONIC] - Various questions around network features. In-Reply-To: References: Message-ID: On Wed, Jul 13, 2022 at 1:07 PM Ga?l THEROND wrote: > > Hi Julia! > > Thanks a lot for those explanations :-) Most of it confirm my understanding, I now have a clearer point of view that will let me select our test users for the service. > > Regarding aruba switches, those are pretty cool, even if as you pointed it, this feature can actually lead you to some weird if not dangerous situations x) > > Ok noticed about the horizon issue, it can be a little bit tricky for our end users to understand that tbh as they will for sure expect the IP selected by neutron and display on the dashboard to be the one used by the node even on a full flat network such as the provisioning network, but for now we will deal with it by explaining them. A challenging point here is there is no true way to hint that this is the case upfront. Nova acts as an abstraction layer in between and it really needs that networking information piece of the puzzle to generate metadata for an instance. I think, embracing it and also supporting an ML2 integrated configuration where individual switch ports are changed, is ultimately the most powerful configuration, but the challenge we hear from operators upstream is generally network operations groups don't want software toggling switchport vlan assignments. 
I get why as I've worked in NetOps in the past, it is largely a trust issue, I've just not figured out concrete ways to build the trust needed there. :( > > Regarding my point 2, yeah yeah I knew the purpose of direct deploy I just explicited it I don?t know why, my point was rather: > > At first, when I configured our ironic deployment I had that weird issue where if I don?t put the pxe_filter option to noop but dnsmasq, deploying anything is failing as the conductor doesn?t correctly erase the ?ignore ? part of the string on the dhcp_host_filter file of dnsmasq. If I make this filter as noop then obviously I don?t need neutron to provide the ironic-provision-network anymore as anyone plugged on ports with my VLAN101 set as native VLAN will be able to get an ip from the PXE dnsmasq. I was wondering how you were making it work! This explains a lot, and is really not the intended pattern of use. But it is a pattern upstream generally sees in more "standalone", or cases of direct interaction with Ironic's API. > > I?m still having hard time to map how ironic needs both PXE dedicated dnsmasq for introspection and then can use neutron dnsmasq dhcp once you want to provision a host? Is that because neutron (kinda) lack for dhcp options support on its managed subnets ? > At this point, dnsmasq for introspection is *largely* for the purposes of discovering hardware you don't know about and supporting the oldest introspection workflow where inspection is directly triggered with the introspection service. Depending on the version of Ironic, and if you have a mac address already known to Ironic, you can trigger the inspection workflow directly with ironic directly with the state machine, and it will populate network configuration in neutron to perform introspection on the node. Neutron doesn't really lack dhcp options support on it's subnets, although it is very dnsmasq focused. 
The challenge we tend to see here is getting things properly aligned host configuration and networking wise for PXE boot operations doesn't always align perfectly, so it becomes just easier to get things to initially work as you did. > All in all it?s pretty clearer to me about the multi tenancy networking requirements now thanks to you! Excellent to hear! If you feel like anything is missing in our documentation, we do welcome patches! I do suspect the whole bit about introspection dnsmasq might need to be further highlighted or delineated in the documentation. -Julia > > Le mar. 12 juil. 2022 ? 00:13, Julia Kreger a ?crit : >> >> Greetings! Hopefully these answers help! >> >> On Sun, Jul 10, 2022 at 4:35 PM Ga?l THEROND wrote: >> > >> > I everyone, I?m currently working back again with Ironic and it?s amazing! >> > >> > However, during our demo session to our users few questions arise. >> > >> > We?re currently deploying nodes using a private vlan that can?t be reached from outside of the Openstack network fabric (vlan 101 - 192.168.101.0/24) and everything is fine with this provisioning network as our ToR switch all know about it and other Control plan VLANs such as the internal APIs VLAN which allow the IPA Ramdisk to correctly and seamlessly be able to contact the internal IRONIC APIs. >> >> Nice, I've had my lab configured like this in the past. >> >> > >> > (When you declare a port as a trunk allowed all vlan on a aruba switch it seems it automatically analyse the CIDR your host try to reach from your VLAN and route everything to the corresponding VLAN that match the destination IP). >> > >> >> Ugh, that... could be fun :\ >> >> > So know, I still get few tiny issues: >> > >> > 1?/- When I spawn a nova instance on a ironic host that is set to use flat network (From horizon as a user), why does the nova wizard still ask for a neutron network if it?s not set on the provisioned host by the IPA ramdisk right after the whole disk image copy? 
Is that some missing development on horizon or did I missed something? >> >> Horizon just is not aware... and you can actually have entirely >> different DHCP pools on the same flat network, so that neutron network >> is intended for the instance's addressing to utilize. >> >> Ironic does just ask from an allocation from a provisioning network, >> which can and *should* be a different network than the tenant network. >> >> > >> > 2?/- In a flat network layout deployment using direct deploy scenario for images, am I still supposed to create a ironic provisioning network in neutron? >> > >> > From my understanding (and actually my tests) we don?t, as any host booting on the provisioning vlan will catch up an IP and initiate the bootp sequence as the dnsmasq is just set to do that and provide the IPA ramdisk, but it?s a bit confusing as many documentation explicitly require for this network to exist on neutron. >> >> Yes. Direct is short hand for "Copy it over the network and write it >> directly to disk". It still needs an IP address on the provisioning >> network (think, subnet instead of distinct L2 broadcast domain). >> >> When you ask nova for an instance, it sends over what the machine >> should use as a "VIF" (neutron port), however that is never actually >> bound configuration wise into neutron until after the deployment >> completes. >> >> It *could* be that your neutron config is such that it just works >> anyway, but I suspect upstream contributors would be a bit confused if >> you reported an issue and had no provisioning network defined. >> >> > >> > 3?/- My whole Openstack network setup is using Openvswitch and vxlan tunnels on top of a spine/leaf architecture using aruba CX8360 switches (for both spine and leafs), am I required to use either the networking-generic-switch driver or a vendor neutron driver ? 
If that?s right, how will this driver be able to instruct the switch to assign the host port the correct openvswitch vlan id and register the correct vxlan to openvswitch from this port? I mean, ok neutron know the vxlan and openvswitch the tunnel vlan id/interface but what is the glue of all that? >> >> If your happy with flat networks, no. >> >> If you want tenant isolation networking wise, yes. >> >> NGS and Baremetal Port aware/enabled Neutron ML2 drivers take the port >> level local link configuration (well, Ironic includes the port >> information (local link connection, physical network, and some other >> details) to Neutron with the port binding request. >> >> Those ML2 drivers, then either request the switch configuration be >> updated, or take locally configured credentials to modify port >> configuration in Neutron, and logs into the switch to toggle the >> access port's configuration which the baremetal node is attached to. >> >> Generally, they are not vxlan network aware, and at least with >> networking-generic-switch vlan ID numbers are expected and allocated >> via neutron. >> >> Sort of like the software is logging into the switch and running >> something along the lines of "conf t;int gi0/21;switchport mode >> access;switchport access vlan 391 ; wri mem" >> >> > >> > 4?/- I?ve successfully used openstack cloud oriented CentOS and debian images or snapshot of VMs to provision my hosts, this is an awesome feature, but I?m wondering if there is a way to let those host cloud-init instance to request for neutron metadata endpoint? >> > >> >> Generally yes, you *can* use network attached metadata with neutron >> *as long as* your switches know to direct the traffic for the metadata >> IP to the Neutron metadata service(s). >> >> We know of operators who ahve done it without issues, but often that >> additional switch configured route is not always the best hting. 
>> Generally we recommend enabling and using configuration drives, so the >> metadata is able to be picked up by cloud-init. >> >> >> > I was a bit surprised about the ironic networking part as I was expecting the IPA ramdisk to at least be able to set the host os with the appropriate network configuration file for whole disk images that do not use encryption by injecting those information from the neutron api into the host disk while mounted (right after the image dd). >> > >> >> IPA has no knowledge of how to modify the host OS in this regard. >> modifying the host OS has generally been something the ironic >> community has avoided since it is not exactly cloudy to have to do so. >> Generally most clouds are running with DHCP, so as long as that is >> enabled and configured, things should generally "just work". >> >> Hopefully that provides a little more context. Nothing prevents you >> from writing your own hardware manager that does exactly this, for >> what it is worth. >> >> > All in all I really like the ironic approach of the baremetal provisioning process, and I?m pretty sure that I?m just missing a bit of understanding of the networking part but it?s really the most confusing part of it to me as I feel like if there is a missing link in between neutron and the host HW or the switches. >> > >> >> Thanks! It is definitely one of the more complex parts given there are >> many moving parts, and everyone wants (or needs) to have their >> networking configured just a little differently. >> >> Hopefully I've kind of put some of the details out there, if you need >> more information, please feel free to reach out, and also please feel >> free to ask questions in #openstack-ironic on irc.oftc.net. 
>> >> > Thanks a lot anyone that will take time to explain me this :-) >> >> :) From arnaud.morin at gmail.com Thu Jul 14 17:31:15 2022 From: arnaud.morin at gmail.com (Arnaud) Date: Thu, 14 Jul 2022 19:31:15 +0200 Subject: =?US-ASCII?Q?Re=3A_=5Bnova=5D=5Bops=5D_seeking_input_about_local?= =?US-ASCII?Q?/ephemeral_disk_encryption_feature_naming?= In-Reply-To: References: Message-ID: <5DA30C81-328C-4F99-A2F0-9EB40457E68B@gmail.com> Hi, Very good point about the naming. My quick opinion is that ephemeral is not perfect but was used for years so some users are used to it anyway. Renaming now should be for a very understandable naming. Anyway, I'll forward this to some of my colleagues that could have a much stronger opinion on this. Cheers, Arnaud Le 14 juillet 2022 01:16:58 GMT+02:00, melanie witt a ?crit?: >Hi everyone, > >A potential issue regarding naming has come up during review of the >ephemeral storage encryption feature [1][2] patch series [3] and we're >looking for input before moving forward with any naming/terminology >changes across the specs and the entire patch series. > >The concern that has been raised is around use of the term "ephemeral" >for the name of this feature including traits, extra specs, and image >properties [4]. > >For context, the objective of this feature is to provide users with the >ability to specify that all local disks for the instance be encrypted. >This includes the root disk and any other local disks. > >The initial concern is around use of the word "ephemeral" for the root disk. > >My general interpretation of the word "ephemeral" for storage in nova >has been that it means attached storage that only persists for the >lifetime of the instance and is destroyed if and when the instance is >destroyed. This is in contrast to attached cinder volumes which can >persist after instance deletion. > >But should "ephemeral" ever be used to describe a root disk? Is it >incorrect and/or ambiguous to refer to it as such? 
> >This is part of what is being discussed in [4]. > >During discussion, I also realized there is a separate gap in the above >interpretation of "ephemeral" in nova. When cinder volumes are attached >to an instance, their persistence after the instance is deleted depends >on whether the 'delete_on_termination' attribute is set to true in the >request payload when the instance is created [5] or when attaching a >volume to the instance [6] or updating a volume attached to the instance >[7]. > >This means that in the currently proposed patches, if a user specifies >hw:ephemeral_encryption in the extra_specs, for example, and they also >have a volume with delete_on_termination=True attached, only the root >disk will be encrypted via the extra spec -- the volume would not be >encrypted. Encryption of the volume has to be requested in cinder. > >Could this mislead a user into thinking both the root disk and cinder >volume are encrypted when only the root disk is? > >Because of the above issues, we are considering whether we should change >the terminology used in this feature at this stage. Some ideas include >"local encryption", "local disk encryption", "disk encryption". IMHO >"disk_encryption" is ambiguous in its own way because an attached cinder >volume also has a disk. > >Changing the naming will be a non-trivial amount of work, so we wanted >to get additional input before going ahead with such a change. > >Another thing noted in a comment on another patch in the series [8] is >that the os-traits for this feature have already been merged [9]. If we >decide to change the naming, should we go ahead and use these traits >as-is and have them not match the naming in nova or should we deprecate >them and add new traits that match the new name and use those? > >I hope this makes sense and your input would be much appreciated. 
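The scope gap described above, the extra spec covering local disks but not an attached delete_on_termination volume, can be sketched as a tiny decision function. This is purely illustrative: the function and its inputs are made up for this note and are not nova code; only the hw:ephemeral_encryption extra spec name comes from the thread.

```python
# Illustrative sketch only: hypothetical names, not actual nova code.

def encrypted_disks(flavor_extra_specs, local_disks, cinder_volumes):
    """Return which attached disks the ephemeral-encryption extra spec covers.

    Local (nova-managed) disks are encrypted when the flavor asks for it;
    cinder volumes are never covered, even with delete_on_termination=True,
    because their encryption has to be requested in cinder instead.
    """
    wants_encryption = flavor_extra_specs.get('hw:ephemeral_encryption') == 'true'
    covered = list(local_disks) if wants_encryption else []
    # cinder_volumes are intentionally ignored here, whatever their
    # delete_on_termination setting says.
    return covered

disks = encrypted_disks(
    {'hw:ephemeral_encryption': 'true'},
    local_disks=['root', 'ephemeral0'],
    cinder_volumes=[{'id': 'vol-1', 'delete_on_termination': True}],
)
print(disks)  # ['root', 'ephemeral0'] - the attached volume is not covered
```

Whatever the feature ends up being called, the key point is that cinder volumes fall outside it regardless of their delete_on_termination setting.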
> >Cheers, >-melwitt > >[1] >https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-storage-encryption.html >[2] >https://specs.openstack.org/openstack/nova-specs/specs/yoga/approved/ephemeral-encryption-libvirt.html >[3] >https://review.opendev.org/q/topic:specs%252Fyoga%252Fapproved%252Fephemeral-encryption-libvirt >[4] >https://review.opendev.org/c/openstack/nova/+/764486/10/nova/api/validation/extra_specs/hw.py#516 >[5] >https://docs.openstack.org/api-ref/compute/?expanded=create-server-detail#create-server >[6] >https://docs.openstack.org/api-ref/compute/?expanded=attach-a-volume-to-an-instance-detail >[7] >https://docs.openstack.org/api-ref/compute/?expanded=update-a-volume-attachment-detail >[8] >https://review.opendev.org/c/openstack/nova/+/760456/10/nova/scheduler/request_filter.py#425 >[9] >https://github.com/openstack/os-traits/blob/f64d50e4dd2f21558fb73dd4b59cd1d4b121b707/os_traits/compute/ephemeral.py > -------------- next part -------------- An HTML attachment was scrubbed... URL: From przemyslaw.basa at redge.com Thu Jul 14 18:37:05 2022 From: przemyslaw.basa at redge.com (Przemyslaw Basa) Date: Thu, 14 Jul 2022 20:37:05 +0200 Subject: [placement] running out of VCPU resource In-Reply-To: References: Message-ID: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com> Hi, Well, I think I figured it out. Following the Xena deployment instructions, MariaDB was installed at version 10.6.5, and there seems to be some kind of bug in this version. Upgrading to 10.6.8 fixed this particular issue for me. I've checked some older and newer versions (10.5.6, 10.8.3) and the problematic query behaves there like in 10.6.8. 
Here's how I've done my tests if someone is interested: % docker run --rm --detach --name mariadb-10.6.5 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.5 % docker run --rm --detach --name mariadb-10.6.8 --env MYSQL_ROOT_PASSWORD=test mariadb:10.6.8 % docker exec -i mariadb-10.6.5 mysql -u root -ptest < tables_dump.sql % docker exec -i mariadb-10.6.8 mysql -u root -ptest < tables_dump.sql % docker exec -i mariadb-10.6.5 mysql -u root -ptest -t test < test.sql +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | used | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 0 | 128 | 0 | 2 | 13318 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 1 | 1031723 | 2048 | 1 | NULL | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 2 | 901965 | 2 | 1 | NULL | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ % docker exec -i mariadb-10.6.8 mysql -u root -ptest -t test < test.sql +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | id | uuid | generation | resource_class_id | total | reserved | allocation_ratio | used | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 0 | 128 | 0 | 2 | 5 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 1 | 1031723 | 2048 | 1 | 13312 | | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | 2 | 901965 | 2 | 1 | 1 | +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ % cat tables_dump.sql create database test; connect 
test; CREATE TABLE `allocations` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `resource_provider_id` int(11) NOT NULL, `consumer_id` varchar(36) NOT NULL, `resource_class_id` int(11) NOT NULL, `used` int(11) NOT NULL, PRIMARY KEY (`id`), KEY `allocations_resource_provider_class_used_idx` (`resource_provider_id`,`resource_class_id`,`used`), KEY `allocations_resource_class_id_idx` (`resource_class_id`), KEY `allocations_consumer_id_idx` (`consumer_id`) ) ENGINE=InnoDB AUTO_INCREMENT=547 DEFAULT CHARSET=utf8mb3; CREATE TABLE `inventories` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `resource_provider_id` int(11) NOT NULL, `resource_class_id` int(11) NOT NULL, `total` int(11) NOT NULL, `reserved` int(11) NOT NULL, `min_unit` int(11) NOT NULL, `max_unit` int(11) NOT NULL, `step_size` int(11) NOT NULL, `allocation_ratio` float NOT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uniq_inventories0resource_provider_resource_class` (`resource_provider_id`,`resource_class_id`), KEY `inventories_resource_class_id_idx` (`resource_class_id`), KEY `inventories_resource_provider_id_idx` (`resource_provider_id`), KEY `inventories_resource_provider_resource_class_idx` (`resource_provider_id`,`resource_class_id`) ) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8mb3; CREATE TABLE `resource_providers` ( `created_at` datetime DEFAULT NULL, `updated_at` datetime DEFAULT NULL, `id` int(11) NOT NULL AUTO_INCREMENT, `uuid` varchar(36) NOT NULL, `name` varchar(200) DEFAULT NULL, `generation` int(11) DEFAULT NULL, `root_provider_id` int(11) DEFAULT NULL, `parent_provider_id` int(11) DEFAULT NULL, PRIMARY KEY (`id`), UNIQUE KEY `uniq_resource_providers0uuid` (`uuid`), UNIQUE KEY `uniq_resource_providers0name` (`name`), KEY `resource_providers_name_idx` (`name`), KEY `resource_providers_parent_provider_id_idx` (`parent_provider_id`), KEY 
`resource_providers_root_provider_id_idx` (`root_provider_id`), KEY `resource_providers_uuid_idx` (`uuid`), CONSTRAINT `resource_providers_ibfk_1` FOREIGN KEY (`parent_provider_id`) REFERENCES `resource_providers` (`id`), CONSTRAINT `resource_providers_ibfk_2` FOREIGN KEY (`root_provider_id`) REFERENCES `resource_providers` (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb3; INSERT INTO `allocations` VALUES ('2022-07-07 23:08:10',NULL,329,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',1,12288),('2022-07-07 23:08:10',NULL,332,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',0,4),('2022-07-08 06:26:28',NULL,335,4,'aec7aaea-10df-451b-b2ce-847099ee0110',1,2048),('2022-07-08 06:26:28',NULL,338,4,'aec7aaea-10df-451b-b2ce-847099ee0110',0,2),('2022-07-12 08:53:21',NULL,400,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',1,16384),('2022-07-12 08:53:21',NULL,403,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',0,2),('2022-07-14 08:24:27',NULL,538,5,'9681447d-57ec-45c7-af48-63be3c7201da',2,1),('2022-07-14 08:24:27',NULL,541,5,'9681447d-57ec-45c7-af48-63be3c7201da',1,1024),('2022-07-14 08:24:27',NULL,544,5,'9681447d-57ec-45c7-af48-63be3c7201da',0,1); INSERT INTO `resource_providers` VALUES ('2022-07-04 11:59:49','2022-07-13 13:03:08',1,'6ac81bb4-50ef-4784-8a64-9031afeaaa9d','p-os-compute01.openstack.local',50,1,NULL),('2022-07-04 12:00:49','2022-07-13 13:03:07',4,'a324b3b9-f8c8-4279-bf63-a27163fcf792','g-os-compute01.openstack.local',42,4,NULL),('2022-07-04 12:03:57','2022-07-14 08:24:27',5,'16f620c0-8c6f-4984-8d58-e2c00d1b32da','t-os-compute01.openstack.local',50,5,NULL); INSERT INTO `inventories` VALUES ('2022-07-04 11:59:50','2022-07-11 09:24:04',1,1,0,128,0,1,128,1,2),('2022-07-04 11:59:50','2022-07-11 09:24:04',4,1,1,1031723,2048,1,1031723,1,1),('2022-07-04 11:59:50','2022-07-11 09:24:04',7,1,2,901965,2,1,901965,1,1),('2022-07-04 12:01:53','2022-07-04 14:59:53',10,4,0,128,0,1,128,1,2),('2022-07-04 12:01:53','2022-07-04 14:59:53',13,4,1,1031723,2048,1,1031723,1,1),('2022-07-04 
12:01:53','2022-07-04 14:59:53',16,4,2,901965,2,1,901965,1,1),('2022-07-04 12:03:57','2022-07-14 07:16:08',17,5,0,128,0,1,128,1,2),('2022-07-04 12:03:57','2022-07-14 07:09:11',20,5,1,1031723,2048,1,1031723,1,1),('2022-07-04 12:03:57','2022-07-14 07:09:11',23,5,2,901965,2,1,901965,1,1); % cat test.sql SELECT rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total, inv.reserved, inv.allocation_ratio, allocs.used FROM resource_providers AS rp JOIN inventories AS inv ON rp.id = inv.resource_provider_id LEFT JOIN ( SELECT resource_provider_id, resource_class_id, SUM(used) AS used FROM allocations WHERE resource_class_id IN (0, 1, 2) AND resource_provider_id IN (5) GROUP BY resource_provider_id, resource_class_id ) AS allocs ON inv.resource_provider_id = allocs.resource_provider_id AND inv.resource_class_id = allocs.resource_class_id WHERE rp.id IN (5) AND inv.resource_class_id IN (0,1,2) ; Regards, Przemyslaw Basa From cboylan at sapwetik.org Thu Jul 14 19:10:57 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 14 Jul 2022 12:10:57 -0700 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <20220714143048.gxznifh7oeaaqldi@yuggoth.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> Message-ID: <9ce7dcbf-8ece-407c-ad6f-e3aea7db58f8@www.fastmail.com> On Thu, Jul 14, 2022, at 7:30 AM, Jeremy Stanley wrote: > On 2022-07-14 15:01:14 +0100 (+0100), Sean Mooney wrote: > [...] >> do we currently have 3.11 aviable in any of the ci images? i >> belive we have 22.04 image aviable is it installbale there or do >> we have debian bookworm images we can use to add a non voting tox >> py311 job to the relevent project repos? > > Not to my knowledge, no. 
Ubuntu inherits most of their packages from > Debian, which has only just added a Python 3.11 pre-release, so it > will take time to end up even in Ubuntu under development (Ubuntu > Kinetic which is slated to become 22.10 still only has python3.10 > packages for the moment). It's probable they'll backport a > python3.11 package to Jammy once available, though there's no > guarantee, and based on historical backports it probably won't be > until upstream 3.11.1 is tagged at the very earliest. > > Keep in mind that what Debian has at the moment is a package of > Python 3.11.0b4, since 3.11.0 isn't even scheduled for an upstream > release until October (two days before we're planning to release > OpenStack Zed). Further, it's not even in Debian bookworm yet, and > it's hard to predict how soon it will be able to transition out of > unstable either. > > Let's be clear, what's being asked here is that OpenStack not just > test against the newest available Python release, but in fact to > continually test against pre-releases of the next Python while it's > still being developed. While I understand that this would be nice, I > hardly think it's a reasonable thing to expect. We have a hard > enough time just keeping up with actual releases of Python which are > current at the time we start a development cycle. Big ++ on this last point. I think right now the most important thing you can do to ensure the transition to python3.11 goes as smoothly as possible is to shore up and improve the current python3.10 testing. 
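To make the "keeping up with interpreter releases" point concrete, here is one real example of the kind of change Python 3.11 ships (the example itself is added here for illustration, not taken from the thread): the long-deprecated inspect.getargspec() is removed in 3.11, so code still calling it passes on 3.10 and crashes on 3.11, while inspect.getfullargspec() works on both.

```python
import inspect

def arg_names(func):
    # inspect.getargspec() was removed in Python 3.11; getfullargspec()
    # has been available since 3.3 and works on every maintained 3.x.
    return inspect.getfullargspec(func).args

# A stand-in function, just to have something to introspect.
def resize(instance, flavor, clean_shutdown=True):
    pass

print(arg_names(resize))  # ['instance', 'flavor', 'clean_shutdown']
```

Shoring up 3.10 testing first, as suggested above, catches exactly this class of deprecation before the removal lands.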
> -- > Jeremy Stanley > > Attachments: > * signature.asc From ozzzo at yahoo.com Thu Jul 14 20:22:55 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Thu, 14 Jul 2022 20:22:55 +0000 (UTC) Subject: [adjutant][tc][all] Call for volunteers to be a PTL and maintainers In-Reply-To: <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> References: <4381995.LvFx2qVVIh@p1> <1915566590.650011.1646837917079@mail.yahoo.com> <180530d387f.12325e74512727.6650321884236044968@ghanshyammann.com> <181e01036c5.1034d9b3b288532.6706280049142595390@ghanshyammann.com> <479542002.91408.1657570253503@mail.yahoo.com> <5e6d4df2-a1d0-80f5-f755-1563a1152f24@catalystcloud.nz> Message-ID: <702797503.1744449.1657830175373@mail.yahoo.com> Fantastic! Thank you for stepping up Dale! On Tuesday, July 12, 2022, 05:30:59 PM EDT, Dale Smith wrote: Hi gmann and Albert, I'd like to put my hand up for PTL of Adjutant if you are unable, Albert. Catalyst Cloud continue to have an interest in keeping this project active and maintained, and I am an early contributor/reviewer of Adjutant codebase alongside Adrian Turjak in 2015/2016. cheers, Dale Smith On 12/07/22 08:10, Albert Braden wrote: Unfortunately I was not able to get permission to be Adjutant PTL. They didn't say no, but the decision makers are too busy to address the issue. As I settle into this new position, I am realizing that I don't have time to do it anyway, so I will have to regretfully agree to placing Adjutant on the "inactive" list. If circumstances change, I will ask about resurrecting the project. Albert On Friday, July 8, 2022, 07:14:19 PM EDT, Ghanshyam Mann wrote: ---- On Fri, 22 Apr 2022 15:53:37 -0500? Ghanshyam Mann wrote --- > Hi Braden, > > Please let us know about the status of your company's permission to maintain the project. 
> As we are in Zed cycle development and there is no one to maintain/lead this project we > need to start thinking about the next steps mentioned in the leaderless project etherpad > Hi Braden, We have not heard back from you on whether you can help in maintaining Adjutant. As it has no PTL and no patches for the last 250 days, I am adding it to the 'Inactive' project list - https://review.opendev.org/c/openstack/governance/+/849153/1 -gmann > - https://etherpad.opendev.org/p/zed-leaderless > > -gmann > > ---- On Wed, 09 Mar 2022 08:58:37 -0600 Albert Braden wrote ---- > > I'm still waiting for permission to work on Adjutant. My contract ends this month and I'm taking 2 months off before I start fulltime. I have hope that permission will be granted while I'm out. I expect that I will be able to start working on Adjutant in June. > > On Saturday, March 5, 2022, 01:32:13 PM EST, Slawek Kaplonski wrote: > > > > Hi, > > > > After the last PTL elections [1] the Adjutant project doesn't have a PTL. It also didn't have a PTL in the Yoga cycle. > > So this is a call for maintainers for Adjutant. If you are using it or are interested in it, and if you are willing to help maintain this project, please contact TC members through this mailing list or directly on the #openstack-tc channel @OFTC. We can discuss the possibility of making someone PTL of the project, or moving the project to the Distributed Project Leadership [2] model. > > > > [1] http://lists.openstack.org/pipermail/openstack-discuss/2022-February/027411.html > > [2] https://governance.openstack.org/tc/resolutions/20200803-distributed-project-leadership.html > > > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat 
> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jul 14 21:45:48 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 14 Jul 2022 21:45:48 +0000 Subject: [tc] August 2022 OpenInfra Board Sync In-Reply-To: <20220630142207.rwtyc3apyhd2gyjv@yuggoth.org> References: <20220630142207.rwtyc3apyhd2gyjv@yuggoth.org> Message-ID: <20220714214548.ywqabxb2gg25qfb2@yuggoth.org> On 2022-06-30 14:22:08 +0000 (+0000), Jeremy Stanley wrote: [...] > I've also created a Framadate poll in order to identify a few > preferred dates and times. Responses must be submitted by Friday, > 2022-07-15, in order to provide the board members with time to pick > from the available options. The poll can be found here: > https://framadate.org/atdFRM8YeUtauSgC [...] If you're interested in participating, please remember to mark your preferred/available times at the above URL. I plan on closing it tomorrow so I can try to pick a few convenient dates and times to suggest to the OpenInfra Board of Directors. Thanks again! -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From mdulko at redhat.com Fri Jul 15 09:17:48 2022 From: mdulko at redhat.com (Michał Dulko) Date: Fri, 15 Jul 2022 11:17:48 +0200 Subject: [kuryr] Proposal to clean up Kuryr core reviewers In-Reply-To: References: Message-ID: On Thu, 2022-07-14 at 18:31 +0200, Maysa De Macedo Souza wrote: > Hello, > > We went through the list of current Kuryr core reviewers at the last > PTG session and we noticed a couple of people who are not active > Kuryr contributors anymore. I would like to propose removing the following > contributors from the Kuryr core team: > > - Irena Berezovsky > - Gal Sagie > - Liping Mao > > I take this opportunity to thank all of them for their contributions > to the Kuryr project. 
> > I will wait one week for any feedback before proceeding with the > removal. Thanks for raising this, Maysa, I believe it makes sense to keep the list of core reviewers up to date. > Thank you, > Maysa Macedo > From zigo at debian.org Fri Jul 15 09:56:25 2022 From: zigo at debian.org (Thomas Goirand) Date: Fri, 15 Jul 2022 11:56:25 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: On 7/14/22 16:01, Sean Mooney wrote: > do we currently have 3.11 aviable in any of the ci images? i belive we have 22.04 image aviable is it installbale there > or do we have debian bookworm images we can use to add a non voting tox py311 job to the relevent project repos? Hi, Currently, we only have Python 3.11 beta 4 (i.e. 3.11.0~b4-1) available in Debian Unstable. It won't be available in Bookworm until the python 3.10 -> 3.11 transition is over in Debian Unstable. During this process, Python 3.11 will only be an available Python version, but not the default. It will then become the default Python 3, and then Python 3.10 will be removed from Unstable. Only THEN will Python 3.11 fully be the Bookworm version. This will probably take a few months. FYI, I know very well that the patches will be done on a best-effort basis only. I'm fine with that, I'm used to discussing it with the community, and to doing backports of patches that land in master. My mail was just a call to the community so that we keep in mind that it's coming. I have no idea what the breakages will be (yet), but I'm looking forward to figuring it out. Over the years, I kind of have fun doing so, even if I still think breaking the world every few months is a terrible idea. 
In a more general way, I am convinced that it's always best for all of us if we can find a way to test with the latest everything, including the interpreter. Waiting for Ubuntu to have the latest interpreter is IMO broken by design, because the Python version transition always happens in Debian Unstable first (and is made by the same person who maintains the Python interpreter in both Debian and Ubuntu: Matthias Klose, aka doko). Not only for the interpreter, if we could find a way to test things in Debian Unstable, always, as non-voting jobs, we would see the failures early. I'd love it if we had such a non-voting job, one that would also use the latest packages from PyPI, just so that we could at least know what will happen in the near future. Your thoughts everyone? Cheers, Thomas Goirand (zigo) From zigo at debian.org Fri Jul 15 10:05:31 2022 From: zigo at debian.org (Thomas Goirand) Date: Fri, 15 Jul 2022 12:05:31 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <20220714143048.gxznifh7oeaaqldi@yuggoth.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> Message-ID: <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> On 7/14/22 16:30, Jeremy Stanley wrote: > Let's be clear, what's being asked here is that OpenStack not just > test against the newest available Python release, but in fact to > continually test against pre-releases of the next Python while it's > still being developed. I'm not asking for that. :) All I'm asking is that when Python RC releases are out, and I report a bug, the community has the intention to fix it as early as possible, at least in master (and maybe help with backports if it's very tricky: I can manage trivial backporting by myself). That's enough for me, really (at least it has been enough in the past...). 
> We have a hard > enough time just keeping up with actual releases of Python which are > current at the time we start a development cycle. Yeah, though it'd be nice if we could have the latest interpreter in use in Unstable for a non-voting job, starting when the interpreter is released (or at least when the first RCs are out). We discussed this already, didn't we? I know it's a "would be nice thing" and that nobody has time to work on this... :/ Cheers, Thomas Goirand (zigo) From bence.romsics at gmail.com Fri Jul 15 10:51:21 2022 From: bence.romsics at gmail.com (Bence Romsics) Date: Fri, 15 Jul 2022 12:51:21 +0200 Subject: [neutron] change of API performance from Pike to Yoga In-Reply-To: References: Message-ID: Hi, Uploaded the same content to github for long-term storage: https://github.com/rubasov/neutron-rally -- Bence From fungi at yuggoth.org Fri Jul 15 11:50:49 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jul 2022 11:50:49 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: <20220715115049.3casmovqq3qgq2ag@yuggoth.org> On 2022-07-15 11:56:25 +0200 (+0200), Thomas Goirand wrote: [...] > Not only for the interpreter, if we could find a way to test things in > Debian Unstable, always, as non-voting jobs, we would see the failures > early. I'd love it if we had such a non-voting job, one that would also > use the latest packages from PyPI, just so that we could at least know > what will happen in the near future. > > Your thoughts everyone? My thought is that there is some irony in the timing of your question, since just yesterday[*] the OpenStack Technical Committee seems to have reached a consensus that CentOS is no longer a stable enough platform for pre-merge testing. 
Not necessarily anything to do with running OpenStack on it specifically, but just that generally it's the place where minimally-tested things go to see if there are any problems with them before they wind up in RHEL. That's a pretty close parallel to Debian's unstable/testing distributions as the place where problems get worked out before they get into the next stable release. Taking things from an OpenDev Collaboratory perspective, we've struggled to keep images for frequently-changing distros like Fedora, Gentoo, or OpenSUSE Tumbleweed working at all because things frequently change in ways that break our ability to build or boot those images. More static distributions like Debian stable, Ubuntu LTS, or CentOS (before it became Stream) have a much less frequent update cycle and so are far easier for us to plan for and stay on top of. [*] https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-14-15.00.log.html#l-31 -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From fungi at yuggoth.org Fri Jul 15 12:01:39 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 15 Jul 2022 12:01:39 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> Message-ID: <20220715120139.5vi563hsbznjimbk@yuggoth.org> On 2022-07-15 12:05:31 +0200 (+0200), Thomas Goirand wrote: [...] > All I'm asking is that when Python RC releases are out, and > I report a bug, the community has the intention to fix it as early > as possible, at least in master (and maybe help with backports if > it's very tricky: I can manage trivial backporting by myself). 
> That's enough for me, really (at least it has been enough in the > past...). [...] I don't think fixes have been refused in the past simply because they address a problem observed with a newer Python interpreter than we test on. Just be aware that when it comes to pre-merge testing of proposed changes for OpenStack, the time to decide what platforms and interpreter versions we'll test with is at the end of the previous cycle. For Zed that's Python 3.8 and 3.9, though projects are encouraged to also try 3.10 if they can (we got the ability to test that partway into the development cycle). We have to set these expectations before we begin work on a new version of OpenStack, so that we don't change our testing goals for developers while they're in the middle of trying to write the software. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From swogatpradhan22 at gmail.com Thu Jul 14 20:47:33 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Fri, 15 Jul 2022 02:17:33 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: I was facing a similar kind of issue. https://bugzilla.redhat.com/show_bug.cgi?id=2089442 Here is the solution that helped me fix it. Also make sure the CN that you will use is reachable from the undercloud (the script should maybe take care of it). Also please follow Mr. Tathe's mail to add the CN first. With regards Swogat Pradhan On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe wrote: > Hi Lokendra, > > The CN field is missing. Can you add that and generate the certificate > again. > > CN=ipaddress > > Also add dns.1=ipaddress under alt_names as a precaution. > > Vikarna > > On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, > wrote: > >> Hi Vikarna, >> Thanks for the inputs. >> I am not able to access any tabs in GUI. 
>> [image: image.png] >> >> to re-state, we are failing at the time of deployment at step4 : >> >> >> PLAY [External deployment step 4] >> ********************************************** >> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >> TASK | External deployment step 4 >> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >> OK | External deployment step 4 | undercloud -> localhost | result={ >> "changed": false, >> "msg": "Use --start-at-task 'External deployment step 4' to resume >> from this task" >> } >> [WARNING]: ('undercloud -> localhost', >> '525400ae-089b-870a-fab6-0000000000d7') >> missing from stats >> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >> INCLUDED | >> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >> | undercloud >> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >> TASK | Clean up legacy Cinder keystone catalog entries >> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:24.204562 | 2.48s >> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >> item={'service_name': 'cinderv3', 'service_type': 'volume'} >> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >> 0:11:26.122584 | 4.40s >> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >> TIMING | Clean up legacy Cinder 
keystone catalog entries | undercloud | >> 0:11:26.124296 | 4.40s >> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >> TASK | Manage Keystone resources for OpenStack services >> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >> TIMING | Manage Keystone resources for OpenStack services | undercloud | >> 0:11:26.169842 | 0.03s >> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >> TASK | Gather variables for each operating system >> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >> TIMING | tripleo_keystone_resources : Gather variables for each operating >> system | undercloud | 0:11:26.253383 | 0.04s >> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >> TASK | Create Keystone Admin resources >> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >> undercloud | 0:11:26.299608 | 0.03s >> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >> INCLUDED | >> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >> undercloud >> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >> TASK | Create default domain >> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >> OK | Create default domain | undercloud >> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >> TIMING | tripleo_keystone_resources : Create default domain | undercloud | >> 0:11:28.437360 | 2.09s >> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >> TASK | Create admin and service projects >> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >> TIMING | tripleo_keystone_resources : Create admin and service projects | >> undercloud | 0:11:28.483468 | 0.03s >> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >> INCLUDED | >> 
/usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >> undercloud >> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >> TASK | Async creation of Keystone project >> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >> CHANGED | Async creation of Keystone project | undercloud | item=admin >> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.238078 | 0.72s >> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >> CHANGED | Async creation of Keystone project | undercloud | item=service >> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.586587 | 1.06s >> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >> undercloud | 0:11:29.587916 | 1.07s >> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >> TASK | Check Keystone project status >> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >> WAITING | Check Keystone project status | undercloud | 30 retries left >> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >> OK | Check Keystone project status | undercloud | item=admin >> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.260666 | 5.66s >> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >> OK | Check Keystone project status | undercloud | item=service >> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.494729 | 5.89s >> 2022-07-13 21:35:36.406576 | 
525400ae-089b-870a-fab6-000000007306 | >> TIMING | tripleo_keystone_resources : Check Keystone project status | >> undercloud | 0:11:35.498771 | 5.89s >> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >> TASK | Create admin role >> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >> OK | Create admin role | undercloud >> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >> 0:11:37.725949 | 2.20s >> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >> TASK | Create _member_ role >> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >> SKIPPED | Create _member_ role | undercloud >> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >> 0:11:37.783369 | 0.04s >> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >> TASK | Create admin user >> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >> CHANGED | Create admin user | undercloud >> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >> 0:11:41.145472 | 3.34s >> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >> TASK | Assign admin role to admin project for admin user >> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >> OK | Assign admin role to admin project for admin user | undercloud >> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >> TIMING | tripleo_keystone_resources : Assign admin role to admin project >> for admin user | undercloud | 0:11:44.288848 | 3.13s >> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >> TASK | Assign _member_ role to admin project for admin user >> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >> 
SKIPPED | Assign _member_ role to admin project for admin user | undercloud >> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >> TIMING | tripleo_keystone_resources : Assign _member_ role to admin project >> for admin user | undercloud | 0:11:44.346479 | 0.04s >> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >> TASK | Create identity service >> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >> OK | Create identity service | undercloud >> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >> TIMING | tripleo_keystone_resources : Create identity service | undercloud >> | 0:11:46.022362 | 1.66s >> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >> TASK | Create identity public endpoint >> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >> OK | Create identity public endpoint | undercloud >> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >> TIMING | tripleo_keystone_resources : Create identity public endpoint | >> undercloud | 0:11:48.233349 | 2.19s >> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >> TASK | Create identity internal endpoint >> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >> FATAL | Create identity internal endpoint | undercloud | error={"changed": >> false, "extra_data": {"data": null, "details": "The request you have made >> requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >> The request you have made requires authentication."} >> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >> undercloud | 0:11:50.660654 | 2.41s >> >> PLAY RECAP >> 
********************************************************************* >> localhost : ok=1 changed=0 unreachable=0 >> failed=0 skipped=2 rescued=0 ignored=0 >> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >> failed=0 skipped=214 rescued=0 ignored=0 >> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >> failed=0 skipped=198 rescued=0 ignored=0 >> undercloud : ok=39 changed=7 unreachable=0 >> failed=1 skipped=6 rescued=0 ignored=0 >> >> Also : >> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >> [req] >> default_bits = 2048 >> prompt = no >> default_md = sha256 >> distinguished_name = dn >> [dn] >> C=IN >> ST=UTTAR PRADESH >> L=NOIDA >> O=HSC >> OU=HSC >> emailAddress=demo at demo.com >> >> v3.ext: >> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >> authorityKeyIdentifier=keyid,issuer >> basicConstraints=CA:FALSE >> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >> dataEncipherment >> subjectAltName = @alt_names >> [alt_names] >> IP.1=fd00:fd00:fd00:9900::81 >> >> Using these files we create other certificates. >> Please check and let me know in case we need anything else. >> >> >> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe >> wrote: >> >>> Hi Lokendra, >>> >>> Are you able to access all the tabs in the OpenStack dashboard without >>> any error? If not, please retry generating the certificate. Also, share the >>> openssl.cnf or server.cnf. >>> >>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Team, >>>> Any input on this case raised. 
>>>> >>>> Thanks, >>>> Lokendra >>>> >>>> >>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Shephard/Swogat, >>>>> I tried changing the setting as suggested and it looks like it has >>>>> failed at step 4 with error: >>>>> >>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>>> 0:24:47.736198 | 2.21s >>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>> TASK | Create identity internal endpoint >>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>> FATAL | Create identity internal endpoint | undercloud | >>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>> request you have made requires authentication.", "response": >>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>> The request you have made requires authentication."} >>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>> >>>>> >>>>> Checking further the endpoint list: >>>>> I see only one endpoint for keystone is gettin created. 
>>>>> >>>>> DeprecationWarning >>>>> >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> | ID | Region | Service Name | >>>>> Service Type | Enabled | Interface | URL >>>>> | >>>>> >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>> identity | True | admin | http://30.30.30.173:35357 >>>>> | >>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>> | >>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>> | >>>>> >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>> >>>>> >>>>> it looks like something related to the SSL, we have also verified that >>>>> the GUI login screen shows that Certificates are applied. >>>>> exploring more in logs, meanwhile any suggestions or know observation >>>>> would be of great help. >>>>> thanks again for the support. >>>>> >>>>> Best Regards, >>>>> Lokendra >>>>> >>>>> >>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> I had faced a similar kind of issue, for ip based setup you need to >>>>>> specify the domain name as the ip that you are going to use, this error is >>>>>> showing up because the ssl is ip based but the fqdns seems to be >>>>>> undercloud.com or overcloud.example.com. >>>>>> I think for undercloud you can change the undercloud.conf. >>>>>> >>>>>> And will it work if we specify clouddomain parameter to the IP >>>>>> address for overcloud? 
because it seems he has not specified the >>>>>> clouddomain parameter and overcloud.example.com is the default >>>>>> domain for overcloud.example.com. >>>>>> >>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> What is the domain name you have specified in the undercloud.conf >>>>>>> file? >>>>>>> And what is the fqdn name used for the generation of the SSL cert? >>>>>>> >>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Team, >>>>>>>> We were trying to install overcloud with SSL enabled for which the >>>>>>>> UC is installed, but OC install is getting failed at step 4: >>>>>>>> >>>>>>>> ERROR >>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>> server_hostname)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>> (most recent call last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>> send\n timeout=timeout\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most 
recent call >>>>>>>> last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>> last):\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>> authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>> authenticated=authenticated)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>> in get\n return self.request(url, 
'GET', **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>> in _send_request\n raise >>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>> run_globals)\n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 185, in \n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 181, in main\n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>> line 
407, in __call__\n File >>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>> line 141, in run\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>> File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>> max_version='3.latest')\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>> **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>> **kwargs)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>> self._do_create_plugin(session)\n File >>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>> retries exceeded with url: / (Caused by >>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:01.271914 | 2.47s >>>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:01.273659 | 2.47s >>>>>>>> >>>>>>>> PLAY RECAP >>>>>>>> ********************************************************************* >>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>> failed=0 
skipped=198 rescued=0 ignored=0 >>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total >>>>>>>> Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>> >>>>>>>> >>>>>>>> in the deploy.sh: >>>>>>>> >>>>>>>> openstack overcloud deploy --templates \ >>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>> --networks-file /home/stack/templates/custom_network_data.yaml \ >>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>> --baremetal-deployment >>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>> --network-config \ >>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>> \ >>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>> \ >>>>>>>> -e >>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>> -e >>>>>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>> >>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>> modifications: >>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>> Passed as is in the defaults. >>>>>>>> enable-tls.yaml: >>>>>>>> >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # This file was created automatically by the sample environment >>>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>> instead >>>>>>>> # of the original, if any customizations are needed. >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>> # description: | >>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>> deployments. >>>>>>>> # For these values to take effect, one of the tls-endpoints-*.yaml >>>>>>>> # environments must also be used. >>>>>>>> parameter_defaults: >>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>> # Type: boolean >>>>>>>> HorizonSecureCookies: True >>>>>>>> >>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>> services in the public network. >>>>>>>> # Type: string >>>>>>>> PublicTLSCAFile: >>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>> >>>>>>>> # The content of the SSL certificate (without Key) in PEM format. >>>>>>>> # Type: string >>>>>>>> SSLRootCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> >>>>>>>> SSLCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> # The content of an SSL intermediate CA certificate in PEM format. 
>>>>>>>> # Type: string >>>>>>>> SSLIntermediateCertificate: '' >>>>>>>> >>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>> # Type: string >>>>>>>> SSLKey: | >>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END PRIVATE KEY----- >>>>>>>> >>>>>>>> # ****************************************************** >>>>>>>> # Static parameters - these are values that must be >>>>>>>> # included in the environment but should not be changed. >>>>>>>> # ****************************************************** >>>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>>> controller. >>>>>>>> # Type: string >>>>>>>> DeployedSSLCertificatePath: >>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>> >>>>>>>> # ********************* >>>>>>>> # End static parameters >>>>>>>> # ********************* >>>>>>>> >>>>>>>> inject-trust-anchor.yaml >>>>>>>> >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # This file was created automatically by the sample environment >>>>>>>> # generator. Developers should use `tox -e genconfig` to update it. >>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>> instead >>>>>>>> # of the original, if any customizations are needed. >>>>>>>> # >>>>>>>> ******************************************************************* >>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>> # description: | >>>>>>>> # When using an SSL certificate signed by a CA that is not in the >>>>>>>> default >>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>> certificate to >>>>>>>> # the overcloud nodes. >>>>>>>> parameter_defaults: >>>>>>>> # The content of a CA's SSL certificate file in PEM format. This >>>>>>>> is evaluated on the client side. >>>>>>>> # Mandatory. This parameter must be set by the user. 
>>>>>>>> # Type: string >>>>>>>> SSLRootCertificate: | >>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>> -----END CERTIFICATE----- >>>>>>>> >>>>>>>> resource_registry: >>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> The procedure to create such files was followed using: >>>>>>>> Deploying with SSL ? TripleO 3.0.0 documentation (openstack.org) >>>>>>>> >>>>>>>> >>>>>>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed >>>>>>>> IP-based certificate, without DNS. * >>>>>>>> >>>>>>>> Any idea around this error would be of great help. >>>>>>>> >>>>>>>> -- >>>>>>>> skype: lokendrarathour >>>>>>>> >>>>>>>> >>>>>>>> >>>>> >>>>> >>>>> >>>> >>>> -- >>>> >>> >> >> -- >> ~ Lokendra >> skype: lokendrarathour >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From radoslaw.piliszek at gmail.com Fri Jul 15 12:44:22 2022 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Fri, 15 Jul 2022 14:44:22 +0200 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: > Not only for the interpreter, if we could find a way to test things in > Debian Unstable, always, as non-voting jobs, we would see the failures > early. I'd love we he had such a non-voting job, that would also use the > latest packages from PyPi, just so that we could at least know what will > happen in a near future. 
Well, we can have periodic and experimental, master-only jobs to test things on Debian unstable because it's always interesting to see the upcoming breakage (or better yet - be able to pinpoint it to a certain change happening in Debian unstable that caused it). The job would only utilise the interpreter and the helper binaries (like ovs) - all targets I can think of are capable only of using pip-installed deps and not Debian packages so that part we cannot really cover reliably at all. If that makes sense to you, we can work towards that direction. The biggest issue will still be the bootability/usability of the infra image though. -yoctozepto From hberaud at redhat.com Fri Jul 15 13:52:15 2022 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jul 2022 15:52:15 +0200 Subject: Propose to add Takashi Kajinami as Oslo core reviewer In-Reply-To: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> References: <42c2a184499470bdaa62a16b5f59def2a59e08dd.camel@redhat.com> Message-ID: Hello, Thank you to all the people who shared feedback. Because we haven't heard any objections for more than a week, I'll move forward to add Takashi to oslo core. Thank you, Takashi, again for your continuous great work! Cheers! Le mar. 12 juil. 2022 ? 17:50, Stephen Finucane a ?crit : > On Thu, 2022-06-30 at 15:39 +0200, Herve Beraud wrote: > > Hello everybody, > > It is my pleasure to propose Takashi Kajinami (tkajinam) as a new member > of the oslo core team. > > During the last months Takashi has been a significant contributor to the > oslo projects. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > > +1 from me. It would be great to have tkajinam onboard. > > Stephen > -- Herv? Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From hberaud at redhat.com Fri Jul 15 13:53:09 2022 From: hberaud at redhat.com (Herve Beraud) Date: Fri, 15 Jul 2022 15:53:09 +0200 Subject: Propose to add Tobias Urdin as Tooz core reviewer In-Reply-To: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com> References: <573e57d95ca7553239e576f8b41f07b006dab513.camel@redhat.com> Message-ID: Hello, Thank you to all the people who shared feedback. Because we haven't heard any objections for more than a week, I'll move forward and add Tobias to the Tooz core members. Thank you, Tobias, again for your continuous great work! Cheers! On Tue, Jul 12, 2022 at 17:50, Stephen Finucane wrote: > On Thu, 2022-06-30 at 15:43 +0200, Herve Beraud wrote: > > Hello everybody, > > It is my pleasure to propose Tobias Urdin (tobias-urdin) as a new member > of the Tooz project core team. > > During the last months Tobias has been a significant contributor to the > Tooz project. > > Obviously we think he'd make a good addition to the core team. If there > are no objections, I'll make that happen in a week. > > Thanks. > > > +1 from me! > > Stephen > -- Hervé Beraud Senior Software Engineer at Red Hat irc: hberaud https://github.com/4383/ https://twitter.com/4383hberaud -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Fri Jul 15 14:14:04 2022 From: smooney at redhat.com (Sean Mooney) Date: Fri, 15 Jul 2022 15:14:04 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> Message-ID: <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> On Fri, 2022-07-15 at 14:44 +0200, Radosław Piliszek wrote: > On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: > > Not only for the interpreter, if we could find a way to test things in > > Debian Unstable, always, as non-voting jobs, we would see the failures > > early.
I'd love we he had such a non-voting job, that would also use the > > latest packages from PyPi, just so that we could at least know what will > > happen in a near future. > > Well, we can have periodic and experimental, master-only jobs to test > things on Debian unstable because it's always interesting to see the > upcoming breakage (or better yet - be able to pinpoint it to a certain > change happening in Debian unstable that caused it). > For the Nova CI we have moved the CentOS 9 Stream jobs to the periodic-weekly pipeline and we are going to monitor them in our weekly meeting. I don't really have any objection to adding a Debian testing or Debian stable based job there too, be it in the form of a Tempest job or a tox job. We don't really have the capacity, either in review bandwidth or CI resources, to have unstable distros in a voting or non-voting capacity on every patch, i.e. in the check and gate pipelines, but a weekly periodic job is probably doable. One thing we have to be careful of, however, is a concern I raised last cycle with extending 3.6 support. Some of the features deprecated in 3.6 were dropped in 3.10, and more features that were deprecated in later releases are dropped in 3.11. While we can try to support the newer releases, 3.9 compatibility will be needed for a long time. So if 3.11 or 3.12 is fundamentally incompatible with 3.9 due to a library change etc., that will be problematic, since CentOS 9 derived distros will be on 3.9 for several years to come. I don't actually know the exact lifecycle rules for how often new CentOS/RHEL major releases happen, but it's usually around the .6 release of the current major version that the .0 release of the next one happens, so every 5 years or so. I really don't know the plans for RHEL 10, but for the RHEL 9/CentOS 9 lifetime 3.9 will be the Python version used, so we need to balance fast-moving distros like Debian and slow-moving distros like CentOS and enable both.
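As a generic illustration of the kind of interpreter-straddling shim this implies (a sketch with made-up names, not code from any OpenStack project), code can resolve a removed stdlib helper once at import time:

```python
import inspect

def greet(name, punctuation="!"):
    return "hello " + name + punctuation

# inspect.getargspec() was deprecated long ago and finally removed in
# Python 3.11, while inspect.getfullargspec() has existed since 3.3.
# Resolving the helper once keeps a single code path working on both
# old and new interpreters.
try:
    get_spec = inspect.getfullargspec
except AttributeError:  # only on interpreters older than 3.3
    get_spec = inspect.getargspec

arg_names = get_spec(greet).args  # ['name', 'punctuation']
```

The same pattern covers most stdlib removals: resolve the helper once, then the rest of the module stays version-agnostic.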
That might mean we will have to drop some deps, avoid some features, or utilise compatibility libs in some cases, similar to six or the mock lib vs unittest.mock. So when fixing any 3.11 incompatibility we still need to maintain 3.8 compatibility for Zed and 3.9 compatibility in AA+. I would guess it will be the CC or DD release before we could consider dropping 3.9 support, but I think we could drop 3.8 in AA. > The job would > only utilise the interpreter and the helper binaries (like ovs) - all > targets I can think of are capable only of using pip-installed deps > and not Debian packages so that part we cannot really cover reliably > at all. If that makes sense to you, we can work towards that > direction. The biggest issue will still be the bootability/usability > of the infra image though. > > -yoctozepto > From katonalala at gmail.com Fri Jul 15 14:59:54 2022 From: katonalala at gmail.com (Lajos Katona) Date: Fri, 15 Jul 2022 16:59:54 +0200 Subject: [neutron] change of API performance from Pike to Yoga In-Reply-To: References: Message-ID: Thanks Bence, Really appreciated. On Fri, Jul 15, 2022 at 13:02, Bence Romsics wrote: > Hi, > > Uploaded the same content to github for long term storage: > > https://github.com/rubasov/neutron-rally > > -- > Bence > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From elod.illes at est.tech Fri Jul 15 15:09:35 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Fri, 15 Jul 2022 17:09:35 +0200 Subject: [release] Release countdown for week R-11, Jul 18 - 22 Message-ID: <6129e077-53f0-37c0-0085-76d2a44a286d@est.tech> Development Focus ----------------- We are now past the Zed-2 milestone, and entering the last development phase of the cycle. Teams should be focused on implementing planned work for the cycle.
Now is a good time to review those plans and reprioritize anything if needed based on what progress has been made and what looks realistic to complete in the next few weeks. General Information ------------------- Looking ahead to the end of the release cycle, please be aware of the feature freeze dates. Those vary depending on deliverable type: * General libraries (except client libraries) need to have their last feature release before Non-client library freeze (August 26th, 2022). Their stable branches are cut early. * Client libraries (think python-*client libraries) need to have their last feature release before Client library freeze (September 1st, 2022) * Deliverables following a cycle-with-rc model (that would be most services) observe a Feature freeze on that same date, September 1st, 2022. Any feature addition beyond that date should be discussed on the mailing-list and get PTL approval. After feature freeze, cycle-with-rc deliverables need to produce a first release candidate (and a stable branch) before RC1 deadline (September 12th, 2022) * Deliverables following cycle-with-intermediary model can release as necessary, but in all cases before Final RC deadline (September 29th, 2022) Upcoming Deadlines & Dates -------------------------- Non-client library freeze: August 26th, 2022 (R-6 week) Client library freeze: September 1st, 2022 (R-5 week) Zed-3 milestone: September 1st, 2022 (R-5 week) Előd Illés irc: elodilles From gmann at ghanshyammann.com Fri Jul 15 17:34:33 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Fri, 15 Jul 2022 12:34:33 -0500 Subject: [all][tc] What's happening in Technical Committee: summary 15 July 2022: Reading: 5 min Message-ID: <18202ed28dd.11ffe868e170056.2005547301817008828@ghanshyammann.com> Hello Everyone, Here is this week's summary of the Technical Committee activities. 1. TC Meetings: ============ * We had this week's meeting on 14 July.
Most of the meeting discussions are summarized in this email. Meeting full logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-14-15.00.log.html * Next TC weekly meeting will be on 21 July Thursday at 15:00 UTC, feel free to add the topic on the agenda[1] by 20 July. 2. What we completed this week: ========================= * We continued the CentOS Stream testing discussion and, considering all the points, we agreed to make the CentOS Stream jobs periodic but keep it in the testing runtime. We will monitor, debug, and report failures to the CentOS Stream team. 3. Activities In progress: ================== TC Tracker for Zed cycle ------------------------------ * The Zed tracker etherpad includes the TC working items[2]; two are completed and the other items are in progress. Open Reviews ----------------- * Seven open reviews for ongoing activities[3]. Consistent and Secure Default RBAC ------------------------------------------- We have a good amount of discussion and review in the goal document updates[4] and I have updated the patch by resolving the review comments. 2021 User Survey TC Question Analysis ----------------------------------------------- No update on this. The survey summary is up for review[5]. Feel free to check and provide feedback. Zed cycle Leaderless projects ---------------------------------- Dale Smith volunteered to be PTL for the Adjutant project [6] Fixing Zuul config error ---------------------------- Requesting projects with Zuul config errors to look into and fix them, which should not take much time[7][8]. Project updates ------------------- * Add charmed k8s operators to OpenStack Charms[9] * Adding Skyline as Emerging Technology[10] 4. How to contact the TC: ==================== If you would like to discuss or give feedback to TC, you can reach out to us in multiple ways: 1. Email: you can send the email with tag [tc] on openstack-discuss ML[11]. 2.
Weekly meeting: The Technical Committee conducts a weekly meeting every Thursday at 15:00 UTC [12] 3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel. [1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/847413 [3] https://review.opendev.org/q/projects:openstack/governance+status:open [4] https://review.opendev.org/c/openstack/governance/+/847418 [5] https://review.opendev.org/c/openstack/governance/+/836888 [6] https://review.opendev.org/c/openstack/governance/+/849606 [7] https://etherpad.opendev.org/p/zuul-config-error-openstack [8] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html [9] https://review.opendev.org/c/openstack/governance/+/849997 [10] https://review.opendev.org/c/openstack/governance/+/849155 [11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From kdhall at binghamton.edu Fri Jul 15 21:10:53 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Fri, 15 Jul 2022 17:10:53 -0400 Subject: [OpenStack-Ansible][Nova] OSA install Yoga on Debian Bullseye Backports Message-ID: Hello, I've just worked most of the way through a fresh install of Yoga on 5 Debian Bullseye systems. All systems are updated to include the Bullseye backports. The documentation doesn't mention backports, but I always install Debian this way - almost without thinking. The specific config is based on openstack-user-config.yaml.prod.example with 3 infrastructure hosts, two compute hosts, and cinder/glance running on the infrastructure hosts as in the prod example file. First, one surprise: For the version of GlusterFS in Bullseye Backports, /usr/sbin/gluster has been moved to a separate package - glusterfs-cli. I installed this manually in the repo containers to get through setup-infrastructure.yml.
In setup-openstack.yml, I'm stopped at "TASK [os_nova : Run nova-status upgrade check to validate a healthy configuration]". "nova-status upgrade check" is failing. "nova-manage cell_v2 list_hosts" is not showing any hosts. Oh, and there are warnings about eventlet monkey patching and urllib3. So I'm not quite sure how to dig into this. The nova-api container seems to be running on the infra hosts, and nova-compute.service is up on both compute hosts, although there are warnings about "Timed out waiting for nova-conductor". The nova-api containers are able to ping the compute hosts on br-mgmt. I do have to wonder if this has anything to do with being upgraded to backports. Any hints on how to analyse this (or how to fix it)? Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Jul 15 21:35:12 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 15 Jul 2022 14:35:12 -0700 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <8a89850f144f27e82dddc33b61087c9c74caead6.camel@redhat.com> Message-ID: <06bc311c-3b7d-4ee5-bb2b-abf0ba8b516c@www.fastmail.com> On Fri, Jul 15, 2022, at 7:14 AM, Sean Mooney wrote: > On Fri, 2022-07-15 at 14:44 +0200, Radosław Piliszek wrote: >> On Fri, 15 Jul 2022 at 11:59, Thomas Goirand wrote: >> > Not only for the interpreter, if we could find a way to test things in >> > Debian Unstable, always, as non-voting jobs, we would see the failures >> > early. I'd love we he had such a non-voting job, that would also use the >> > latest packages from PyPi, just so that we could at least know what will >> > happen in a near future.
>> >> Well, we can have periodic and experimental, master-only jobs to test >> things on Debian unstable because it's always interesting to see the >> upcoming breakage (or better yet - be able to pinpoint it to a certain >> change happening in Debian unstable that caused it). >> > for the nova ci we have moved centos 9 stream jobs to the > periodic-weekly pipeline and > we are going to monitor it in our weekly meeting. > i dont really have any objection to adding a debian testing or debian > stable based job there too. > be it in the form of a tempest job or tox job. > > we dont really have the capsity either in review bandwith or ci resouce > to have enstable > distros in a voting or non voting capsity on every patch i.e. check and > gate pipelines. > but a weekly periodic its proably doable. > > > one thing we have to be carful of however is a concern i raised last > cycle with extending 3.6 support. > > some of the 3.6 deprecated fature were drop in 3.10 and more feature > that were depfreced in later releases > are droped in 3.11. while we can try and support the newr releases 3.9 > compatiable will be needed for a long > time. so if 3.11 or 3.12 is fundementaly in compatibel with 3.9 due to > a libary change ectra that will be problematic > sicne cento 9 derived distro will be on 3.9 for several years to come. Removals for 3.10 are captured here: https://docs.python.org/3/whatsnew/3.10.html#removed and for 3.11 at https://docs.python.org/3.11/whatsnew/3.11.html#removed if anyone is curious. Considering the number of projects that appear to be currently running python3.8 and 3.10 job successfully, the major problem here is likely going to be if/when our dependencies decide to stop supporting older pythons. Often times we can get away with simply having different requirements for different versions of python, but that may not always work for each dependency. 
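For instance, PEP 508 environment markers let a single requirements file carry per-interpreter pins; the package name below is a placeholder, not an actual OpenStack dependency:

```
# hypothetical pins -- "somelib" is a placeholder
somelib>=3.0; python_version >= "3.10"
somelib>=2.0,<3.0; python_version < "3.10"
```

pip evaluates the markers at install time, so each interpreter resolves only the line that applies to it.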
> > i dont actully know the exact lifecycle uels for how often new > centos/rhel majory reseas happen but its usuarly aroud the .6 release > of the current > release that the .0 release of the next majory version happen so every > 5 years or so. i really dont know the plans for rhel 10 but > for rhel9/centos 9 lifetime 3.9 will be the python version used so we > need to balnce fast moving distors like debian and slow moving distors > like > centos and enable both. that might mean we will ahve to drop some deps, > avoid some features or utilise compatablity libs in some cases similar > to > six or mock the lib vs unittest.mock > > so when fixing any 3.11 incompatiablity we need to still maintain 3.8 > compatiablity for zed and 3.9 compatiblity in AA+ > > i woudl guess it will be the CC or DD release before we coudl consdier > droping 3.9 suppot but i think we could drop 3.8 in AA. Python 3.9 EOL is October 2025. With 3.6 we saw many libraries maintain compatibility until the EOL for that version. I'm hopeful that trend will continue and this will largely be a non issue for us. > > >> The job would >> only utilise the interpreter and the helper binaries (like ovs) - all >> targets I can think of are capable only of using pip-installed deps >> and not Debian packages so that part we cannot really cover reliably >> at all. If that makes sense to you, we can work towards that >> direction. The biggest issue will still be the bootability/usability >> of the infra image though. >> >> -yoctozepto >> From the.wade.albright at gmail.com Fri Jul 15 21:35:57 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Fri, 15 Jul 2022 14:35:57 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node Message-ID: Hi, I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. 
Or if the problem is unique to my setup. I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. Now I need to change the redfish password on the host. So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password. Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after a while it still didn't work. I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either. Only a conductor restart seems to make the new password work. We are running Xena, using rpm installation on Oracle Linux 8.5. Thanks in advance for any help with this issue. -------------- next part -------------- An HTML attachment was scrubbed... URL: From eblock at nde.ag Fri Jul 15 22:23:05 2022 From: eblock at nde.ag (Eugen Block) Date: Fri, 15 Jul 2022 22:23:05 +0000 Subject: [placement] running out of VCPU resource In-Reply-To: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com> References: <7dbedb95-0af7-97dc-3f76-bb308aaf52f2@redge.com> Message-ID: <20220715222305.Horde.rJpH9KqD93oQ0aAofrnqtPq@webmail.nde.ag> Hi, we were facing the same issue and my colleague tracked it down: https://serverfault.com/questions/1064579/openstack-only-building-one-vm-per-machine-in-cluster-then-runs-out-of-resource We have a customized fix for it which works nicely, but it would surely help to get this fixed in general as it seems to be a recurring issue. Quoting Przemyslaw Basa: > Hi, > > Well i think i figured it out.
Following Xena deployment > instructions mariadb was installed in version 10.6.5 and it seems to > be some kind of bug in this version. Upgrading to 10.6.8 fixed this > particular issue for me. > > I've checked some older and newer versions (10.5.6, 10.8.3) and > problematic query behaves there like in 10.6.8. > > Here's how I've done my tests if someone is interested: > > % docker run --rm --detach --name mariadb-10.6.5 --env > MYSQL_ROOT_PASSWORD=test mariadb:10.6.5 > % docker run --rm --detach --name mariadb-10.6.8 --env > MYSQL_ROOT_PASSWORD=test mariadb:10.6.8 > > % docker exec -i mariadb-10.6.5 mysql -u root -ptest < tables_dump.sql > % docker exec -i mariadb-10.6.8 mysql -u root -ptest < tables_dump.sql > > % docker exec -i mariadb-10.6.5 mysql -u root -ptest -t test < test.sql > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > | id | uuid | generation | > resource_class_id | total | reserved | allocation_ratio | used | > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 0 | 128 | 0 | 2 | 13318 | > | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 1 | 1031723 | 2048 | 1 | NULL | > | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 2 | 901965 | 2 | 1 | NULL | > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > > % docker exec -i mariadb-10.6.8 mysql -u root -ptest -t test < test.sql > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > | id | uuid | generation | > resource_class_id | total | reserved | allocation_ratio | used | > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > | 5 | 
16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 0 | 128 | 0 | 2 | 5 | > | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 1 | 1031723 | 2048 | 1 | 13312 | > | 5 | 16f620c0-8c6f-4984-8d58-e2c00d1b32da | 50 | > 2 | 901965 | 2 | 1 | 1 | > +----+--------------------------------------+------------+-------------------+---------+----------+------------------+-------+ > > % cat tables_dump.sql > create database test; > connect test; > > CREATE TABLE `allocations` ( > `created_at` datetime DEFAULT NULL, > `updated_at` datetime DEFAULT NULL, > `id` int(11) NOT NULL AUTO_INCREMENT, > `resource_provider_id` int(11) NOT NULL, > `consumer_id` varchar(36) NOT NULL, > `resource_class_id` int(11) NOT NULL, > `used` int(11) NOT NULL, > PRIMARY KEY (`id`), > KEY `allocations_resource_provider_class_used_idx` > (`resource_provider_id`,`resource_class_id`,`used`), > KEY `allocations_resource_class_id_idx` (`resource_class_id`), > KEY `allocations_consumer_id_idx` (`consumer_id`) > ) ENGINE=InnoDB AUTO_INCREMENT=547 DEFAULT CHARSET=utf8mb3; > > CREATE TABLE `inventories` ( > `created_at` datetime DEFAULT NULL, > `updated_at` datetime DEFAULT NULL, > `id` int(11) NOT NULL AUTO_INCREMENT, > `resource_provider_id` int(11) NOT NULL, > `resource_class_id` int(11) NOT NULL, > `total` int(11) NOT NULL, > `reserved` int(11) NOT NULL, > `min_unit` int(11) NOT NULL, > `max_unit` int(11) NOT NULL, > `step_size` int(11) NOT NULL, > `allocation_ratio` float NOT NULL, > PRIMARY KEY (`id`), > UNIQUE KEY `uniq_inventories0resource_provider_resource_class` > (`resource_provider_id`,`resource_class_id`), > KEY `inventories_resource_class_id_idx` (`resource_class_id`), > KEY `inventories_resource_provider_id_idx` (`resource_provider_id`), > KEY `inventories_resource_provider_resource_class_idx` > (`resource_provider_id`,`resource_class_id`) > ) ENGINE=InnoDB AUTO_INCREMENT=24 DEFAULT CHARSET=utf8mb3; > > CREATE TABLE `resource_providers` ( > `created_at` datetime DEFAULT NULL, > `updated_at` datetime 
DEFAULT NULL, > `id` int(11) NOT NULL AUTO_INCREMENT, > `uuid` varchar(36) NOT NULL, > `name` varchar(200) DEFAULT NULL, > `generation` int(11) DEFAULT NULL, > `root_provider_id` int(11) DEFAULT NULL, > `parent_provider_id` int(11) DEFAULT NULL, > PRIMARY KEY (`id`), > UNIQUE KEY `uniq_resource_providers0uuid` (`uuid`), > UNIQUE KEY `uniq_resource_providers0name` (`name`), > KEY `resource_providers_name_idx` (`name`), > KEY `resource_providers_parent_provider_id_idx` (`parent_provider_id`), > KEY `resource_providers_root_provider_id_idx` (`root_provider_id`), > KEY `resource_providers_uuid_idx` (`uuid`), > CONSTRAINT `resource_providers_ibfk_1` FOREIGN KEY > (`parent_provider_id`) REFERENCES `resource_providers` (`id`), > CONSTRAINT `resource_providers_ibfk_2` FOREIGN KEY > (`root_provider_id`) REFERENCES `resource_providers` (`id`) > ) ENGINE=InnoDB AUTO_INCREMENT=6 DEFAULT CHARSET=utf8mb3; > > INSERT INTO `allocations` VALUES ('2022-07-07 > 23:08:10',NULL,329,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',1,12288),('2022-07-07 23:08:10',NULL,332,5,'b6da8a02-a96c-464e-a6c4-19c96c83dd44',0,4),('2022-07-08 06:26:28',NULL,335,4,'aec7aaea-10df-451b-b2ce-847099ee0110',1,2048),('2022-07-08 06:26:28',NULL,338,4,'aec7aaea-10df-451b-b2ce-847099ee0110',0,2),('2022-07-12 08:53:21',NULL,400,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',1,16384),('2022-07-12 08:53:21',NULL,403,1,'29cf1131-1bb3-4f06-b339-930a4bb055d4',0,2),('2022-07-14 08:24:27',NULL,538,5,'9681447d-57ec-45c7-af48-63be3c7201da',2,1),('2022-07-14 08:24:27',NULL,541,5,'9681447d-57ec-45c7-af48-63be3c7201da',1,1024),('2022-07-14 > 08:24:27',NULL,544,5,'9681447d-57ec-45c7-af48-63be3c7201da',0,1); > INSERT INTO `resource_providers` VALUES ('2022-07-04 > 11:59:49','2022-07-13 > 13:03:08',1,'6ac81bb4-50ef-4784-8a64-9031afeaaa9d','p-os-compute01.openstack.local',50,1,NULL),('2022-07-04 12:00:49','2022-07-13 13:03:07',4,'a324b3b9-f8c8-4279-bf63-a27163fcf792','g-os-compute01.openstack.local',42,4,NULL),('2022-07-04 
12:03:57','2022-07-14 > 08:24:27',5,'16f620c0-8c6f-4984-8d58-e2c00d1b32da','t-os-compute01.openstack.local',50,5,NULL); > INSERT INTO `inventories` VALUES ('2022-07-04 11:59:50','2022-07-11 > 09:24:04',1,1,0,128,0,1,128,1,2),('2022-07-04 11:59:50','2022-07-11 > 09:24:04',4,1,1,1031723,2048,1,1031723,1,1),('2022-07-04 > 11:59:50','2022-07-11 > 09:24:04',7,1,2,901965,2,1,901965,1,1),('2022-07-04 > 12:01:53','2022-07-04 14:59:53',10,4,0,128,0,1,128,1,2),('2022-07-04 > 12:01:53','2022-07-04 > 14:59:53',13,4,1,1031723,2048,1,1031723,1,1),('2022-07-04 > 12:01:53','2022-07-04 > 14:59:53',16,4,2,901965,2,1,901965,1,1),('2022-07-04 > 12:03:57','2022-07-14 07:16:08',17,5,0,128,0,1,128,1,2),('2022-07-04 > 12:03:57','2022-07-14 > 07:09:11',20,5,1,1031723,2048,1,1031723,1,1),('2022-07-04 > 12:03:57','2022-07-14 07:09:11',23,5,2,901965,2,1,901965,1,1); > > % cat test.sql > SELECT > rp.id, rp.uuid, rp.generation, inv.resource_class_id, inv.total, > inv.reserved, inv.allocation_ratio, allocs.used > FROM > resource_providers AS rp > JOIN inventories AS inv ON rp.id = inv.resource_provider_id > LEFT JOIN ( > SELECT > resource_provider_id, resource_class_id, SUM(used) AS used > FROM > allocations > WHERE > resource_class_id IN (0, 1, 2) > AND resource_provider_id IN (5) > GROUP BY > resource_provider_id, resource_class_id > ) AS allocs ON > inv.resource_provider_id = allocs.resource_provider_id > AND inv.resource_class_id = allocs.resource_class_id > WHERE > rp.id IN (5) > AND inv.resource_class_id IN (0,1,2) > ; > > Regards, > Przemyslaw Basa From iurygregory at gmail.com Fri Jul 15 21:24:36 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Fri, 15 Jul 2022 22:24:36 +0100 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Hi Wade, If I understood correctly, you have a node already deployed and you want to change the redfish BMC password via Ironic, this is not possible. 
Ironic uses the credentials to access the machine and execute the actions necessary to provision it. If you want to change the credentials used to access the BMC, you need to access the machine directly and change them there; after that you can update the information in Ironic. On Fri, Jul 15, 2022 at 22:52, Wade Albright < the.wade.albright at gmail.com> wrote: > Hi, > > I'm hitting a problem when trying to update the redfish_password for an > existing node. I'm curious to know if anyone else has encountered this > problem. I'm not sure if I'm just doing something wrong or if there is a > bug. Or if the problem is unique to my setup. > > I have a node already added into ironic with all the driver details set, > and things are working fine. I am able to run deployments. > > Now I need to change the > password for redfish access on the host, then use an 'openstack baremetal > node set --driver-info redfish_password=' command to set > the new redfish_password. > > Once this has been done, deployment no longer works. I see redfish > authentication errors in the logs and the operation fails. I waited a bit > to see if there might just be a delay in updating the password, but after > awhile it still didn't work. > > I restarted the conductor, and after that things work fine again. So it > seems like the password is cached or something. Is there a way to force the > password to update? I even tried removing the redfish credentials and > re-adding them, but that didn't work either. Only a conductor restart seems > to make the new password work. > > We are running Xena, using rpm installation on Oracle Linux 8.5. > > Thanks in advance for any help with this issue.
> -- *Att[]'s* *Iury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Ironic PTL * *Senior Software Engineer at Red Hat Brazil* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Sat Jul 16 01:52:10 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 16 Jul 2022 01:52:10 +0000 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> Message-ID: <20220716015210.7pzcrwfyzcho6opc@yuggoth.org> On 2022-07-09 13:26:36 +0000 (+0000), Jeremy Stanley wrote: [...] > Apparently, ansible-runner currently depends[3] on python-daemon, > which still has a dependency on lockfile[4]. Our uses of > ansible-runner seem to be pretty much limited to TripleO > repositories (hence tagging them in the subject), so it's possible > they could find an alternative to it and solve this dilemma. > Optionally, we could try to help the ansible-runner or python-daemon > maintainers with new implementations of the problem dependencies as > a way out. [...] In the meantime, how does everyone feel about us going ahead and removing the "openstackci" account from the maintainers list for lockfile on PyPI? We haven't depended on it directly since 2015, and it didn't come back into our indirect dependency set until 2018 (presumably that's when TripleO started using ansible-runner?). The odds that we'll need to fix anything in it in the future are pretty small at this point, and if we do we're better off putting that effort into helping the ansible-runner or python-daemon maintainers move off of it instead. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From zigo at debian.org Sat Jul 16 09:12:16 2022 From: zigo at debian.org (Thomas Goirand) Date: Sat, 16 Jul 2022 11:12:16 +0200 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> Message-ID: Hi there! On 7/14/22 18:09, Dmitry Tantsur wrote: > Ironic was not too bad either: > https://review.opendev.org/c/openstack/ironic/+/849882 > > Similar for Nova: https://review.opendev.org/c/openstack/nova/+/849881 > Thanks, I was able to backport these fixes to the Yoga versions of Ironic and Nova, and uploaded them to Debian Unstable (currently I'm finishing the Ironic build; its unit tests were already passing). > > - sahara > > > > I'll try to see what I can do to fix these, maybe some of the > failures > > are unrelated (I haven't investigated yet). Only Sahara is missing a fix now. I tracked it down to this: https://github.com/openstack/sahara/blob/master/sahara/utils/api_validator.py#L177 the error being: TypeError: __init__() got an unexpected keyword argument 'types' Looks like "types" has gone away from the parent class. Does anyone know what's going on, and what the replacement is? I first thought it looks like "types" should really be "type_checker", so I tried that, but it didn't work...
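For what it's worth, jsonschema 3.0 replaced the `types` keyword argument with the `TypeChecker` API. A minimal sketch of the migration, not tested against Sahara itself; the `(list, tuple)` redefinition is an assumption about what the old `types=` argument contained:

```python
import jsonschema
from jsonschema import validators

def is_array(checker, instance):
    # Assumption: the old code passed types={"array": (list, tuple)}
    # so that JSON "array" also accepted Python tuples.
    return isinstance(instance, (list, tuple))

# Redefine "array" on the draft's default type checker, then build a
# validator class that uses it -- this replaces the removed types= kwarg.
type_checker = jsonschema.Draft4Validator.TYPE_CHECKER.redefine("array", is_array)
ApiValidator = validators.extend(jsonschema.Draft4Validator, type_checker=type_checker)
```

The extended class is then used in place of the plain draft validator; whether it drops straight into sahara/utils/api_validator.py is an assumption.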
Once Sahara is fixed, I'm done for all OpenStack packages, and only 2 other Debian packages will need a fix (but I filed Debian bugs against these packages, so we're good...). Cheers, Thomas Goirand (zigo) From the.wade.albright at gmail.com Sat Jul 16 02:26:59 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Fri, 15 Jul 2022 19:26:59 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Hi Iury, Thanks for the reply. I am not trying to use Ironic to change the BMC password. I am changing the password directly on the system, independently of Ironic. Then after that I change the password in Ironic. But it doesn't seem to update in Ironic and any operations on the node fail with a redfish authentication error. After restarting the conductor, node operations work again. On Fri, Jul 15, 2022 at 6:24 PM Iury Gregory wrote: > Hi Wade, > > If I understood correctly, you have a node already deployed and you want > to change the redfish BMC password via Ironic, this is not possible. Ironic > uses the credentials to access the machine and execute the necessary to > provision the machine. > If you want to change the credentials to access the BMC, you need to > directly access it and change in the machine, after that you can change > information in Ironic. > > > > > On Fri, Jul 15, 2022 at 22:52, Wade Albright < > the.wade.albright at gmail.com> wrote: > >> Hi, >> >> I'm hitting a problem when trying to update the redfish_password for an >> existing node. I'm curious to know if anyone else has encountered this >> problem. I'm not sure if I'm just doing something wrong or if there is a >> bug. Or if the problem is unique to my setup. >> >> I have a node already added into ironic with all the driver details set, >> and things are working fine. I am able to run deployments. >> >> Now I need to change the redfish password on the host. 
So I update the >> password for redfish access on the host, then use an 'openstack baremetal >> node set --driver-info redfish_password=' command to set >> the new redfish_password. >> >> Once this has been done, deployment no longer works. I see redfish >> authentication errors in the logs and the operation fails. I waited a bit >> to see if there might just be a delay in updating the password, but after >> awhile it still didn't work. >> >> I restarted the conductor, and after that things work fine again. So it >> seems like the password is cached or something. Is there a way to force the >> password to update? I even tried removing the redfish credentials and >> re-adding them, but that didn't work either. Only a conductor restart seems >> to make the new password work. >> >> We are running Xena, using rpm installation on Oracle Linux 8.5. >> >> Thanks in advance for any help with this issue. >> > > > -- > *Att[]'s* > > *Iury Gregory Melo Ferreira * > *MSc in Computer Science at UFCG* > *Ironic PTL * > *Senior Software Engineer at Red Hat Brazil* > *Social*: https://www.linkedin.com/in/iurygregory > *E-mail: iurygregory at gmail.com * > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmilan2006 at gmail.com Sat Jul 16 16:13:41 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Sat, 16 Jul 2022 21:43:41 +0530 Subject: Manila/NFS support for Zun Message-ID: Hi, I want to mount my Manila shares on containers managed by Zun. I can see a Fuxi project and driver for this but it is discontinued now. I want to have a shared file system to be mounted on multiple containers simultaneously; this is not possible with Cinder. Is there any alternative to Fuxi? Or can it work with the Yoga release? Please advise and give a suggestion. Regards, Vaibhav -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juliaashleykreger at gmail.com Sat Jul 16 23:36:22 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Sat, 16 Jul 2022 16:36:22 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Greetings! I believe you need two patches, one in ironic and one in sushy. Sushy: https://review.opendev.org/c/openstack/sushy/+/832860 Ironic: https://review.opendev.org/c/openstack/ironic/+/820588 I think it is a variation, and the comment about working after you restart the conductor is the big signal to me. I'm on a phone on a bad data connection, if you email me on Monday I can see what versions the fixes would be in. For the record, it is a session cache issue, the bug was that the service didn't quite know what to do when auth fails. -Julia On Fri, Jul 15, 2022 at 2:55 PM Wade Albright wrote: > Hi, > > I'm hitting a problem when trying to update the redfish_password for an > existing node. I'm curious to know if anyone else has encountered this > problem. I'm not sure if I'm just doing something wrong or if there is a > bug. Or if the problem is unique to my setup. > > I have a node already added into ironic with all the driver details set, > and things are working fine. I am able to run deployments. > > Now I need to change the redfish password on the host. So I update the > password for redfish access on the host, then use an 'openstack baremetal > node set --driver-info redfish_password=' command to set > the new redfish_password. > > Once this has been done, deployment no longer works. I see redfish > authentication errors in the logs and the operation fails. I waited a bit > to see if there might just be a delay in updating the password, but after > awhile it still didn't work. > > I restarted the conductor, and after that things work fine again. So it > seems like the password is cached or something. Is there a way to force the > password to update? 
I even tried removing the redfish credentials and > re-adding them, but that didn't work either. Only a conductor restart seems > to make the new password work. > > We are running Xena, using rpm installation on Oracle Linux 8.5. > > Thanks in advance for any help with this issue. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alsotoes at gmail.com Sun Jul 17 02:14:05 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Sat, 16 Jul 2022 21:14:05 -0500 Subject: You're invited to Open Infrastructure Days Mexico 2022 Message-ID: 15 Aug 2022 You're invited to Open Infrastructure Days Mexico 2022 Hello, We'd love for you to join us for Open Infrastructure Days Mexico 2022. Using the Eventee platform, you can view all event sessions, engage with speakers, and much more. We will also inform you about all event updates, so you will always have the latest information. Open Infrastructure Days Mexico 2022 is fully accessible from both desktop and native mobile app. https://openinfradays.mx https://eventee.co/en/e/open-infrastructure-days-mexico-2022-13797 Looking forward to seeing you there, OpenInfraCDMX community. OpenInfraGDL community. -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey.drozdov.dev at gmail.com Sun Jul 17 20:01:19 2022 From: sergey.drozdov.dev at gmail.com (Sergey Drozdov) Date: Sun, 17 Jul 2022 21:01:19 +0100 Subject: [dev] directions to the right project team Message-ID: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> To whom it may concern, I am relatively new to contributing to OpenStack and therefore am not sure who to direct this question to. 
I have the following issue. The firm I am currently working at is running OpenStack at scale with circa 6000 different projects. Whenever we try to access a projects tab via horizon we experience a timeout (accessing through the API works fine). To that end we were hoping to implement something akin to pagination or filtering. I was hoping to work on this and contribute the outcome to OpenStack. Is this something worthwhile and which group should I talk to; keystone, horizon or both? I apologise in advance if I have missed something. Best Regards, Sergey From noonedeadpunk at gmail.com Sun Jul 17 20:12:20 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Sun, 17 Jul 2022 22:12:20 +0200 Subject: [dev] directions to the right project team In-Reply-To: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: Just out of interest, have you tried using Skyline as a dashboard instead of the horizon? I believe keystone should already support pagination. The better question is what horizon uses as client - openstacksdk or python-keystoneclient. As I believe a lot of issues are solved with openstacksdk as of today, and switching to it also would match community goal. So imo that might be a good thing to contribute to :) On Sun, 17 Jul 2022 at 22:07, Sergey Drozdov wrote: > To whom it may concern, > > I am relatively new to contributing to OpenStack and therefore am not sure > who to direct this question to. I have the following issue. The firm I am > currently working at is running OpenStack at scale with circa 6000 different > projects. Whenever we try to access a projects tab via horizon we > experience a timeout (accessing through the API works fine). To that end we > were hoping to implement something akin to pagination or filtering. I was > hoping to work on this and contribute the outcome to OpenStack. Is this > something worthwhile and which group should I talk to; keystone, horizon > or both? 
I apologise in advance if I have missed something. > > Best Regards, > Sergey > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kdhall at binghamton.edu Mon Jul 18 01:20:12 2022 From: kdhall at binghamton.edu (Dave Hall) Date: Sun, 17 Jul 2022 21:20:12 -0400 Subject: [OpenStack-Ansible][Yoga] Debian Backports? In-Reply-To: References: Message-ID: Hello, Please allow me to rephrase my previous question: Is Debian 11 Backports supported as an installation target for the Yoga version of Openstack-Ansible? Thanks. -Dave -- Dave Hall Binghamton University kdhall at binghamton.edu On Fri, Jul 15, 2022 at 5:10 PM Dave Hall wrote: > Hello, > > I've just worked most of the way through a fresh install of Yoga on 5 > Debian Bullseye systems. All systems are updated to include the Bullseye > backports. The documentation doesn't mention backports, but I > always install Debian this way - almost without thinking. > > The specific config is based on openstack-user-config.yaml.prod.example > with 3 infrastructure hosts, two compute hosts, and cinder/glance running on > the infrastructure hosts as in the prod example file. > > First, one surprise: For the version of GlusterFS in Bullseye Backports, > /usr/sbin/gluster has been moved to a separate package - glusterfs-cli. I > installed this manually in the repo containers to get through > setup-infrastructure.yml. 
The nova-api container seems > to be running on the infra hosts, and nova-compute.service is up on both > compute hosts, although there are warnings about "Timed out waiting for > nova-conductor" The nova-api containers are able to ing the compute hosts > on br-mgmt. > > I do have to wonder if this has anything to do with being upgraded to > backpors. > > Any hints on how to analyse this (or how to fix it)? > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bxzhu_5355 at 163.com Mon Jul 18 01:47:08 2022 From: bxzhu_5355 at 163.com (Boxiang Zhu) Date: Mon, 18 Jul 2022 09:47:08 +0800 (CST) Subject: =?GBK?Q?=BB=D8=B8=B4:[skyline]A_problem_with_skyline_packaging_RPM?= In-Reply-To: References: Message-ID: <62a52bae.cbc.1820efcdc1f.Coremail.bxzhu_5355@163.com> Hi renliang, You can find the packages from https://tarballs.opendev.org/openstack. We have removed all the libs and only build one package for skyline-apiserver project[0][1]. Also, you can find the skyline-console package[2][3]. Now, the skyline has no official packages from pypi. After skyline release from openstack, it will be pushed by openstackci[4]. [0] https://tarballs.opendev.org/openstack/skyline-apiserver/skyline-apiserver-master.whl [1] https://tarballs.opendev.org/openstack/skyline-apiserver/skyline-apiserver-master.tar.gz [2] https://tarballs.opendev.org/openstack/skyline-console/skyline_console-master.whl [3] https://tarballs.opendev.org/openstack/skyline-console/skyline-console-master.tar.gz [4] https://pypi.org/user/openstackci/ Thanks, Best Regards, Boxiang Zhu At 2022-07-14 14:48:14, "??" wrote: Hello, a series of packages for skyline are provided at https://pypi.org/user/99cloud/, but only the package of skyline-apiserver. skyline-apiserver requires dependencies of other packages when building. For example, skyline-config,skyline-log,skyline-policy-manager. 
These packages do not provide the corresponding source packages, we want to package these packages into rpm. Please provide source packages or other better ways to build. -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Mon Jul 18 06:36:04 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Mon, 18 Jul 2022 08:36:04 +0200 Subject: [OpenStack-Ansible][Yoga] Debian Backports? In-Reply-To: References: Message-ID: I think we have no idea as this was never tested neither manually by us nor in the CI, at least I never installed Backports for sure while testing. All repos that are needed for deployment are being installed with appropriate roles, and we mostly assume that target OS is vanilla image, except steps mentioned in a deployment guide (like network configuration). ??, 18 ???. 2022 ?., 03:22 Dave Hall : > Hello, > > Please allow me to rephrase my previous questions: Is Debian 11 Backports > supported as an installation target for the Yoga version of > Openstack-Ansible? > > Thanks. > > -Dave > > -- > Dave Hall > Binghamton University > kdhall at binghamton.edu > > On Fri, Jul 15, 2022 at 5:10 PM Dave Hall wrote: > >> Hello, >> >> I've just worked most of the way through a fresh install of Yoga on 5 >> Debian Bullseye systems. All systems are updated to include the Bullseye >> backports. The documentation doesn't mention backports, but I >> always install Debian this way - almost without thinking. >> >> The specific config is based on openstack-user-config.yaml.prod.example >> with 3 infrastructure host, two compute hosts, and cinder/glance running on >> the infrastructure hosts as in the prod example file. >> >> First, one surprise: For version of GlusterFS in Bullseye Backports, >> /usr/sbin/gluster as been moved to a separate package - glusterfs-cli. I >> installed this manually in the repo containers to get through >> setup-infrastructure.yml. 
>> >> In setup-openstack.yml, I'm stopped at "TASK [os_nova : Run nova-status >> upgrade check to validate a healthy configuration]". "nova-status upgrade >> check" is failing. "nova-manage cell_v2 list_hosts" is not showing any. >> >> Oh, and there are warnings about eventlet monkey patching and urllib3. >> >> So I'm not quite sure how to dig into this. The nova-api container seems >> to be running on the infra hosts, and nova-compute.service is up on both >> compute hosts, although there are warnings about "Timed out waiting for >> nova-conductor" The nova-api containers are able to ing the compute hosts >> on br-mgmt. >> >> I do have to wonder if this has anything to do with being upgraded to >> backpors. >> >> Any hints on how to analyse this (or how to fix it)? >> >> Thanks. >> >> -Dave >> >> -- >> Dave Hall >> Binghamton University >> kdhall at binghamton.edu >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Mon Jul 18 09:30:54 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Mon, 18 Jul 2022 11:30:54 +0200 Subject: [nova][placement] Spec appoval freeze TODAY Message-ID: Hi Compute-ers, As we agreed on the last PTG, July 14th was set as a deadline for spec approvals for the Zed cycle. We granted a few more days until today but I eventually tell that TODAY we stop accepting specs for this Zed cycle. That being said, we still have two open specs : - https://review.opendev.org/c/openstack/nova-specs/+/842015 is at the moment a discussion for knowing how to fix a Ironic issue. This is not really a spec so it won't need to be stopping to discuss this change. Please continue to provide your comments if you want, as we need to find a consensus for the fix. - https://review.opendev.org/c/openstack/nova-specs/+/849488 will have an exception until tomorrow's meeting. We'll discuss this spec in the meeting and if we have a consensus, we could accept it. 
If we don't, then this spec won't be accepted for Zed. Also, given we're now on Zed-3, I'll abandon all the open Yoga specs : https://review.opendev.org/q/project:openstack/nova-specs+is:open+file:%255Especs/yoga/.* Thanks, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From cjeanner at redhat.com Mon Jul 18 12:19:34 2022 From: cjeanner at redhat.com (=?UTF-8?Q?C=c3=a9dric_Jeanneret?=) Date: Mon, 18 Jul 2022 14:19:34 +0200 Subject: [dev][requirements][tripleo] Return of the revenge of lockfile strikes back part II In-Reply-To: <20220716015210.7pzcrwfyzcho6opc@yuggoth.org> References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org> <20220716015210.7pzcrwfyzcho6opc@yuggoth.org> Message-ID: On 7/16/22 03:52, Jeremy Stanley wrote: > On 2022-07-09 13:26:36 +0000 (+0000), Jeremy Stanley wrote: > [...] >> Apparently, ansible-runner currently depends[3] on python-daemon, >> which still has a dependency on lockfile[4]. Our uses of >> ansible-runner seem to be pretty much limited to TripleO >> repositories (hence tagging them in the subject), so it's possible >> they could find an alternative to it and solve this dilemma. >> Optionally, we could try to help the ansible-runner or python-daemon >> maintainers with new implementations of the problem dependencies as >> a way out. > [...] > > In the meantime, how does everyone feel about us going ahead and > removing the "openstackci" account from the maintainers list for > lockfile on PyPI? We haven't depended on it directly since 2015, and > it didn't come back into our indirect dependency set until 2018 > (presumably that's when TripleO started using ansible-runner?). The > odds that we'll need to fix anything in it in the future are pretty > small at this point, and if we do we're better off putting that > effort into helping the ansible-runner or python-daemon maintainers > move off of it instead. 
That's a question for TripleO PTL - he's on holidays until next week (he was out also last week). It would be good to get his thoughts before doing anything, if it's possible. On my side, I don't really see any strong objection, but I may as well miss something important. Cheers, C. -- Cédric Jeanneret (He/Him/His) Sr. Software Engineer - OpenStack Platform Deployment Framework TC Red Hat EMEA https://www.redhat.com/ -------------- next part -------------- A non-text attachment was scrubbed... Name: OpenPGP_signature Type: application/pgp-signature Size: 840 bytes Desc: OpenPGP digital signature URL: From the.wade.albright at gmail.com Sun Jul 17 01:01:29 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Sat, 16 Jul 2022 18:01:29 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Hi Julia, Thank you so much for the reply! Hopefully this is the issue. I'll try out the patches next week and report back. I'll also email you on Monday about the versions, that would be very helpful to know. Thanks again, really appreciate it. Wade On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger wrote: > Greetings! > > I believe you need two patches, one in ironic and one in sushy. > > Sushy: > https://review.opendev.org/c/openstack/sushy/+/832860 > > Ironic: > https://review.opendev.org/c/openstack/ironic/+/820588 > > I think it is a variation, and the comment about working after you restart > the conductor is the big signal to me. I'm on a phone on a bad data > connection, if you email me on Monday I can see what versions the fixes > would be in. > > For the record, it is a session cache issue, the bug was that the service > didn't quite know what to do when auth fails. > > -Julia > > > On Fri, Jul 15, 2022 at 2:55 PM Wade Albright > wrote: > >> Hi, >> >> I'm hitting a problem when trying to update the redfish_password for an >> existing node. 
I'm curious to know if anyone else has encountered this >> problem. I'm not sure if I'm just doing something wrong or if there is a >> bug. Or if the problem is unique to my setup. >> >> I have a node already added into ironic with all the driver details set, >> and things are working fine. I am able to run deployments. >> >> Now I need to change the redfish password on the host. So I update the >> password for redfish access on the host, then use an 'openstack baremetal >> node set --driver-info redfish_password=' command to set >> the new redfish_password. >> >> Once this has been done, deployment no longer works. I see redfish >> authentication errors in the logs and the operation fails. I waited a bit >> to see if there might just be a delay in updating the password, but after >> awhile it still didn't work. >> >> I restarted the conductor, and after that things work fine again. So it >> seems like the password is cached or something. Is there a way to force the >> password to update? I even tried removing the redfish credentials and >> re-adding them, but that didn't work either. Only a conductor restart seems >> to make the new password work. >> >> We are running Xena, using rpm installation on Oracle Linux 8.5. >> >> Thanks in advance for any help with this issue. >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Mon Jul 18 14:49:47 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Mon, 18 Jul 2022 20:19:47 +0530 Subject: Regarding Open Stack Internal IP not responding for compute nodes Message-ID: hi Team, I had an issue over the weekend where OpenStack, installed on the Xena version, suddenly stopped responding on its internal IP. I checked the switch; it was never rebooted and showed no errors. When I rebooted the compute node, it fixed the issue. 
Rabbit-mq was showing errors regards to Network Intermittent connectivity , But that is ok since the Internal IP was not reachable But as a Whole i do not see any specific errors which cause OpenStack Internal Networking to stop Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Mon Jul 18 15:57:41 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Mon, 18 Jul 2022 16:57:41 +0100 Subject: Upgrading to a more recent version of jsonschema In-Reply-To: References: <74f5fdba-8225-5f6a-a6f6-68853875d4f8@debian.org> <3a6170d4-e1fb-2988-e980-e8c152cb852b@debian.org> <181649f0df6.11d045b0f280764.1056849246214160471@ghanshyammann.com> <7fda4e895d6bb1d325c8b72522650c809bcc87f9.camel@redhat.com> <4d3f63840239c2533a060ed9596b57820cf3dfed.camel@redhat.com> <2707b10cbccab3e5a5a7930c1369727c896fde3a.camel@redhat.com> <4265a04f-689d-b738-fbdc-3dfbe3036f95@debian.org> <2c02eb0f261fe0edd2432061ebb01e945a6ebc46.camel@redhat.com> <6f552ddb-4b28-153a-5b11-d2491433399a@debian.org> <358d1aa4298c4fa7f1077be35954a187d5134109.camel@redhat.com> Message-ID: On Sat, 2022-07-16 at 11:12 +0200, Thomas Goirand wrote: > Hi there! > > On 7/14/22 18:09, Dmitry Tantsur wrote: > > Ironic was not too bad either: > > https://review.opendev.org/c/openstack/ironic/+/849882 > > > > Similar for Nova: https://review.opendev.org/c/openstack/nova/+/849881 > > > > Thanks, I was able to backport these fixes in the Yoga version of Ironic > and Nova, and uploaded them to Debian Unstable (currently I'm finishing > to build Ironic that had its unit tests passing already). > > > > - sahara > > > > > > I'll try to see what I can do to fix these, maybe some of the > > failures > > > are unrelated (I haven't investigated yet). > > Only Sahara is missing a fix now. 
I tracked it down to this: > > https://github.com/openstack/sahara/blob/master/sahara/utils/api_validator.py#L177 > > the error being: > > TypeError: __init__() got an unexpected keyword argument 'types' > > Looks like "types" has gone away from the parent class. Does anyone know > what's going on, and what the replacement is? I first thought it looks > like "types" should really be "type_checker", so I tried that, but it > didn't work... > > Once Sahara is fixed, I'm done for all OpenStack packages, and only 2 > other Debian packages will need a fix (but I filed Debian bugs against > these packages, so we're good...). Looks like Sahara didn't have deprecation warnings turned on in CI and missed . [1]. Google leads me to [2]. I guess you want [3]. That's all I've got so far. Stephen [1] https://github.com/python-jsonschema/jsonschema/blob/v3.2.0/jsonschema/validators.py#L269-L282 [2] https://python-jsonschema.readthedocs.io/en/latest/creating/ [3] https://python-jsonschema.readthedocs.io/en/latest/creating/#jsonschema.validators.extend > > Cheers, > > Thomas Goirand (zigo) > From gmann at ghanshyammann.com Mon Jul 18 17:16:37 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jul 2022 12:16:37 -0500 Subject: [dev][horizon] [keystone]directions to the right project team In-Reply-To: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: <182124fd4e5.dcaaed29287314.8537919544851181316@ghanshyammann.com> ---- On Sun, 17 Jul 2022 15:01:19 -0500 Sergey Drozdov wrote --- > Too whom it may concern, > > I am relatively new to contributing to OpenStack and therefore am not sure who to direct this question too. I have the following issue. The firm I am currently working at is running OpenStack at sale with circa 6000 different projects. Whenever we try to access a projects tab via horizon we experience a timeout (accessing through the API works fine). 
To that end we were hoping to implement something akin to pagination or filtering. I was hoping to work on this and contribute the outcome to OpenStack. Is this something worthwhile and which group should I talk to; keystone, horizon or both ? I apologise in advance if I have missed something. We do not have such large scale testing at upstream so timeout might be from keystone APIs or horizon panel. I think adding pagination support in Keystone API make sense. Adding horizon and keystone tag in subject, you can talk to both group in this ML or in IRC (OFTC) channel #openstack-keystone #openstack-horizon -gmann > > > Best Regards, > Sergey > From lokendrarathour at gmail.com Mon Jul 18 17:17:52 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Mon, 18 Jul 2022 22:47:52 +0530 Subject: Regarding Open Stack Internal IP not responding for compute nodes In-Reply-To: References: Message-ID: Hi, can you please share the required logs? - it is very hard to tell about the issue. Also, can you in any case reproduce the errors? On Mon, Jul 18, 2022 at 8:41 PM Adivya Singh wrote: > hi Team, > > I had an issue over the weekend where OpenStack Installed on the XENA > version suddenly stopped responding on its internal IP. > > I checked the switch, Switch was never rebooted and did not see any > errors, When I rebooted the Compute node , it fixed the issue. > > Rabbit-mq was showing errors regards to Network Intermittent connectivity > , But that is ok since the Internal IP was not reachable > > But as a Whole i do not see any specific errors which cause > OpenStack Internal Networking to stop > > Regards > Adivya Singh > -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Mon Jul 18 17:28:29 2022 From: helena at openstack.org (Helena Spease) Date: Mon, 18 Jul 2022 12:28:29 -0500 Subject: Vote for the next OpenStack release name! Message-ID: Hello everyone! 
We are so excited to announce that voting for the next OpenStack release name has opened! A few popular choices, like Aardvark, unfortunately, did not pass trademark checks. Here are your finalists: Anchovy - boring can be delicious! also a town in Jamaica Anteater - an animal where form clearly follows function Antelope - swift and gracious, also a type of steam locomotive Submit your vote by July 20th at 11:59pm PT (July 21st 6:59 UTC) and help us pick the next OpenStack release name! https://civs1.civs.us/cgi-bin/vote.pl?id=E_2b6c69494a6d3222&akey=d3350c7bda8bad74 Thank you, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From iurygregory at gmail.com Mon Jul 18 17:36:06 2022 From: iurygregory at gmail.com (Iury Gregory) Date: Mon, 18 Jul 2022 14:36:06 -0300 Subject: [ironic] Revise Ironic Vision Meeting Message-ID: Hello ironicers, During the upstream meeting today we have scheduled the meeting to revise the vision of our project. The meeting will happen tomorrow at 15:00 UTC, details about the meeting are in the etherpad [1]. See you tomorrow! [1] https://etherpad.opendev.org/p/revise-ironic-vision -- *Att[]'s* *Iury Gregory Melo Ferreira * *MSc in Computer Science at UFCG* *Ironic PTL * *Senior Software Engineer at Red Hat Brazil* *Social*: https://www.linkedin.com/in/iurygregory *E-mail: iurygregory at gmail.com * -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Mon Jul 18 18:43:42 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Mon, 18 Jul 2022 13:43:42 -0500 Subject: [all][tc] Technical Committee next weekly meeting on 21 July 2022 at 1500 UTC Message-ID: <182129f8d40.d3efcbd6290137.2594372562715618344@ghanshyammann.com> Hello Everyone, The technical Committee's next weekly meeting is scheduled for 21 July 2022, at 1500 UTC. 
If you would like to add topics for discussion, please add them to the below wiki page by Wednesday, 20 July at 2100 UTC. https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting -gmann From jimmy at openinfra.dev Mon Jul 18 18:42:09 2022 From: jimmy at openinfra.dev (Jimmy McArthur) Date: Mon, 18 Jul 2022 13:42:09 -0500 Subject: [interop] Assistance with refstack client Message-ID: <8557fe85-feec-e22b-961e-3cda3c69d50b@openinfra.dev> Hi - I've got someone encountering errors attempting to install the refstack client. Error below. Is there anyone on the interop team that I can connect these folks to for assistance? Cheers, Jimmy warning: openssl 0x00000000 is too old for _hashlib INFO: Can't locate Tcl/Tk libs and/or headers building 'nis' extension gcc -Wno-unused-result -Wsign-compare -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -fPIC -I./Include -I/root/refstack-client/.localpython/include -I. -I/usr/include/x86_64-linux-gnu -I/usr/local/include -I/root/refstack-client/.python_src/Python-3.6.0/Include -I/root/refstack-client/.python_src/Python-3.6.0 -c /root/refstack-client/.python_src/Python-3.6.0/Modules/nismodule.c -o build/temp.linux-x86_64-3.6/root/refstack-client/.python_src/Python-3.6.0/Modules/nismodule.o /root/refstack-client/.python_src/Python-3.6.0/Modules/nismodule.c:17:10: fatal error: rpc/rpc.h: No such file or directory 17 | #include | ^~~~~~~~~~~ compilation terminated. AND + rm 16.7.9.tar.gz + cd virtualenv-16.7.9 + '[' -d /root/refstack-client/.venv ']' ++ command -v python3 + '[' -n /usr/bin/python3 ']' + python=python3 + python3 virtualenv.py /root/refstack-client/.venv --python=/root/refstack-client/.localpython/bin/python3.6 /root/refstack-client/virtualenv-16.7.9/virtualenv.py:24: DeprecationWarning: The distutils package is deprecated and slated for removal in Python 3.12. 
Use setuptools or check PEP 632 for potential alternatives import distutils.spawn /root/refstack-client/virtualenv-16.7.9/virtualenv.py:25: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead import distutils.sysconfig Running virtualenv with interpreter /root/refstack-client/.localpython/bin/python3.6 Already using interpreter /root/refstack-client/.localpython/bin/python3.6 Using base prefix '/root/refstack-client/.localpython' New python executable in /root/refstack-client/.venv/bin/python3.6 Also creating executable in /root/refstack-client/.venv/bin/python Command /root/refstack-client/.venv/bin/python3.6 -m pip config list had error code -11 Installing setuptools, pip, wheel... done. Traceback (most recent call last): File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 2634, in main() File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 870, in main symlink=options.symlink, File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1179, in create_environment install_wheel(to_install, py_executable, search_dirs, download=download) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1023, in install_wheel _install_wheel_with_search_dir(download, project_names, py_executable, search_dirs) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1116, in _install_wheel_with_search_dir call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=script) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 963, in call_subprocess raise OSError("Command {} failed with error code {}".format(cmd_desc, proc.returncode)) OSError: Command /root/refstack-client/.venv/bin/python3.6 - setuptools pip wheel failed with error code -11 + python3 virtualenv.py /root/refstack-client/.tempest/.venv --python=/root/refstack-client/.localpython/bin/python3.6 /root/refstack-client/virtualenv-16.7.9/virtualenv.py:24: DeprecationWarning: The distutils package is deprecated 
and slated for removal in Python 3.12. Use setuptools or check PEP 632 for potential alternatives import distutils.spawn /root/refstack-client/virtualenv-16.7.9/virtualenv.py:25: DeprecationWarning: The distutils.sysconfig module is deprecated, use sysconfig instead import distutils.sysconfig Running virtualenv with interpreter /root/refstack-client/.localpython/bin/python3.6 Already using interpreter /root/refstack-client/.localpython/bin/python3.6 Using base prefix '/root/refstack-client/.localpython' New python executable in /root/refstack-client/.tempest/.venv/bin/python3.6 Also creating executable in /root/refstack-client/.tempest/.venv/bin/python Command /root/refstack-clien.../.venv/bin/python3.6 -m pip config list had error code -11 Installing setuptools, pip, wheel... done. Traceback (most recent call last): File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 2634, in main() File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 870, in main symlink=options.symlink, File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1179, in create_environment install_wheel(to_install, py_executable, search_dirs, download=download) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1023, in install_wheel _install_wheel_with_search_dir(download, project_names, py_executable, search_dirs) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 1116, in _install_wheel_with_search_dir call_subprocess(cmd, show_stdout=False, extra_env=env, stdin=script) File "/root/refstack-client/virtualenv-16.7.9/virtualenv.py", line 963, in call_subprocess raise OSError("Command {} failed with error code {}".format(cmd_desc, proc.returncode)) OSError: Command /root/refstack-clien.../.venv/bin/python3.6 - setuptools pip wheel failed with error code -11 + cd .. + rm -rf virtualenv-16.7.9 + /root/refstack-client/.venv/bin/python -m pip install -c https://releases.openstack.org/constraints/upper/master -e . 
/root/refstack-client/.venv/bin/python: No module named pip + cd /root/refstack-client/.tempestconf + /root/refstack-client/.venv/bin/python -m pip install -c https://releases.openstack.org/constraints/upper/master -e . /root/refstack-client/.venv/bin/python: No module named pip + cd .. + /root/refstack-client/.tempest/.venv/bin/python -m pip install -c https://releases.openstack.org/constraints/upper/master /root/refstack-client/.tempest /root/refstack-client/.tempest/.venv/bin/python: No module named pip -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Jul 18 19:08:27 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 18 Jul 2022 19:08:27 +0000 Subject: [interop] Assistance with refstack client In-Reply-To: <8557fe85-feec-e22b-961e-3cda3c69d50b@openinfra.dev> References: <8557fe85-feec-e22b-961e-3cda3c69d50b@openinfra.dev> Message-ID: <20220718190826.j6bwcc6wsv5w7q4f@yuggoth.org> On 2022-07-18 13:42:09 -0500 (-0500), Jimmy McArthur wrote: > I've got someone encountering errors attempting to install the > refstack client. Error below. Is there anyone on the interop team > that I can onnect these folks to for assistance? [...] I'm not really involved with RefStack development, but skimming the errors it looks like some dependency is getting compiled from source because there's no pre-built binaries for the target platform. Knowing what steps/commands led to that error condition, as well as the platform (Linux distribution name and version, processor architecture, that sort of info) would help to narrow down possible causes. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From juliaashleykreger at gmail.com Mon Jul 18 22:14:20 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 18 Jul 2022 15:14:20 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: If you could supply some conductor logs, that would be helpful. It should be re-authenticating, but obviously we have a larger bug there we need to find the root issue behind. On Mon, Jul 18, 2022 at 3:06 PM Wade Albright wrote: > > I was able to use the patches to update the code, but unfortunately the problem is still there for me. > > I also tried an RPM upgrade to the versions Julia mentioned had the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released in January 2022. But it did not fix the problem. > > I am able to consistently reproduce the error. > - step 1: change BMC password directly on the node itself > - step 2: update BMC password (redfish_password) in ironic with 'openstack baremetal node set --driver-info redfish_password='newpass' > > After step 1 there are errors in the logs entries like "Session authentication appears to have been lost at some point in time" and eventually it puts the node into maintenance mode and marks the power state as "none." > After step 2 and taking the host back out of maintenance mode, it goes through a similar set of log entries puts the node into MM again. > > After the above steps, a conductor restart fixes the problem and operations work normally again. Given this it seems like there is still some kind of caching issue. > > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright wrote: >> >> Hi Julia, >> >> Thank you so much for the reply! Hopefully this is the issue. I'll try out the patches next week and report back. I'll also email you on Monday about the versions, that would be very helpful to know. >> >> Thanks again, really appreciate it. 
>> >> Wade >> >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger wrote: >> >>> >> >>> Greetings! >> >>> >> >>> I believe you need two patches, one in ironic and one in sushy. >> >>> >> >>> Sushy: >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 >> >>> >> >>> Ironic: >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 >> >>> >> >>> I think it is variation, and the comment about working after you restart the conductor is the big signal to me. I'm on a phone on a bad data connection, if you email me on Monday I can see what versions the fixes would be in. >> >>> >> >>> For the record, it is a session cache issue, the bug was that the service didn't quite know what to do when auth fails. >> >>> >> >>> -Julia >> >>> >> >>> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright wrote: >> >>>> >> >>>> Hi, >> >>>> >> >>>> I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. Or if the problem is unique to my setup. >> >>>> >> >>>> I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. >> >>>> >> >>>> Now I need to change the redfish password on the host. So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password. >> >>>> >> >>>> Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after awhile it still didn't work. >> >>>> >> >>>> I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either.
Only a conductor restart seems to make the new password work. >>>> >>>> We are running Xena, using rpm installation on Oracle Linux 8.5. >>>> >>>> Thanks in advance for any help with this issue. From juliaashleykreger at gmail.com Mon Jul 18 22:27:43 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 18 Jul 2022 15:27:43 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Debug would be best. I think I have an idea what is going on, and this is a similar variation. If you want, you can email them directly to me. Specifically only need entries reported by the sushy library and ironic.drivers.modules.redfish.utils. On Mon, Jul 18, 2022 at 3:20 PM Wade Albright wrote: > > I'm happy to supply some logs, what verbosity level should i use? And should I just embed the logs in email to the list or upload somewhere? > > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger wrote: >> >> If you could supply some conductor logs, that would be helpful. It >> should be re-authenticating, but obviously we have a larger bug there >> we need to find the root issue behind. >> >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright >> wrote: >> > >> > I was able to use the patches to update the code, but unfortunately the problem is still there for me. >> > >> > I also tried an RPM upgrade to the versions Julia mentioned had the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released in January 2022. But it did not fix the problem. >> > >> > I am able to consistently reproduce the error. 
>> > - step 1: change BMC password directly on the node itself >> > - step 2: update BMC password (redfish_password) in ironic with 'openstack baremetal node set --driver-info redfish_password='newpass' >> > >> > After step 1 there are errors in the logs entries like "Session authentication appears to have been lost at some point in time" and eventually it puts the node into maintenance mode and marks the power state as "none." >> > After step 2 and taking the host back out of maintenance mode, it goes through a similar set of log entries puts the node into MM again. >> > >> > After the above steps, a conductor restart fixes the problem and operations work normally again. Given this it seems like there is still some kind of caching issue. >> > >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright wrote: >> >> >> >> Hi Julia, >> >> >> >> Thank you so much for the reply! Hopefully this is the issue. I'll try out the patches next week and report back. I'll also email you on Monday about the versions, that would be very helpful to know. >> >> >> >> Thanks again, really appreciate it. >> >> >> >> Wade >> >> >> >> >> >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger wrote: >> >>> >> >>> Greetings! >> >>> >> >>> I believe you need two patches, one in ironic and one in sushy. >> >>> >> >>> Sushy: >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 >> >>> >> >>> Ironic: >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 >> >>> >> >>> I think it is variation, and the comment about working after you restart the conductor is the big signal to me. I'm on a phone on a bad data connection, if you email me on Monday I can see what versions the fixes would be in. >> >>> >> >>> For the record, it is a session cache issue, the bug was that the service didn't quite know what to do when auth fails.
>> >>> >> >>> -Julia >> >>> >> >>> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright wrote: >> >>>> >> >>>> Hi, >> >>>> >> >>>> I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. Or if the problem is unique to my setup. >> >>>> >> >>>> I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. >> >>>> >> >>>> Now I need to change the redfish password on the host. So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password. >> >>>> >> >>>> Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after awhile it still didn't work. >> >>>> >> >>>> I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either. Only a conductor restart seems to make the new password work. >> >>>> >> >>>> We are running Xena, using rpm installation on Oracle Linux 8.5. >> >>>> >> >>>> Thanks in advance for any help with this issue. From juliaashleykreger at gmail.com Mon Jul 18 23:15:20 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Mon, 18 Jul 2022 16:15:20 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Excellent, hopefully I'll be able to figure out why Sushy is not doing the needful... Or if it is and Ironic is not picking up on it. 
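To illustrate the failure mode I suspect -- a toy sketch only, not the actual ironic or sushy code, and all names below are made up: if a connection cache is keyed only on the node and never re-checks the stored credentials against the current driver_info, it keeps handing back a session that was authenticated with the old password. Invalidating on a credential mismatch is the missing piece:

```python
# Illustrative sketch (hypothetical names) of a per-node session cache
# that goes stale when driver_info credentials change.

class SessionCache:
    def __init__(self):
        self._sessions = {}  # node_id -> (password, session)

    def get_session(self, node_id, password):
        cached = self._sessions.get(node_id)
        if cached is not None:
            cached_password, session = cached
            if cached_password == password:
                return session
            # Credentials changed: drop the stale entry instead of
            # reusing a session built with the old password.
            del self._sessions[node_id]
        # (Re)authenticate with the current password.
        session = "session-authenticated-with-" + password
        self._sessions[node_id] = (password, session)
        return session
```

A cache that skips the credential comparison only forgets the old password when the whole process restarts, which would line up with the conductor-restart behavior described above.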
Anyway, I've posted https://review.opendev.org/c/openstack/ironic/+/850259 which might handle this issue. Obviously a work in progress, but it represents what I think is happening inside of ironic itself leading into sushy when cache access occurs. On Mon, Jul 18, 2022 at 4:04 PM Wade Albright wrote: > > Sounds good, I will do that tomorrow. Thanks Julia. > > On Mon, Jul 18, 2022 at 3:27 PM Julia Kreger wrote: >> >> Debug would be best. I think I have an idea what is going on, and this >> is a similar variation. If you want, you can email them directly to >> me. Specifically only need entries reported by the sushy library and >> ironic.drivers.modules.redfish.utils. >> >> On Mon, Jul 18, 2022 at 3:20 PM Wade Albright >> wrote: >> > >> > I'm happy to supply some logs, what verbosity level should i use? And should I just embed the logs in email to the list or upload somewhere? >> > >> > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger wrote: >> >> >> >> If you could supply some conductor logs, that would be helpful. It >> >> should be re-authenticating, but obviously we have a larger bug there >> >> we need to find the root issue behind. >> >> >> >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright >> >> wrote: >> >> > >> >> > I was able to use the patches to update the code, but unfortunately the problem is still there for me. >> >> > >> >> > I also tried an RPM upgrade to the versions Julia mentioned had the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released in January 2022. But it did not fix the problem. >> >> > >> >> > I am able to consistently reproduce the error. 
>> >> > - step 1: change BMC password directly on the node itself >> >> > - step 2: update BMC password (redfish_password) in ironic with 'openstack baremetal node set --driver-info redfish_password='newpass' >> >> > >> >> > After step 1 there are errors in the logs entries like "Session authentication appears to have been lost at some point in time" and eventually it puts the node into maintenance mode and marks the power state as "none." >> >> > After step 2 and taking the host back out of maintenance mode, it goes through a similar set of log entries puts the node into MM again. >> >> > >> >> > After the above steps, a conductor restart fixes the problem and operations work normally again. Given this it seems like there is still some kind of caching issue. >> >> > >> >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright wrote: >> >> >> >> >> >> Hi Julia, >> >> >> >> >> >> Thank you so much for the reply! Hopefully this is the issue. I'll try out the patches next week and report back. I'll also email you on Monday about the versions, that would be very helpful to know. >> >> >> >> >> >> Thanks again, really appreciate it. >> >> >> >> >> >> Wade >> >> >> >> >> >> >> >> >> >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger wrote: >> >> >>> >> >> >>> Greetings! >> >> >>> >> >> >>> I believe you need two patches, one in ironic and one in sushy. >> >> >>> >> >> >>> Sushy: >> >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 >> >> >>> >> >> >>> Ironic: >> >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 >> >> >>> >> >> >>> I think it is variation, and the comment about working after you restart the conductor is the big signal to me. I'm on a phone on a bad data connection, if you email me on Monday I can see what versions the fixes would be in. >> >> >>> >> >> >>> For the record, it is a session cache issue, the bug was that the service didn't quite know what to do when auth fails.
>> >> >>> >> >> >>> -Julia >> >> >>> >> >> >>> >> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright wrote: >> >> >>>> >> >> >>>> Hi, >> >> >>>> >> >> >>>> I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. Or if the problem is unique to my setup. >> >> >>>> >> >> >>>> I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. >> >> >>>> >> >> >>>> Now I need to change the redfish password on the host. So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password. >> >> >>>> >> >> >>>> Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after awhile it still didn't work. >> >> >>>> >> >> >>>> I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either. Only a conductor restart seems to make the new password work. >> >> >>>> >> >> >>>> We are running Xena, using rpm installation on Oracle Linux 8.5. >> >> >>>> >> >> >>>> Thanks in advance for any help with this issue. From skaplons at redhat.com Tue Jul 19 09:07:13 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 19 Jul 2022 11:07:13 +0200 Subject: [neutron] CI meeting - Tuesday 19.07.2022 Message-ID: <4395138.44csPzL39Z@p1> Hi, This is just a reminder about today's Neutron CI meeting which will be on meetpad [1] at 1500 UTC. Agenda is in the etherpad at [2]. 
[1] https://meetpad.opendev.org/neutron-ci-meetings[1] [2] https://etherpad.opendev.org/p/neutron-ci-meetings[2] -- Slawek Kaplonski Principal Software Engineer Red Hat -------- [1] https://meetpad.opendev.org/neutron-ci-meetings [2] https://etherpad.opendev.org/p/neutron-ci-meetings -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From Danny.Webb at thehutgroup.com Tue Jul 19 10:37:18 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Tue, 19 Jul 2022 10:37:18 +0000 Subject: [dev] directions to the right project team In-Reply-To: References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. https://docs.openstack.org/api-ref/identity/v3/?expanded=get-available-project-scopes-detail,list-projects-detail,list-domains-detail#list-projects https://docs.openstack.org/api-ref/identity/v3/?expanded=get-available-project-scopes-detail,list-projects-detail,list-domains-detail#list-domains The only scope to limit the returns from keystone is with "list_limit" in the keystone.conf, but that just means that any given api call will return no more than that set value. We're looking at skyline, but need SSO integrations added before we can use it (which is in our backlog for Sergey) ________________________________ From: Dmitriy Rabotyagov Sent: 17 July 2022 21:12 Cc: openstack-discuss Subject: Re: [dev] directions to the right project team CAUTION: This email originates from outside THG ________________________________ Just out of interest, have you tried using Skyline as a dashboard instead of the horizon? I believe keystone should already support pagination. 
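(For context, the marker/limit pagination scheme used elsewhere in OpenStack is simple in principle; a rough illustrative sketch of the server-side idea -- not actual keystone code, and the function name is made up:)

```python
# Illustrative marker/limit pagination, the pattern most OpenStack
# APIs expose for large collections -- not actual keystone code.

def list_page(items, limit=None, marker=None, key=lambda x: x):
    """Return one page of `items` (assumed sorted by `key`).

    `marker` is the key of the last item on the previous page; the
    new page starts right after it.  `limit` caps the page size.
    """
    start = 0
    if marker is not None:
        keys = [key(i) for i in items]
        # Raises ValueError for an unknown marker, mirroring the
        # 400 a real API would return for a bad marker.
        start = keys.index(marker) + 1
    end = len(items) if limit is None else start + limit
    return items[start:end]
```

With something like this on the server side, a dashboard could fetch 6000 projects a page at a time instead of timing out on one giant response.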
The better question is what horizon uses as client - openstacksdk or python-keystoneclient. As I believe a lot of issues are solved with openstacksdk as of today, and switching to it also would match community goal. So imo that might be good thing to contribute to:) Sun, 17 Jul 2022, 22:07 Sergey Drozdov >: To whom it may concern, I am relatively new to contributing to OpenStack and therefore am not sure who to direct this question to. I have the following issue. The firm I am currently working at is running OpenStack at scale with circa 6000 different projects. Whenever we try to access a projects tab via horizon we experience a timeout (accessing through the API works fine). To that end we were hoping to implement something akin to pagination or filtering. I was hoping to work on this and contribute the outcome to OpenStack. Is this something worthwhile, and which group should I talk to: keystone, horizon, or both? I apologise in advance if I have missed something. Best Regards, Sergey Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted.
They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jlibosva at redhat.com Tue Jul 19 10:58:39 2022 From: jlibosva at redhat.com (Jakub Libosvar) Date: Tue, 19 Jul 2022 06:58:39 -0400 Subject: [Neutron] Bug Deputy Report July 11 -18 Message-ID: Hi, there were only 2 bugs reported last week. Critical * Some jobs broken post pyroute2 update to 0.7.1 - https://bugs.launchpad.net/neutron/+bug/1981963 - Unassigned but has good attention Low * neutron revision_number does not bump on network update - https://bugs.launchpad.net/neutron/+bug/1981817 - Unassigned From amonster369 at gmail.com Tue Jul 19 12:51:16 2022 From: amonster369 at gmail.com (A Monster) Date: Tue, 19 Jul 2022 13:51:16 +0100 Subject: Use redundancy between multiple controller nodes [kolla] Message-ID: I've deployed openstack xena using *kolla-ansible* on *centos 8 stream*, I've used multiple controller nodes, a bunch of compute nodes and a storage cluster. Controller nodes include network services. I want to know how I can successfully set up redundancy between these controller nodes, in a way that some nodes are active while others are in standby, and would only become active if the active nodes fail. Thank you.
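For reference, my current understanding from the kolla-ansible docs is that the HA pieces are driven from globals.yml, roughly as below -- the values are placeholders from my reading, not a verified working config, so please correct me if this is not the right mechanism:

```yaml
# Excerpt from /etc/kolla/globals.yml -- placeholder values.
kolla_internal_vip_address: "10.10.10.254"   # VIP carried across controllers
enable_haproxy: "yes"                        # HAProxy in front of the API services
enable_keepalived: "yes"                     # keepalived moves the VIP on failure
network_interface: "eth0"                    # interface that hosts the VIP
```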
Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Tue Jul 19 13:21:47 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 19 Jul 2022 06:21:47 -0700 Subject: [dev] directions to the right project team In-Reply-To: References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: On Tue, Jul 19, 2022 at 3:44 AM Danny Webb wrote: > > Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. > This is quite concerning, and a quick look at the code confirms it. Mostly. There are remnants of "hints" and SQL Query filtering, but the internal limit is just a truncation which seems bad as well. https://github.com/openstack/keystone/blame/d7b1d57cae738183f8d85413e942402a8a4efb31/keystone/server/flask/common.py#L675 This seems like a fundamental performance oriented feature because the overhead in data conversion can be quite a bit when you have a large number of well... any objects being returned from a database. Does anyone know if a bug is open for this issue? From lokendrarathour at gmail.com Tue Jul 19 03:27:59 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 19 Jul 2022 08:57:59 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Swogat and Vikarna, We have tried adding the DNS entry for the overcloud domain. 
we are getting the same error: 2022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:11:18.785769 | 2.16s 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | TASK | Create identity internal endpoint 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://overcloud-hsc.com:13000/v3/services, The request you have made requires authentication."} 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | TIMING | tripleo_keystone_resources : Create identity internal endpoint | undercloud | 0:11:21.074605 | 2. Certificate configs: [stack at undercloud oc-domain-name]$ cat server.csr.cnf [req] default_bits = 2048 prompt = no default_md = sha256 distinguished_name = dn [dn] C=IN ST=UTTAR PRADESH L=NOIDA O=HSC OU=HSC emailAddress=demo at demo.com CN=overcloud-hsc.com [stack at undercloud oc-domain-name]$ cat v3.ext authorityKeyIdentifier=keyid,issuer basicConstraints=CA:FALSE keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment subjectAltName = @alt_names [alt_names] DNS.1=overcloud-hsc.com [stack at undercloud oc-domain-name]$ the difference we see from others is that we are using self-signed certificates. please let me know in case we need to check something else. Somehow this issue remains stuck. On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan wrote:
> Also make sure the cn that you will use is reachable from undercloud > (maybe) script should take care of it. > > Also please follow Mr. Tathe's mail to add the cn first. > > With regards > Swogat Pradhan > > On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe > wrote: > >> Hi Lokendra, >> >> The CN field is missing. Can you add that and generate the certificate >> again. >> >> CN=ipaddress >> >> Also add dns.1=ipaddress under alt_names for precaution. >> >> Vikarna >> >> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, >> wrote: >> >>> HI Vikarna, >>> Thanks for the inputs. >>> I am note able to access any tabs in GUI. >>> [image: image.png] >>> >>> to re-state, we are failing at the time of deployment at step4 : >>> >>> >>> PLAY [External deployment step 4] >>> ********************************************** >>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>> TASK | External deployment step 4 >>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>> OK | External deployment step 4 | undercloud -> localhost | result={ >>> "changed": false, >>> "msg": "Use --start-at-task 'External deployment step 4' to resume >>> from this task" >>> } >>> [WARNING]: ('undercloud -> localhost', >>> '525400ae-089b-870a-fab6-0000000000d7') >>> missing from stats >>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>> INCLUDED | >>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>> | undercloud >>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>> TASK | Clean up legacy Cinder keystone catalog entries >>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>> 
2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:24.204562 | 2.48s >>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:26.122584 | 4.40s >>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>> 0:11:26.124296 | 4.40s >>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>> TASK | Manage Keystone resources for OpenStack services >>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>> TIMING | Manage Keystone resources for OpenStack services | undercloud | >>> 0:11:26.169842 | 0.03s >>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>> TASK | Gather variables for each operating system >>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>> TIMING | tripleo_keystone_resources : Gather variables for each operating >>> system | undercloud | 0:11:26.253383 | 0.04s >>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>> TASK | Create Keystone Admin resources >>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >>> undercloud | 0:11:26.299608 | 0.03s >>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>> INCLUDED | >>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>> undercloud >>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>> TASK | Create default domain >>> 2022-07-13 21:35:29.343399 | 
525400ae-089b-870a-fab6-0000000072ad | >>> OK | Create default domain | undercloud >>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>> TIMING | tripleo_keystone_resources : Create default domain | undercloud | >>> 0:11:28.437360 | 2.09s >>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >>> TASK | Create admin and service projects >>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >>> TIMING | tripleo_keystone_resources : Create admin and service projects | >>> undercloud | 0:11:28.483468 | 0.03s >>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>> INCLUDED | >>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>> undercloud >>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>> TASK | Async creation of Keystone project >>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>> undercloud | 0:11:29.238078 | 0.72s >>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>> CHANGED | Async creation of Keystone project | undercloud | item=service >>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>> undercloud | 0:11:29.586587 | 1.06s >>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>> undercloud | 0:11:29.587916 | 1.07s >>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>> TASK | Check Keystone project status >>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>> WAITING | Check Keystone project status | undercloud | 30 retries left >>> 
2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>> OK | Check Keystone project status | undercloud | item=admin >>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>> undercloud | 0:11:35.260666 | 5.66s >>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>> OK | Check Keystone project status | undercloud | item=service >>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>> undercloud | 0:11:35.494729 | 5.89s >>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>> undercloud | 0:11:35.498771 | 5.89s >>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>> TASK | Create admin role >>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>> OK | Create admin role | undercloud >>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>> 0:11:37.725949 | 2.20s >>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>> TASK | Create _member_ role >>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>> SKIPPED | Create _member_ role | undercloud >>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >>> 0:11:37.783369 | 0.04s >>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>> TASK | Create admin user >>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>> CHANGED | Create admin user | undercloud >>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >>> 
0:11:41.145472 | 3.34s >>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>> TASK | Assign admin role to admin project for admin user >>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>> OK | Assign admin role to admin project for admin user | undercloud >>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>> TIMING | tripleo_keystone_resources : Assign admin role to admin project >>> for admin user | undercloud | 0:11:44.288848 | 3.13s >>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>> TASK | Assign _member_ role to admin project for admin user >>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>> SKIPPED | Assign _member_ role to admin project for admin user | undercloud >>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin project >>> for admin user | undercloud | 0:11:44.346479 | 0.04s >>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>> TASK | Create identity service >>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>> OK | Create identity service | undercloud >>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>> TIMING | tripleo_keystone_resources : Create identity service | undercloud >>> | 0:11:46.022362 | 1.66s >>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>> TASK | Create identity public endpoint >>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>> OK | Create identity public endpoint | undercloud >>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>> undercloud | 0:11:48.233349 | 2.19s >>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>> TASK | Create identity internal endpoint >>> 2022-07-13 21:35:51.566826 | 
525400ae-089b-870a-fab6-0000000072b6 | >>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>> false, "extra_data": {"data": null, "details": "The request you have made >>> requires authentication.", "response": >>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>> The request you have made requires authentication."} >>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>> undercloud | 0:11:50.660654 | 2.41s >>> >>> PLAY RECAP >>> ********************************************************************* >>> localhost : ok=1 changed=0 unreachable=0 >>> failed=0 skipped=2 rescued=0 ignored=0 >>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>> failed=0 skipped=214 rescued=0 ignored=0 >>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>> failed=0 skipped=198 rescued=0 ignored=0 >>> undercloud : ok=39 changed=7 unreachable=0 >>> failed=1 skipped=6 rescued=0 ignored=0 >>> >>> Also : >>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>> [req] >>> default_bits = 2048 >>> prompt = no >>> default_md = sha256 >>> distinguished_name = dn >>> [dn] >>> C=IN >>> ST=UTTAR PRADESH >>> L=NOIDA >>> O=HSC >>> OU=HSC >>> emailAddress=demo at demo.com >>> >>> v3.ext: >>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>> authorityKeyIdentifier=keyid,issuer >>> basicConstraints=CA:FALSE >>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>> dataEncipherment >>> subjectAltName = @alt_names >>> 
[alt_names]
>>> IP.1=fd00:fd00:fd00:9900::81
>>>
>>> Using these files we create other certificates.
>>> Please check and let me know in case we need anything else.
>>>
>>>
>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe
>>> wrote:
>>>
>>>> Hi Lokendra,
>>>>
>>>> Are you able to access all the tabs in the OpenStack dashboard without
>>>> any error? If not, please retry generating the certificate. Also, share the
>>>> openssl.cnf or server.cnf.
>>>>
>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour <
>>>> lokendrarathour at gmail.com> wrote:
>>>>
>>>>> Hi Team,
>>>>> Any input on this case?
>>>>>
>>>>> Thanks,
>>>>> Lokendra
>>>>>
>>>>>
>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour <
>>>>> lokendrarathour at gmail.com> wrote:
>>>>>
>>>>>> Hi Shephard/Swogat,
>>>>>> I tried changing the setting as suggested and it looks like it has
>>>>>> failed at step 4 with error:
>>>>>>
>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING |
>>>>>> tripleo_keystone_resources : Create identity public endpoint | undercloud |
>>>>>> 0:24:47.736198 | 2.21s
>>>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf |
>>>>>> TASK | Create identity internal endpoint
>>>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf |
>>>>>> FATAL | Create identity internal endpoint | undercloud |
>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The
>>>>>> request you have made requires authentication.", "response":
>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires
>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list
>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services,
>>>>>> The request you have made requires authentication."}
>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000
>>>>>>
>>>>>>
>>>>>> Checking further the endpoint list:
>>>>>> I see that only one endpoint for keystone is getting
created. >>>>>>
>>>>>> DeprecationWarning
>>>>>>
>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>> | ID | Region | Service Name |
>>>>>> Service Type | Enabled | Interface | URL
>>>>>> |
>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone |
>>>>>> identity | True | admin | http://30.30.30.173:35357
>>>>>> |
>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone |
>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000
>>>>>> |
>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone |
>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000
>>>>>> |
>>>>>>
>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>>
>>>>>>
>>>>>> It looks like something related to SSL; we have also verified
>>>>>> that the GUI login screen shows that certificates are applied.
>>>>>> We are exploring the logs further; meanwhile, any suggestions or known
>>>>>> observations would be of great help.
>>>>>> Thanks again for the support.
>>>>>>
>>>>>> Best Regards,
>>>>>> Lokendra
>>>>>>
>>>>>>
>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan <
>>>>>> swogatpradhan22 at gmail.com> wrote:
>>>>>>
>>>>>>> I had faced a similar kind of issue. For an IP-based setup you need to
>>>>>>> specify the domain name as the IP that you are going to use; this error is
>>>>>>> showing up because the SSL certificate is IP-based but the FQDNs seem to be
>>>>>>> undercloud.com or overcloud.example.com.
>>>>>>> I think for the undercloud you can change the undercloud.conf.
>>>>>>>
>>>>>>> And will it work if we specify the clouddomain parameter to the IP
address for the overcloud? Because it seems he has not specified the
>>>>>>> clouddomain parameter, and overcloud.example.com is the default
>>>>>>> domain for the overcloud.
>>>>>>>
>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, <
>>>>>>> swogatpradhan22 at gmail.com> wrote:
>>>>>>>
>>>>>>>> What is the domain name you have specified in the undercloud.conf
>>>>>>>> file?
>>>>>>>> And what is the FQDN used for the generation of the SSL cert?
>>>>>>>>
>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, <
>>>>>>>> lokendrarathour at gmail.com> wrote:
>>>>>>>>
>>>>>>>>> Hi Team,
>>>>>>>>> We were trying to install the overcloud with SSL enabled, for which the
>>>>>>>>> UC is installed, but the OC install is failing at step 4:
>>>>>>>>>
>>>>>>>>> ERROR
>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max
>>>>>>>>> retries exceeded with url: / (Caused by
>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't
>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg":
>>>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
>>>>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac
>>>>>>>>> | FATAL | Clean up legacy Cinder keystone catalog entries | undercloud
>>>>>>>>> | item={'service_name': 'cinderv3', 'service_type': 'volume'} |
>>>>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var":
>>>>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name":
>>>>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover
>>>>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>> server_hostname)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>> (most recent call last):\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>> send\n timeout=timeout\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>> exception, another exception 
occurred:\n\nTraceback (most recent call >>>>>>>>> last):\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>> last):\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>> authenticated=authenticated)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>> authenticated=authenticated)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", 
line 1141, >>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>> in _send_request\n raise >>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>> run_globals)\n File >>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>> line 185, in \n File >>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>> line 181, in main\n File >>>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>> line 407, in __call__\n File >>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>> line 141, in run\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>> File >>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>> max_version='3.latest')\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>> **kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>> **kwargs)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>> 
self.get_access(session).service_catalog\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac >>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | 0:11:01.271914 | 2.47s >>>>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac >>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | 0:11:01.273659 | 2.47s >>>>>>>>> >>>>>>>>> PLAY RECAP >>>>>>>>> ********************************************************************* >>>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>>> 
failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>> Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>> >>>>>>>>> >>>>>>>>> in the deploy.sh: >>>>>>>>> >>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>> --networks-file /home/stack/templates/custom_network_data.yaml >>>>>>>>> \ >>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>> --baremetal-deployment >>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>> --network-config \ >>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>> \ >>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>> -e >>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>> \ >>>>>>>>> -e >>>>>>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml
>>>>>>>>> \
>>>>>>>>> -e
>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml
>>>>>>>>> \
>>>>>>>>> -e
>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \
>>>>>>>>> -e
>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \
>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml
>>>>>>>>>
>>>>>>>>> Additional lines, as highlighted in yellow, were passed with
>>>>>>>>> modifications:
>>>>>>>>> tls-endpoints-public-ip.yaml:
>>>>>>>>> Passed as is in the defaults.
>>>>>>>>> enable-tls.yaml:
>>>>>>>>>
>>>>>>>>> #
>>>>>>>>> *******************************************************************
>>>>>>>>> # This file was created automatically by the sample environment
>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>>>> # Users are recommended to make changes to a copy of the file
>>>>>>>>> instead
>>>>>>>>> # of the original, if any customizations are needed.
>>>>>>>>> #
>>>>>>>>> *******************************************************************
>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints
>>>>>>>>> # description: |
>>>>>>>>> # Use this environment to pass in certificates for SSL
>>>>>>>>> deployments.
>>>>>>>>> # For these values to take effect, one of the
>>>>>>>>> tls-endpoints-*.yaml
>>>>>>>>> # environments must also be used.
>>>>>>>>> parameter_defaults:
>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon
>>>>>>>>> # Type: boolean
>>>>>>>>> HorizonSecureCookies: True
>>>>>>>>>
>>>>>>>>> # Specifies the default CA cert to use if TLS is used for
>>>>>>>>> services in the public network.
>>>>>>>>> # Type: string
>>>>>>>>> PublicTLSCAFile:
>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem'
>>>>>>>>>
>>>>>>>>> # The content of the SSL certificate (without Key) in PEM format.
>>>>>>>>> # Type: string
>>>>>>>>> SSLRootCertificate: |
>>>>>>>>> -----BEGIN CERTIFICATE-----
>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>>> -----END CERTIFICATE-----
>>>>>>>>>
>>>>>>>>> SSLCertificate: |
>>>>>>>>> -----BEGIN CERTIFICATE-----
>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>>> -----END CERTIFICATE-----
>>>>>>>>> # The content of an SSL intermediate CA certificate in PEM
>>>>>>>>> format.
>>>>>>>>> # Type: string
>>>>>>>>> SSLIntermediateCertificate: ''
>>>>>>>>>
>>>>>>>>> # The content of the SSL Key in PEM format.
>>>>>>>>> # Type: string
>>>>>>>>> SSLKey: |
>>>>>>>>> -----BEGIN PRIVATE KEY-----
>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>>> -----END PRIVATE KEY-----
>>>>>>>>>
>>>>>>>>> # ******************************************************
>>>>>>>>> # Static parameters - these are values that must be
>>>>>>>>> # included in the environment but should not be changed.
>>>>>>>>> # ******************************************************
>>>>>>>>> # The filepath of the certificate as it will be stored in the
>>>>>>>>> controller.
>>>>>>>>> # Type: string
>>>>>>>>> DeployedSSLCertificatePath:
>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem
>>>>>>>>>
>>>>>>>>> # *********************
>>>>>>>>> # End static parameters
>>>>>>>>> # *********************
>>>>>>>>>
>>>>>>>>> inject-trust-anchor.yaml
>>>>>>>>>
>>>>>>>>> #
>>>>>>>>> *******************************************************************
>>>>>>>>> # This file was created automatically by the sample environment
>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>>>> # Users are recommended to make changes to a copy of the file
>>>>>>>>> instead
>>>>>>>>> # of the original, if any customizations are needed.
>>>>>>>>> #
>>>>>>>>> *******************************************************************
>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>>>>>>> # description: |
>>>>>>>>> # When using an SSL certificate signed by a CA that is not in
>>>>>>>>> the default
>>>>>>>>> # list of CAs, this environment allows adding a custom CA
>>>>>>>>> certificate to
>>>>>>>>> # the overcloud nodes.
>>>>>>>>> parameter_defaults:
>>>>>>>>> # The content of a CA's SSL certificate file in PEM format. This
>>>>>>>>> is evaluated on the client side.
>>>>>>>>> # Mandatory. This parameter must be set by the user.
>>>>>>>>> # Type: string
>>>>>>>>> SSLRootCertificate: |
>>>>>>>>> -----BEGIN CERTIFICATE-----
>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ***
>>>>>>>>> -----END CERTIFICATE-----
>>>>>>>>>
>>>>>>>>> resource_registry:
>>>>>>>>> OS::TripleO::NodeTLSCAData:
>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> The procedure to create such files was followed using:
>>>>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>>>>>>>
>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a
>>>>>>>>> self-signed IP-based certificate, without DNS.
>>>>>>>>>
>>>>>>>>> Any idea around this error would be of great help.
>>>>>>>>>
>>>>>>>>> --
>>>>>>>>> skype: lokendrarathour
>>>>>>>>>
>>>
>>> --
>>> ~ Lokendra
>>> skype: lokendrarathour
>>>
-- ~ Lokendra skype: lokendrarathour
From the.wade.albright at gmail.com Mon Jul 18 22:06:40 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Mon, 18 Jul 2022 15:06:40 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID:

I was able to use the patches to update the code, but unfortunately the problem is still there for me.

I also tried an RPM upgrade to the versions Julia mentioned had the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released in January 2022. But it did not fix the problem.

I am able to consistently reproduce the error.
- step 1: change the BMC password directly on the node itself
- step 2: update the BMC password (redfish_password) in ironic with 'openstack baremetal node set --driver-info redfish_password='newpass'

After step 1 there are errors in the logs like "Session authentication appears to have been lost at some point in time" and eventually it puts the node into maintenance mode and marks the power state as "none."
After step 2 and taking the host back out of maintenance mode, it goes through a similar set of log entries and puts the node into MM again.

After the above steps, a conductor restart fixes the problem and operations work normally again. Given this it seems like there is still some kind of caching issue.

On Sat, Jul 16, 2022 at 6:01 PM Wade Albright wrote:

> Hi Julia,
>
> Thank you so much for the reply! Hopefully this is the issue. I'll try out
> the patches next week and report back. I'll also email you on Monday about
> the versions, that would be very helpful to know.
>
> Thanks again, really appreciate it.
>
> Wade
>
>
> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger
> wrote:
>
>> Greetings!
>>
>> I believe you need two patches, one in ironic and one in sushy.
>> >> Sushy: >> https://review.opendev.org/c/openstack/sushy/+/832860 >> >> Ironic: >> https://review.opendev.org/c/openstack/ironic/+/820588 >> >> I think it is a variation, and the comment about working after you restart >> the conductor is the big signal to me. I'm on a phone on a bad data >> connection, if you email me on Monday I can see what versions the fixes >> would be in. >> >> For the record, it is a session cache issue, the bug was that the service >> didn't quite know what to do when auth fails. >> >> -Julia >> >> >> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright < >> the.wade.albright at gmail.com> wrote: >> >>> Hi, >>> >>> I'm hitting a problem when trying to update the redfish_password for an >>> existing node. I'm curious to know if anyone else has encountered this >>> problem. I'm not sure if I'm just doing something wrong or if there is a >>> bug. Or if the problem is unique to my setup. >>> >>> I have a node already added into ironic with all the driver details set, >>> and things are working fine. I am able to run deployments. >>> >>> Now I need to change the redfish password on the host. So I update the >>> password for redfish access on the host, then use an 'openstack baremetal >>> node set --driver-info redfish_password=' command to set >>> the new redfish_password. >>> >>> Once this has been done, deployment no longer works. I see redfish >>> authentication errors in the logs and the operation fails. I waited a bit >>> to see if there might just be a delay in updating the password, but after >>> a while it still didn't work. >>> >>> I restarted the conductor, and after that things work fine again. So it >>> seems like the password is cached or something. Is there a way to force the >>> password to update? I even tried removing the redfish credentials and >>> re-adding them, but that didn't work either. Only a conductor restart seems >>> to make the new password work. >>> >>> We are running Xena, using rpm installation on Oracle Linux 8.5. 
>>> >>> Thanks in advance for any help with this issue. >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.wade.albright at gmail.com Mon Jul 18 22:20:32 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Mon, 18 Jul 2022 15:20:32 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: I'm happy to supply some logs, what verbosity level should I use? And should I just embed the logs in email to the list or upload somewhere? On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger wrote: > If you could supply some conductor logs, that would be helpful. It > should be re-authenticating, but obviously we have a larger bug there > we need to find the root issue behind. > > On Mon, Jul 18, 2022 at 3:06 PM Wade Albright > wrote: > > > > I was able to use the patches to update the code, but unfortunately the > problem is still there for me. > > > > I also tried an RPM upgrade to the versions Julia mentioned had the > fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released > in January 2022. But it did not fix the problem. > > > > I am able to consistently reproduce the error. > > - step 1: change BMC password directly on the node itself > > - step 2: update BMC password (redfish_password) in ironic with > 'openstack baremetal node set --driver-info > redfish_password='newpass' > > > > After step 1 there are errors in the logs, with entries like "Session > authentication appears to have been lost at some point in time", and > eventually it puts the node into maintenance mode and marks the power state > as "none." > After step 2 and taking the host back out of maintenance mode, it goes > through a similar set of log entries and puts the node into maintenance mode again. > > > > After the above steps, a conductor restart fixes the problem and > operations work normally again. Given this, it seems like there is still > some kind of caching issue. 
> > > > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > >> > >> Hi Julia, > >> > >> Thank you so much for the reply! Hopefully this is the issue. I'll try > out the patches next week and report back. I'll also email you on Monday > about the versions, that would be very helpful to know. > >> > >> Thanks again, really appreciate it. > >> > >> Wade > >> > >> > >> > >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > >>> > >>> Greetings! > >>> > >>> I believe you need two patches, one in ironic and one in sushy. > >>> > >>> Sushy: > >>> https://review.opendev.org/c/openstack/sushy/+/832860 > >>> > >>> Ironic: > >>> https://review.opendev.org/c/openstack/ironic/+/820588 > >>> > >>> I think it is variation, and the comment about working after you > restart the conductor is the big signal to me. I?m on a phone on a bad data > connection, if you email me on Monday I can see what versions the fixes > would be in. > >>> > >>> For the record, it is a session cache issue, the bug was that the > service didn?t quite know what to do when auth fails. > >>> > >>> -Julia > >>> > >>> > >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > >>>> > >>>> Hi, > >>>> > >>>> I'm hitting a problem when trying to update the redfish_password for > an existing node. I'm curious to know if anyone else has encountered this > problem. I'm not sure if I'm just doing something wrong or if there is a > bug. Or if the problem is unique to my setup. > >>>> > >>>> I have a node already added into ironic with all the driver details > set, and things are working fine. I am able to run deployments. > >>>> > >>>> Now I need to change the redfish password on the host. So I update > the password for redfish access on the host, then use an 'openstack > baremetal node set --driver-info redfish_password=' command > to set the new redfish_password. 
> >>>> > >>>> Once this has been done, deployment no longer works. I see redfish > authentication errors in the logs and the operation fails. I waited a bit > to see if there might just be a delay in updating the password, but after > awhile it still didn't work. > >>>> > >>>> I restarted the conductor, and after that things work fine again. So > it seems like the password is cached or something. Is there a way to force > the password to update? I even tried removing the redfish credentials and > re-adding them, but that didn't work either. Only a conductor restart seems > to make the new password work. > >>>> > >>>> We are running Xena, using rpm installation on Oracle Linux 8.5. > >>>> > >>>> Thanks in advance for any help with this issue. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From the.wade.albright at gmail.com Mon Jul 18 23:04:24 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Mon, 18 Jul 2022 16:04:24 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Sounds good, I will do that tomorrow. Thanks Julia. On Mon, Jul 18, 2022 at 3:27 PM Julia Kreger wrote: > Debug would be best. I think I have an idea what is going on, and this > is a similar variation. If you want, you can email them directly to > me. Specifically only need entries reported by the sushy library and > ironic.drivers.modules.redfish.utils. > > On Mon, Jul 18, 2022 at 3:20 PM Wade Albright > wrote: > > > > I'm happy to supply some logs, what verbosity level should i use? And > should I just embed the logs in email to the list or upload somewhere? > > > > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > >> > >> If you could supply some conductor logs, that would be helpful. It > >> should be re-authenticating, but obviously we have a larger bug there > >> we need to find the root issue behind. 
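[Editor's note: the "session cache" behaviour described in this thread — stale credentials reused until a conductor restart — can be sketched as follows. This is purely illustrative and is not ironic's or sushy's actual code; the function and variable names are hypothetical.]

```python
# Illustrative only: a per-node session cache keyed on the node ID alone
# keeps returning the session built with the old password, mimicking the
# "works until the password changes, fixed only by a restart" symptom.
_cache = {}

def get_session(node_id, username, password):
    # BUG (deliberate, for illustration): the key ignores the credentials,
    # so a later password change never invalidates the cached entry.
    if node_id not in _cache:
        _cache[node_id] = {"auth": (username, password)}
    return _cache[node_id]

get_session("node-1", "root", "oldpass")            # first contact, caches oldpass
stale = get_session("node-1", "root", "newpass")    # password was changed...
print(stale["auth"])                                 # ...but ('root', 'oldpass') is reused

def get_session_fixed(node_id, username, password):
    # Fix: a credential-sensitive key turns a password change into a
    # cache miss, so a fresh session is built with the new password.
    key = (node_id, username, password)
    if key not in _cache:
        _cache[key] = {"auth": (username, password)}
    return _cache[key]

fresh = get_session_fixed("node-1", "root", "newpass")
print(fresh["auth"])  # ('root', 'newpass')
```

Restarting the service clears `_cache` entirely, which is why a conductor restart "fixes" the buggy variant.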
> >> > >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright > >> wrote: > >> > > >> > I was able to use the patches to update the code, but unfortunately > the problem is still there for me. > >> > > >> > I also tried an RPM upgrade to the versions Julia mentioned had the > fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released > in January 2022. But it did not fix the problem. > >> > > >> > I am able to consistently reproduce the error. > >> > - step 1: change BMC password directly on the node itself > >> > - step 2: update BMC password (redfish_password) in ironic with > 'openstack baremetal node set --driver-info > redfish_password='newpass' > >> > > >> > After step 1 there are errors in the logs entries like "Session > authentication appears to have been lost at some point in time" and > eventually it puts the node into maintenance mode and marks the power state > as "none." > >> > After step 2 and taking the host back out of maintenance mode, it > goes through a similar set of log entries puts the node into MM again. > >> > > >> > After the above steps, a conductor restart fixes the problem and > operations work normally again. Given this it seems like there is still > some kind of caching issue. > >> > > >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > >> >> > >> >> Hi Julia, > >> >> > >> >> Thank you so much for the reply! Hopefully this is the issue. I'll > try out the patches next week and report back. I'll also email you on > Monday about the versions, that would be very helpful to know. > >> >> > >> >> Thanks again, really appreciate it. > >> >> > >> >> Wade > >> >> > >> >> > >> >> > >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > >> >>> > >> >>> Greetings! > >> >>> > >> >>> I believe you need two patches, one in ironic and one in sushy. 
> >> >>> > >> >>> Sushy: > >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 > >> >>> > >> >>> Ironic: > >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 > >> >>> > >> >>> I think it is variation, and the comment about working after you > restart the conductor is the big signal to me. I?m on a phone on a bad data > connection, if you email me on Monday I can see what versions the fixes > would be in. > >> >>> > >> >>> For the record, it is a session cache issue, the bug was that the > service didn?t quite know what to do when auth fails. > >> >>> > >> >>> -Julia > >> >>> > >> >>> > >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > >> >>>> > >> >>>> Hi, > >> >>>> > >> >>>> I'm hitting a problem when trying to update the redfish_password > for an existing node. I'm curious to know if anyone else has encountered > this problem. I'm not sure if I'm just doing something wrong or if there is > a bug. Or if the problem is unique to my setup. > >> >>>> > >> >>>> I have a node already added into ironic with all the driver > details set, and things are working fine. I am able to run deployments. > >> >>>> > >> >>>> Now I need to change the redfish password on the host. So I update > the password for redfish access on the host, then use an 'openstack > baremetal node set --driver-info redfish_password=' command > to set the new redfish_password. > >> >>>> > >> >>>> Once this has been done, deployment no longer works. I see redfish > authentication errors in the logs and the operation fails. I waited a bit > to see if there might just be a delay in updating the password, but after > awhile it still didn't work. > >> >>>> > >> >>>> I restarted the conductor, and after that things work fine again. > So it seems like the password is cached or something. Is there a way to > force the password to update? I even tried removing the redfish credentials > and re-adding them, but that didn't work either. 
Only a conductor restart > seems to make the new password work. > >> >>>> > >> >>>> We are running Xena, using rpm installation on Oracle Linux 8.5. > >> >>>> > >> >>>> Thanks in advance for any help with this issue. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Tue Jul 19 05:36:57 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Tue, 19 Jul 2022 15:36:57 +1000 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hey, Doesn't look like there is anything wrong with the certificate there. You would be getting a TLS error if that was the problem. What does your clouds.yaml file look like now? What happens if you run this command from the Undercloud node: $ OS_CLOUD=overcloud openstack endpoint list Do you get the same error? Brendan Shephard Software Engineer Red Hat APAC 193 N Quay Brisbane City QLD 4000 @RedHat Red Hat Red Hat On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour wrote: > Hi Swogat and Vikarna, > We have tried adding the DNS entry for the overcloud domain. 
we are > getting the same error: > > 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | > TIMING | tripleo_keystone_resources : Create identity public endpoint | > undercloud | 0:11:18.785769 | 2.16s > 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | > TASK | Create identity internal endpoint > 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | > FATAL | Create identity internal endpoint | undercloud | > error={"changed": false, "extra_data": {"data": null, "details": "The > request you have made requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list > services: Client Error for url: > https://overcloud-hsc.com:13000/v3/services, The request you have made > requires authentication."} > 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | > TIMING | tripleo_keystone_resources : Create identity internal endpoint | > undercloud | 0:11:21.074605 | 2. > > > Certificate configs: > > [stack at undercloud oc-domain-name]$ cat server.csr.cnf > [req] > default_bits = 2048 > prompt = no > default_md = sha256 > distinguished_name = dn > [dn] > C=IN > ST=UTTAR PRADESH > L=NOIDA > O=HSC > OU=HSC > emailAddress=demo at demo.com > CN=overcloud-hsc.com > [stack at undercloud oc-domain-name]$ cat v3.ext > authorityKeyIdentifier=keyid,issuer > basicConstraints=CA:FALSE > keyUsage = digitalSignature, nonRepudiation, keyEncipherment, > dataEncipherment > subjectAltName = @alt_names > [alt_names] > DNS.1=overcloud-hsc.com > [stack at undercloud oc-domain-name]$ > > the difference we see from others is that we are using self-signed > certificates. > > please let me know in case we need to check something else. Somehow this > issue remains stuck. > > > On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan > wrote: > >> I was facing a similar kind of issue. 
>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 >> Here is the solution that helped me fix it. >> Also make sure the cn that you will use is reachable from undercloud >> (maybe) script should take care of it. >> >> Also please follow Mr. Tathe's mail to add the cn first. >> >> With regards >> Swogat Pradhan >> >> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe >> wrote: >> >>> Hi Lokendra, >>> >>> The CN field is missing. Can you add that and generate the certificate >>> again. >>> >>> CN=ipaddress >>> >>> Also add dns.1=ipaddress under alt_names for precaution. >>> >>> Vikarna >>> >>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, >>> wrote: >>> >>>> HI Vikarna, >>>> Thanks for the inputs. >>>> I am note able to access any tabs in GUI. >>>> [image: image.png] >>>> >>>> to re-state, we are failing at the time of deployment at step4 : >>>> >>>> >>>> PLAY [External deployment step 4] >>>> ********************************************** >>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>>> TASK | External deployment step 4 >>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>>> OK | External deployment step 4 | undercloud -> localhost | result={ >>>> "changed": false, >>>> "msg": "Use --start-at-task 'External deployment step 4' to resume >>>> from this task" >>>> } >>>> [WARNING]: ('undercloud -> localhost', >>>> '525400ae-089b-870a-fab6-0000000000d7') >>>> missing from stats >>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>>> INCLUDED | >>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>> | undercloud >>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>>> TASK | Clean up legacy Cinder keystone catalog entries >>>> 2022-07-13 21:35:25.110635 | 
525400ae-089b-870a-fab6-000000007239 | >>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:24.204562 | 2.48s >>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:26.122584 | 4.40s >>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>> 0:11:26.124296 | 4.40s >>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>>> TASK | Manage Keystone resources for OpenStack services >>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>>> TIMING | Manage Keystone resources for OpenStack services | undercloud | >>>> 0:11:26.169842 | 0.03s >>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>>> TASK | Gather variables for each operating system >>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>>> TIMING | tripleo_keystone_resources : Gather variables for each operating >>>> system | undercloud | 0:11:26.253383 | 0.04s >>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>>> TASK | Create Keystone Admin resources >>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>>> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >>>> undercloud | 0:11:26.299608 | 0.03s >>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>>> INCLUDED | >>>> 
/usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>> undercloud >>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>>> TASK | Create default domain >>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >>>> OK | Create default domain | undercloud >>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>>> TIMING | tripleo_keystone_resources : Create default domain | undercloud | >>>> 0:11:28.437360 | 2.09s >>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >>>> TASK | Create admin and service projects >>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >>>> TIMING | tripleo_keystone_resources : Create admin and service projects | >>>> undercloud | 0:11:28.483468 | 0.03s >>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>>> INCLUDED | >>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>> undercloud >>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>>> TASK | Async creation of Keystone project >>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>>> undercloud | 0:11:29.238078 | 0.72s >>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>>> CHANGED | Async creation of Keystone project | undercloud | item=service >>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>>> undercloud | 0:11:29.586587 | 1.06s >>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project | >>>> undercloud | 0:11:29.587916 | 1.07s >>>> 
2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>>> TASK | Check Keystone project status >>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>>> WAITING | Check Keystone project status | undercloud | 30 retries left >>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>>> OK | Check Keystone project status | undercloud | item=admin >>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>> undercloud | 0:11:35.260666 | 5.66s >>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>>> OK | Check Keystone project status | undercloud | item=service >>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>> undercloud | 0:11:35.494729 | 5.89s >>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>> undercloud | 0:11:35.498771 | 5.89s >>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>>> TASK | Create admin role >>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>>> OK | Create admin role | undercloud >>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>>> 0:11:37.725949 | 2.20s >>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>>> TASK | Create _member_ role >>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>>> SKIPPED | Create _member_ role | undercloud >>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >>>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >>>> 0:11:37.783369 | 0.04s >>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>>> TASK | Create 
admin user >>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>>> CHANGED | Create admin user | undercloud >>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >>>> 0:11:41.145472 | 3.34s >>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>>> TASK | Assign admin role to admin project for admin user >>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>>> OK | Assign admin role to admin project for admin user | undercloud >>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>>> TIMING | tripleo_keystone_resources : Assign admin role to admin project >>>> for admin user | undercloud | 0:11:44.288848 | 3.13s >>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>>> TASK | Assign _member_ role to admin project for admin user >>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>>> SKIPPED | Assign _member_ role to admin project for admin user | undercloud >>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin project >>>> for admin user | undercloud | 0:11:44.346479 | 0.04s >>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>>> TASK | Create identity service >>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>>> OK | Create identity service | undercloud >>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>>> TIMING | tripleo_keystone_resources : Create identity service | undercloud >>>> | 0:11:46.022362 | 1.66s >>>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>>> TASK | Create identity public endpoint >>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>>> OK | Create identity public endpoint | undercloud >>>> 2022-07-13 21:35:49.141158 | 
525400ae-089b-870a-fab6-0000000072b5 | >>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>> undercloud | 0:11:48.233349 | 2.19s >>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>>> TASK | Create identity internal endpoint >>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >>>> FATAL | Create identity internal endpoint | undercloud | error={"changed": >>>> false, "extra_data": {"data": null, "details": "The request you have made >>>> requires authentication.", "response": >>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>> The request you have made requires authentication."} >>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>>> undercloud | 0:11:50.660654 | 2.41s >>>> >>>> PLAY RECAP >>>> ********************************************************************* >>>> localhost : ok=1 changed=0 unreachable=0 >>>> failed=0 skipped=2 rescued=0 ignored=0 >>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>> failed=0 skipped=214 rescued=0 ignored=0 >>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>> failed=0 skipped=198 rescued=0 ignored=0 >>>> undercloud : ok=39 changed=7 unreachable=0 >>>> failed=1 skipped=6 rescued=0 ignored=0 >>>> >>>> Also : >>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>> [req] >>>> default_bits = 2048 >>>> prompt = no >>>> default_md = sha256 >>>> distinguished_name = dn >>>> [dn] >>>> C=IN >>>> ST=UTTAR 
PRADESH >>>> L=NOIDA >>>> O=HSC >>>> OU=HSC >>>> emailAddress=demo at demo.com >>>> >>>> v3.ext: >>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>> authorityKeyIdentifier=keyid,issuer >>>> basicConstraints=CA:FALSE >>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>> dataEncipherment >>>> subjectAltName = @alt_names >>>> [alt_names] >>>> IP.1=fd00:fd00:fd00:9900::81 >>>> >>>> Using these files we create other certificates. >>>> Please check and let me know in case we need anything else. >>>> >>>> >>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe >>>> wrote: >>>> >>>>> Hi Lokendra, >>>>> >>>>> Are you able to access all the tabs in the OpenStack dashboard without >>>>> any error? If not, please retry generating the certificate. Also, share the >>>>> openssl.cnf or server.cnf. >>>>> >>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Team, >>>>>> Any input on this case raised. >>>>>> >>>>>> Thanks, >>>>>> Lokendra >>>>>> >>>>>> >>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Shephard/Swogat, >>>>>>> I tried changing the setting as suggested and it looks like it has >>>>>>> failed at step 4 with error: >>>>>>> >>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>>>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>>>>> 0:24:47.736198 | 2.21s >>>>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>>>> TASK | Create identity internal endpoint >>>>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>> request you have made requires authentication.", "response": >>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>> 
authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>> The request you have made requires authentication."} >>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>>>> >>>>>>> >>>>>>> Checking further the endpoint list: >>>>>>> I see only one endpoint for keystone is gettin created. >>>>>>> >>>>>>> DeprecationWarning >>>>>>> >>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>> | ID | Region | Service Name | >>>>>>> Service Type | Enabled | Interface | URL >>>>>>> | >>>>>>> >>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>>>> identity | True | admin | http://30.30.30.173:35357 >>>>>>> | >>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>>>> | >>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>>>> | >>>>>>> >>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>> >>>>>>> >>>>>>> it looks like something related to the SSL, we have also verified >>>>>>> that the GUI login screen shows that Certificates are applied. >>>>>>> exploring more in logs, meanwhile any suggestions or know >>>>>>> observation would be of great help. >>>>>>> thanks again for the support. 
>>>>>>> >>>>>>> Best Regards, >>>>>>> Lokendra >>>>>>> >>>>>>> >>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>> >>>>>>>> I had faced a similar kind of issue, for ip based setup you need to >>>>>>>> specify the domain name as the ip that you are going to use, this error is >>>>>>>> showing up because the ssl is ip based but the fqdns seems to be >>>>>>>> undercloud.com or overcloud.example.com. >>>>>>>> I think for undercloud you can change the undercloud.conf. >>>>>>>> >>>>>>>> And will it work if we specify clouddomain parameter to the IP >>>>>>>> address for overcloud? because it seems he has not specified the >>>>>>>> clouddomain parameter and overcloud.example.com is the default >>>>>>>> domain for overcloud.example.com. >>>>>>>> >>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>> >>>>>>>>> What is the domain name you have specified in the undercloud.conf >>>>>>>>> file? >>>>>>>>> And what is the fqdn name used for the generation of the SSL cert? 
>>>>>>>>> >>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Team, >>>>>>>>>> We were trying to install overcloud with SSL enabled for which >>>>>>>>>> the UC is installed, but OC install is getting failed at step 4: >>>>>>>>>> >>>>>>>>>> ERROR >>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>> 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac >>>>>>>>>> | FATAL | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | item={'service_name': 'cinderv3', 'service_type': 'volume'} | >>>>>>>>>> error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": >>>>>>>>>> "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": >>>>>>>>>> "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover >>>>>>>>>> available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>> server_hostname)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>> (most recent call last):\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>> 
exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>> last):\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>> last):\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>> in _send_request\n raise >>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>> run_globals)\n File >>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>> line 185, in \n File >>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>> line 181, in main\n File >>>>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>> line 407, in __call__\n File >>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>> line 141, in run\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>> File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>> **kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>> **kwargs)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>> 271, in get_endpoint_data\n 
service_catalog = >>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>> 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac >>>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | 0:11:01.271914 | 2.47s >>>>>>>>>> 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac >>>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | 0:11:01.273659 | 2.47s >>>>>>>>>> >>>>>>>>>> PLAY RECAP >>>>>>>>>> ********************************************************************* >>>>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>> 
overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>> Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> in the deploy.sh: >>>>>>>>>> >>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>> --networks-file >>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>> --baremetal-deployment >>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>> --network-config \ >>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>> \ >>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml 
>>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>> \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>> -e >>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>> >>>>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>>>> modifications: >>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>> Passed as is in the defaults. >>>>>>>>>> enable-tls.yaml: >>>>>>>>>> >>>>>>>>>> # >>>>>>>>>> ******************************************************************* >>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>> it. >>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>> instead >>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>> # >>>>>>>>>> ******************************************************************* >>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>> # description: | >>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>> deployments. >>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>> # environments must also be used. >>>>>>>>>> parameter_defaults: >>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>> # Type: boolean >>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>> >>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>> services in the public network. 
>>>>>>>>>> # Type: string >>>>>>>>>> PublicTLSCAFile: >>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>> >>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>> format. >>>>>>>>>> # Type: string >>>>>>>>>> SSLRootCertificate: | >>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>> >>>>>>>>>> SSLCertificate: | >>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>> # The content of an SSL intermediate CA certificate in PEM >>>>>>>>>> format. >>>>>>>>>> # Type: string >>>>>>>>>> SSLIntermediateCertificate: '' >>>>>>>>>> >>>>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>>>> # Type: string >>>>>>>>>> SSLKey: | >>>>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>> -----END PRIVATE KEY----- >>>>>>>>>> >>>>>>>>>> # ****************************************************** >>>>>>>>>> # Static parameters - these are values that must be >>>>>>>>>> # included in the environment but should not be changed. >>>>>>>>>> # ****************************************************** >>>>>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>>>>> controller. >>>>>>>>>> # Type: string >>>>>>>>>> DeployedSSLCertificatePath: >>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>>>> >>>>>>>>>> # ********************* >>>>>>>>>> # End static parameters >>>>>>>>>> # ********************* >>>>>>>>>> >>>>>>>>>> inject-trust-anchor.yaml >>>>>>>>>> >>>>>>>>>> # >>>>>>>>>> ******************************************************************* >>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>> it. 
>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>> instead >>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>> # >>>>>>>>>> ******************************************************************* >>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>>>> # description: | >>>>>>>>>> # When using an SSL certificate signed by a CA that is not in >>>>>>>>>> the default >>>>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>>>> certificate to >>>>>>>>>> # the overcloud nodes. >>>>>>>>>> parameter_defaults: >>>>>>>>>> # The content of a CA's SSL certificate file in PEM format. >>>>>>>>>> This is evaluated on the client side. >>>>>>>>>> # Mandatory. This parameter must be set by the user. >>>>>>>>>> # Type: string >>>>>>>>>> SSLRootCertificate: | >>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>> >>>>>>>>>> resource_registry: >>>>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The procedure to create such files was followed using: >>>>>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a >>>>>>>>>> *self-signed, IP-based certificate without DNS*. >>>>>>>>>> >>>>>>>>>> Any idea around this error would be of great help. >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> skype: lokendrarathour >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>> >>>>>>> >>>>>>> >>>>>> >>>>>> -- >>>>>> >>>>> >>>> >>>> -- >>>> ~ Lokendra >>>> skype: lokendrarathour >>>> >>>> >>>> > > -- > ~ Lokendra > skype: lokendrarathour > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
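Since the goal stated above is a self-signed, IP-based certificate without DNS, the address clients connect to (fd00:fd00:fd00:9900::2ef in the traceback) must appear as an IP entry in the certificate's subjectAltName; a CN of undercloud.com alone is exactly what produces the mismatch error. A hedged sketch, assuming OpenSSL 1.1.1+ for `-addext` and using illustrative file names:

```shell
# Self-signed cert whose SAN carries the public VIP as an IP entry,
# so verification of https://[fd00:fd00:fd00:9900::2ef]:13000 can match.
# File names are illustrative, not the deployment's real paths.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout overcloud.key -out overcloud.crt \
  -subj "/CN=fd00:fd00:fd00:9900::2ef" \
  -addext "subjectAltName=IP:fd00:fd00:fd00:9900::2ef" 2>/dev/null

# Confirm the address landed in the SAN as an IP (not DNS) entry:
openssl x509 -in overcloud.crt -noout -ext subjectAltName
```

The resulting certificate and key contents would then go into the SSLCertificate / SSLKey parameters of the enable-tls.yaml shown above.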
Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From lokendrarathour at gmail.com Tue Jul 19 06:16:35 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Tue, 19 Jul 2022 11:46:35 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Brendan, Thanks for the inputs. When I run the command as you suggested, I get this: (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | identity | True | admin | http://30.30.30.173:35357 | | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | identity | True | public | https://overcloud-hsc.com:13000 | +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ (undercloud) [stack at undercloud ~]$ On another note, here is what I observed: - The HAProxy container is not running. - [root at overcloud-controller-2 stdouts]# podman ps -a | grep haproxy e91dbde042db undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo 24 hours ago Exited (1) Less than a second ago container-puppet-haproxy\ - Checking logs: - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] 2022-07-19T08:47:00.496323705+05:30 stderr F + . 
kolla_extend_start 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind fd00:fd00:fd00:9900::81:13776' : 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library will use an automatically generated DH parameter. automatically2022-07-19T08:47:00.513967576+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind fd00:fd00:fd00:9900::81:13292' : 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind fd00:fd00:fd00:9900::81:13004' : 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 
2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind fd00:fd00:fd00:9900::81:13005' : 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind fd00:fd00:fd00:2000::326:443' : - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind fd00:fd00:fd00:9900::81:13000' : 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind fd00:fd00:fd00:9900::81:13696' : 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind fd00:fd00:fd00:9900::81:13080' : 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 
2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind fd00:fd00:fd00:9900::81:13774' : 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind fd00:fd00:fd00:9900::81:13778' : 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind fd00:fd00:fd00:9900::81:13808' : 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library will use an automatically generated DH parameter. 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] 199/084700 (7) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear. 
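The repeated warnings above are HAProxy noting that overcloud_endpoint.pem contains no DH parameters, so it falls back to generated 1024-bit ones; they are advisory and not, by themselves, why the container exits. One common remedy (a sketch only; whether the deployment tooling would regenerate that PEM on the next run is not covered in this thread) is to append explicit 2048-bit parameters to the bundle HAProxy loads:

```shell
# Generate explicit 2048-bit DH parameters (can take a few seconds)...
openssl dhparam -out dhparam.pem 2048 2>/dev/null

# ...sanity-check them, then append to the PEM bundle HAProxy binds
# with. The target path is the one from the warning above and is left
# commented out so the sketch stays side-effect free:
openssl dhparam -in dhparam.pem -check -noout
# cat dhparam.pem >> /etc/pki/tls/private/overcloud_endpoint.pem
```

Alternatively, setting `tune.ssl.default-dh-param 2048` in the haproxy.cfg global section silences the same warning.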
- pcs status also show that proxy is down for the controller with VIP: - Failed Resource Actions: * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 'error' (1): call=139, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', queued=0ms, exec=1222ms * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 'error' (1): call=191, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', queued=0ms, exec=1171ms * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 'error' (1): call=193, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', queued=0ms, exec=1256ms do let me know in case we need anything more around it. thanks once again for the support. -Lokendra On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard wrote: > Hey, > > Doesn't look like there is anything wrong with the certificate there. You > would be getting a TLS error if that was the problem. > > What does your clouds.yaml file look like now? What happens if you run > this command from the Undercloud node: > $ OS_CLOUD=overcloud openstack endpoint list > > Do you get the same error? > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Swogat and Vikarna, >> We have tried adding the DNS entry for the overcloud domain. 
we are >> getting the same error: >> >> 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | >> TIMING | tripleo_keystone_resources : Create identity public endpoint | >> undercloud | 0:11:18.785769 | 2.16s >> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | >> TASK | Create identity internal endpoint >> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | >> FATAL | Create identity internal endpoint | undercloud | >> error={"changed": false, "extra_data": {"data": null, "details": "The >> request you have made requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: >> https://overcloud-hsc.com:13000/v3/services, The request you have made >> requires authentication."} >> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | >> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >> undercloud | 0:11:21.074605 | 2. >> >> >> Certificate configs: >> >> [stack at undercloud oc-domain-name]$ cat server.csr.cnf >> [req] >> default_bits = 2048 >> prompt = no >> default_md = sha256 >> distinguished_name = dn >> [dn] >> C=IN >> ST=UTTAR PRADESH >> L=NOIDA >> O=HSC >> OU=HSC >> emailAddress=demo at demo.com >> CN=overcloud-hsc.com >> [stack at undercloud oc-domain-name]$ cat v3.ext >> authorityKeyIdentifier=keyid,issuer >> basicConstraints=CA:FALSE >> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >> dataEncipherment >> subjectAltName = @alt_names >> [alt_names] >> DNS.1=overcloud-hsc.com >> [stack at undercloud oc-domain-name]$ >> >> the difference we see from others is that we are using self-signed >> certificates. >> >> please let me know in case we need to check something else. Somehow this >> issue remains stuck. 
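For context, server.csr.cnf and v3.ext as posted above are normally consumed in a three-step openssl flow: a CA, a server key plus CSR generated from the .cnf, and a signing step that applies the v3.ext extensions (which is where alt_names enters the final certificate). A self-contained sketch with the two configs inlined (slightly trimmed from the mail; the demo CA name is made up):

```shell
# Recreate (trimmed) copies of the two config files from the mail above.
cat > server.csr.cnf <<'EOF'
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
CN = overcloud-hsc.com
EOF

cat > v3.ext <<'EOF'
basicConstraints = CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = overcloud-hsc.com
EOF

# 1) throwaway self-signed CA, 2) server key + CSR from the .cnf,
# 3) sign the CSR, applying the v3.ext extensions.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout ca.key -out ca.crt -subj "/CN=demo-ca" 2>/dev/null
openssl req -new -newkey rsa:2048 -nodes \
  -keyout server.key -out server.csr -config server.csr.cnf 2>/dev/null
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -out server.crt -days 1 -sha256 -extfile v3.ext 2>/dev/null

# The signed cert should now carry both the CN and the DNS SAN:
openssl x509 -in server.crt -noout -subject -ext subjectAltName
```

Note that with these files the certificate only names overcloud-hsc.com, so clients must reach the endpoint by that name (via DNS or /etc/hosts), not by IP.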
>> >> >> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan >> wrote: >> >>> I was facing a similar kind of issue. >>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 >>> Here is the solution that helped me fix it. >>> Also make sure the cn that you will use is reachable from undercloud >>> (maybe) script should take care of it. >>> >>> Also please follow Mr. Tathe's mail to add the cn first. >>> >>> With regards >>> Swogat Pradhan >>> >>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe >>> wrote: >>> >>>> Hi Lokendra, >>>> >>>> The CN field is missing. Can you add that and generate the certificate >>>> again. >>>> >>>> CN=ipaddress >>>> >>>> Also add dns.1=ipaddress under alt_names for precaution. >>>> >>>> Vikarna >>>> >>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> HI Vikarna, >>>>> Thanks for the inputs. >>>>> I am note able to access any tabs in GUI. >>>>> [image: image.png] >>>>> >>>>> to re-state, we are failing at the time of deployment at step4 : >>>>> >>>>> >>>>> PLAY [External deployment step 4] >>>>> ********************************************** >>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>> TASK | External deployment step 4 >>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>> OK | External deployment step 4 | undercloud -> localhost | result={ >>>>> "changed": false, >>>>> "msg": "Use --start-at-task 'External deployment step 4' to resume >>>>> from this task" >>>>> } >>>>> [WARNING]: ('undercloud -> localhost', >>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>> missing from stats >>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>>>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>>>> INCLUDED | >>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>> | undercloud 
>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>>>> TASK | Clean up legacy Cinder keystone catalog entries >>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:24.204562 | 2.48s >>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:26.122584 | 4.40s >>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>> 0:11:26.124296 | 4.40s >>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>>>> TASK | Manage Keystone resources for OpenStack services >>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>>>> TIMING | Manage Keystone resources for OpenStack services | undercloud | >>>>> 0:11:26.169842 | 0.03s >>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>>>> TASK | Gather variables for each operating system >>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>>>> TIMING | tripleo_keystone_resources : Gather variables for each operating >>>>> system | undercloud | 0:11:26.253383 | 0.04s >>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>>>> TASK | Create Keystone Admin resources >>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>>>> TIMING | tripleo_keystone_resources : 
Create Keystone Admin resources | >>>>> undercloud | 0:11:26.299608 | 0.03s >>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>>>> INCLUDED | >>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>> undercloud >>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>>>> TASK | Create default domain >>>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >>>>> OK | Create default domain | undercloud >>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>>>> TIMING | tripleo_keystone_resources : Create default domain | undercloud >>>>> | 0:11:28.437360 | 2.09s >>>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >>>>> TASK | Create admin and service projects >>>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >>>>> TIMING | tripleo_keystone_resources : Create admin and service projects | >>>>> undercloud | 0:11:28.483468 | 0.03s >>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>>>> INCLUDED | >>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>> undercloud >>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>>>> TASK | Async creation of Keystone project >>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>>>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>> | undercloud | 0:11:29.238078 | 0.72s >>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>>>> CHANGED | Async creation of Keystone project | undercloud | item=service >>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>> | undercloud | 0:11:29.586587 | 1.06s 
>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>> | undercloud | 0:11:29.587916 | 1.07s >>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>>>> TASK | Check Keystone project status >>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>>>> WAITING | Check Keystone project status | undercloud | 30 retries left >>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>>>> OK | Check Keystone project status | undercloud | item=admin >>>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>> undercloud | 0:11:35.260666 | 5.66s >>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>>>> OK | Check Keystone project status | undercloud | item=service >>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>> undercloud | 0:11:35.494729 | 5.89s >>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>> undercloud | 0:11:35.498771 | 5.89s >>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>>>> TASK | Create admin role >>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>>>> OK | Create admin role | undercloud >>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>>>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>>>> 0:11:37.725949 | 2.20s >>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>> TASK | Create _member_ role >>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>> SKIPPED | Create _member_ role | undercloud >>>>> 2022-07-13 21:35:38.691172 | 
525400ae-089b-870a-fab6-0000000072b0 | >>>>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >>>>> 0:11:37.783369 | 0.04s >>>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>> TASK | Create admin user >>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>> CHANGED | Create admin user | undercloud >>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >>>>> 0:11:41.145472 | 3.34s >>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>> TASK | Assign admin role to admin project for admin user >>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>> OK | Assign admin role to admin project for admin user | undercloud >>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>> TIMING | tripleo_keystone_resources : Assign admin role to admin project >>>>> for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>> TASK | Assign _member_ role to admin project for admin user >>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>> SKIPPED | Assign _member_ role to admin project for admin user | undercloud >>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>> TASK | Create identity service >>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>> OK | Create identity service | undercloud >>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>> TIMING | tripleo_keystone_resources : Create identity service | >>>>> undercloud | 0:11:46.022362 | 1.66s >>>>> 2022-07-13 
21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>> TASK | Create identity public endpoint >>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>> OK | Create identity public endpoint | undercloud >>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>> undercloud | 0:11:48.233349 | 2.19s >>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>> TASK | Create identity internal endpoint >>>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>> FATAL | Create identity internal endpoint | undercloud | >>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>> request you have made requires authentication.", "response": >>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>> The request you have made requires authentication."} >>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>>>> undercloud | 0:11:50.660654 | 2.41s >>>>> >>>>> PLAY RECAP >>>>> ********************************************************************* >>>>> localhost : ok=1 changed=0 unreachable=0 >>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>> undercloud : ok=39 
changed=7 unreachable=0 >>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>> >>>>> Also : >>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>> [req] >>>>> default_bits = 2048 >>>>> prompt = no >>>>> default_md = sha256 >>>>> distinguished_name = dn >>>>> [dn] >>>>> C=IN >>>>> ST=UTTAR PRADESH >>>>> L=NOIDA >>>>> O=HSC >>>>> OU=HSC >>>>> emailAddress=demo at demo.com >>>>> >>>>> v3.ext: >>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>> authorityKeyIdentifier=keyid,issuer >>>>> basicConstraints=CA:FALSE >>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>> dataEncipherment >>>>> subjectAltName = @alt_names >>>>> [alt_names] >>>>> IP.1=fd00:fd00:fd00:9900::81 >>>>> >>>>> Using these files we create other certificates. >>>>> Please check and let me know in case we need anything else. >>>>> >>>>> >>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe >>>>> wrote: >>>>> >>>>>> Hi Lokendra, >>>>>> >>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>> share the openssl.cnf or server.cnf. >>>>>> >>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Team, >>>>>>> Any input on this case raised. 
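Applying Vikarna's suggestions to the server.csr.cnf quoted earlier would give something like the following sketch. The CN/SAN value is the public endpoint IP from this thread and `req_extensions = v3_req` is added so the SANs are actually embedded in the CSR itself; adjust the values for your own deployment:

```
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
req_extensions = v3_req
[dn]
C=IN
ST=UTTAR PRADESH
L=NOIDA
O=HSC
OU=HSC
CN=fd00:fd00:fd00:9900::81
emailAddress=demo@demo.com
[v3_req]
subjectAltName = @alt_names
[alt_names]
IP.1=fd00:fd00:fd00:9900::81
DNS.1=fd00:fd00:fd00:9900::81
```

Note that clients connecting by IP match against the IP.1 entry, not CN or DNS entries (strict validators expect hostnames, not IP literals, in DNS.1). Also make sure the signing step copies the extensions into the certificate, e.g. by passing `-extfile v3.ext` to `openssl x509 -req`; otherwise the SANs stay in the CSR only.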
>>>>>>> >>>>>>> Thanks, >>>>>>> Lokendra >>>>>>> >>>>>>> >>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Shephard/Swogat, >>>>>>>> I tried changing the setting as suggested and it looks like it has >>>>>>>> failed at step 4 with error: >>>>>>>> >>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | >>>>>>>> tripleo_keystone_resources : Create identity public endpoint | undercloud | >>>>>>>> 0:24:47.736198 | 2.21s >>>>>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>>>>> TASK | Create identity internal endpoint >>>>>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | >>>>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>> request you have made requires authentication.", "response": >>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>> The request you have made requires authentication."} >>>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>>>>> >>>>>>>> >>>>>>>> Checking the endpoint list further: >>>>>>>> I see only one endpoint for keystone getting created. 
>>>>>>>> >>>>>>>> DeprecationWarning >>>>>>>> >>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>> | ID | Region | Service Name | >>>>>>>> Service Type | Enabled | Interface | URL >>>>>>>> | >>>>>>>> >>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>>>>> identity | True | admin | http://30.30.30.173:35357 >>>>>>>> | >>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>>>>> | >>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>>>>> | >>>>>>>> >>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>> >>>>>>>> >>>>>>>> It looks like something related to SSL; we have also verified >>>>>>>> that the GUI login screen shows that certificates are applied. >>>>>>>> We are still exploring the logs; meanwhile, any suggestions or known >>>>>>>> observations would be of great help. >>>>>>>> Thanks again for the support. >>>>>>>> >>>>>>>> Best Regards, >>>>>>>> Lokendra >>>>>>>> >>>>>>>> >>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>> >>>>>>>>> I had faced a similar kind of issue. For an IP-based setup you need >>>>>>>>> to specify the domain name as the IP that you are going to use; this error >>>>>>>>> is showing up because the SSL cert is IP-based but the FQDN seems to be >>>>>>>>> undercloud.com or overcloud.example.com. >>>>>>>>> I think for the undercloud you can change undercloud.conf. 
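Swogat's point can be illustrated with a toy version of the hostname check that this SSL error ultimately comes from. This is illustrative code only, not the actual urllib3/ssl implementation (it skips wildcards and CN fallback): an IP-literal URL must match an IP subjectAltName entry, so a cert carrying only a DNS name can never validate for an IP-based endpoint.

```python
import ipaddress

def host_matches_cert(host, dns_sans, ip_sans):
    """Simplified sketch of TLS server-identity checking: an IP-literal
    host must match an IP SAN; names match DNS SANs case-insensitively."""
    try:
        ip = ipaddress.ip_address(host)
    except ValueError:
        # Not an IP literal: compare against DNS names.
        return host.lower() in (name.lower() for name in dns_sans)
    return any(ip == ipaddress.ip_address(san) for san in ip_sans)

# Cert carrying only a DNS name, as in the failing deployment:
cn_only = host_matches_cert("fd00:fd00:fd00:9900::2ef",
                            dns_sans=["undercloud.com"], ip_sans=[])
# Cert carrying the endpoint IP as an IP SAN, as suggested in the thread:
with_ip_san = host_matches_cert("fd00:fd00:fd00:9900::2ef",
                                dns_sans=[],
                                ip_sans=["fd00:fd00:fd00:9900::2ef"])
```

The first check fails and the second succeeds, which is why regenerating the certificate with the endpoint IP in the SAN list resolves the "hostname ... doesn't match" error.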
>>>>>>>>> >>>>>>>>> And will it work if we set the CloudDomain parameter to the IP >>>>>>>>> address for the overcloud? It seems he has not specified the >>>>>>>>> CloudDomain parameter, and overcloud.example.com is the default >>>>>>>>> domain for the overcloud. >>>>>>>>> >>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> What is the domain name you have specified in the undercloud.conf >>>>>>>>>> file? >>>>>>>>>> And what is the FQDN used for the generation of the SSL cert? >>>>>>>>>> >>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Team, >>>>>>>>>>> We were trying to install overcloud with SSL enabled for which >>>>>>>>>>> the UC is installed, but the OC install is failing at step 4: >>>>>>>>>>> >>>>>>>>>>> ERROR >>>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>> 2022-07-08 17:03:23.606739 | >>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder >>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3', >>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var": >>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false, >>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3", >>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available >>>>>>>>>>> identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>> server_hostname)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling 
of the above >>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>> last):\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>> last):\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>> 
authenticated=authenticated)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>> in _send_request\n raise >>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>> run_globals)\n File >>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>> line 185, in \n File >>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>> 
line 181, in main\n File >>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>> line 141, in run\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>> File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>> **kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>> **kwargs)\n File >>>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' % >>>>>>>>>>> e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>>>>> that your auth_url is correct. SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>> >>>>>>>>>>> PLAY RECAP >>>>>>>>>>> ********************************************************************* >>>>>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 
>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>> Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> in the deploy.sh: >>>>>>>>>>> >>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>> --networks-file >>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>> --baremetal-deployment >>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>> --network-config \ >>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>> \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>> -e >>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>> >>>>>>>>>>> The additional lines (highlighted in yellow in the original mail) were passed with >>>>>>>>>>> modifications: >>>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>> >>>>>>>>>>> # >>>>>>>>>>> ******************************************************************* >>>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>>> it. >>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>> instead >>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>> # >>>>>>>>>>> ******************************************************************* >>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>> # description: | >>>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>>> deployments. >>>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>> # environments must also be used. 
>>>>>>>>>>> parameter_defaults: >>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>>> # Type: boolean >>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>> >>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>> services in the public network. >>>>>>>>>>> # Type: string >>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>> >>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>> format. >>>>>>>>>>> # Type: string >>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ** >>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>> >>>>>>>>>>> SSLCertificate: | >>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ** >>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>> # The content of an SSL intermediate CA certificate in PEM >>>>>>>>>>> format. >>>>>>>>>>> # Type: string >>>>>>>>>>> SSLIntermediateCertificate: '' >>>>>>>>>>> >>>>>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>>>>> # Type: string >>>>>>>>>>> SSLKey: | >>>>>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ** >>>>>>>>>>> -----END PRIVATE KEY----- >>>>>>>>>>> >>>>>>>>>>> # ****************************************************** >>>>>>>>>>> # Static parameters - these are values that must be >>>>>>>>>>> # included in the environment but should not be changed. >>>>>>>>>>> # ****************************************************** >>>>>>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>>>>>> controller. 
>>>>>>>>>>> # Type: string >>>>>>>>>>> DeployedSSLCertificatePath: >>>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>>>>> >>>>>>>>>>> # ********************* >>>>>>>>>>> # End static parameters >>>>>>>>>>> # ********************* >>>>>>>>>>> >>>>>>>>>>> inject-trust-anchor.yaml >>>>>>>>>>> >>>>>>>>>>> # >>>>>>>>>>> ******************************************************************* >>>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>>> it. >>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>> instead >>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>> # >>>>>>>>>>> ******************************************************************* >>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>>>>> # description: | >>>>>>>>>>> # When using an SSL certificate signed by a CA that is not in >>>>>>>>>>> the default >>>>>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>>>>> certificate to >>>>>>>>>>> # the overcloud nodes. >>>>>>>>>>> parameter_defaults: >>>>>>>>>>> # The content of a CA's SSL certificate file in PEM format. >>>>>>>>>>> This is evaluated on the client side. >>>>>>>>>>> # Mandatory. This parameter must be set by the user. >>>>>>>>>>> # Type: string >>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>> ----*** CERTIFICATE LINES TRIMMED ** >>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>> >>>>>>>>>>> resource_registry: >>>>>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The procedure to create such files was followed using: >>>>>>>>>>> Deploying with SSL - 
TripleO 3.0.0 documentation (openstack.org) >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> The idea is to deploy overcloud with SSL enabled, i.e. *Self-signed >>>>>>>>>>> IP-based certificate, without DNS*. >>>>>>>>>>> >>>>>>>>>>> Any idea around this error would be of great help. >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> skype: lokendrarathour >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> >>>>>>>> >>>>>>>> >>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> >>>>>> >>>>> >>>>> -- >>>>> ~ Lokendra >>>>> skype: lokendrarathour >>>>> >>>>> >>>>> >> >> -- >> ~ Lokendra >> skype: lokendrarathour >> >> >> -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From Danny.Webb at thehutgroup.com Tue Jul 19 13:47:52 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Tue, 19 Jul 2022 13:47:52 +0000 Subject: [dev] directions to the right project team In-Reply-To: References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: It was a conscious decision by the keystone team back in 2015 as far as we can tell. There was a large discussion on the mailing list regarding this that seemed to have left this issue unresolved (most specifically around user pagination, but this seems to have affected project / domain elements as well). https://lists.openstack.org/pipermail/openstack-dev/2015-August/thread.html#72082 Ultimately what we're seeking to do is start a discussion within the keystone / horizon (and potentially skyline) community about how we can rectify the current issues we're facing around the usability of the UX portion of keystone elements. E.g., if pagination isn't the way the keystone community wants to go, should we instead look at having dynamic filtering in the UI? Are there any other options that people can think of that might be a better way forward? 
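As a stopgap while the API returns unpaginated listings, a UI could page and filter client-side over the full result set. A minimal sketch; the helpers below are illustrative only, not keystone, horizon, or skyline APIs:

```python
# Client-side paging/filtering over a full (unpaginated) project listing.
# paginate() and name_filter() are hypothetical helpers for illustration.

def paginate(items, page, per_page):
    """Slice a full listing into pages (page numbers are 1-indexed)."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

def name_filter(items, substring):
    """Case-insensitive substring match on each item's 'name' field."""
    needle = substring.lower()
    return [item for item in items if needle in item["name"].lower()]

# Simulate a keystone listing of 250 projects named project-000..project-249.
projects = [{"name": f"project-{n:03d}"} for n in range(250)]

page_2 = paginate(projects, page=2, per_page=50)
matches = name_filter(projects, "project-10")
```

The obvious drawback is that the whole listing must still be fetched and held by the client on every request, which is exactly the scalability problem server-side pagination (or server-side filtering) would avoid.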
________________________________ From: Julia Kreger Sent: 19 July 2022 14:21 To: Danny Webb Cc: Dmitriy Rabotyagov ; openstack-discuss Subject: Re: [dev] directions to the right project team CAUTION: This email originates from outside THG On Tue, Jul 19, 2022 at 3:44 AM Danny Webb wrote: > > Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. > This is quite concerning, and a quick look at the code confirms it. Mostly. There are remnants of "hints" and SQL Query filtering, but the internal limit is just a truncation which seems bad as well. https://github.com/openstack/keystone/blame/d7b1d57cae738183f8d85413e942402a8a4efb31/keystone/server/flask/common.py#L675 This seems like a fundamental performance oriented feature because the overhead in data conversion can be quite a bit when you have a large number of well... any objects being returned from a database. Does anyone know if a bug is open for this issue? Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. 
Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Jul 19 14:21:13 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 19 Jul 2022 15:21:13 +0100 Subject: Use redundancy between multiple controller nodes [kolla] In-Reply-To: References: Message-ID: On Tue, 2022-07-19 at 13:51 +0100, A Monster wrote: > I've deployed openstack xena using *kolla-ansible* on *centos 8 stream*, > I've used multiple controller nodes, a bunch of compute nodes and a storage > cluster. > Controller nodes include network services. > > I want to know how can I successfully set up redundancy between these > controller nodes, in a way that some nodes are active while others are on > standby state, and would only become active in case when the active nodes > are to fail. in general that is not needed with openstack. most services are partly or mostly stateless, using the db to store any relevant persistent state. that generally means that, for example, you can have n nova api processes active at any time and you do not need to use external management of the service to have some be active and others be passive. in general such an active/standby configuration is an antipattern and should not need to be done with openstack.
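for example, an api endpoint fronted by the control plane vip looks roughly like this in haproxy terms (a sketch only, with made-up names and addresses; kolla-ansible renders the real haproxy.cfg for you):

```
listen nova_api
    bind 192.0.2.10:8774   # internal vip, held by keepalived on one controller
    balance roundrobin     # every backend is active at the same time
    server controller1 192.0.2.11:8774 check inter 2000 rise 2 fall 5
    server controller2 192.0.2.12:8774 check inter 2000 rise 2 fall 5
    server controller3 192.0.2.13:8774 check inter 2000 rise 2 fall 5
```

if a backend fails its health check it is dropped from rotation, and if the controller holding the vip dies keepalived moves the vip to another one; nothing has to sit idle in standby.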
in kolla, keepalived and haproxy should migrate the control plane vips and include/exclude down services automatically. if you want to manually take over that function it is likely possible, but that is not how HA is intended to work in general. i don't work on kolla anymore so perhaps their perspective has changed, but in general i would advocate for active/active ha in preference to active/standby. > > Thank you. Regards From amonster369 at gmail.com Tue Jul 19 14:34:47 2022 From: amonster369 at gmail.com (A Monster) Date: Tue, 19 Jul 2022 15:34:47 +0100 Subject: Use redundancy between multiple controller nodes [kolla] In-Reply-To: References: Message-ID: Thank you for the clarification. So I should leave the handling of redundancy to keepalived and haproxy instead of using external tools. On Tue, 19 Jul 2022 at 15:21, Sean Mooney wrote: > On Tue, 2022-07-19 at 13:51 +0100, A Monster wrote: > > I've deployed openstack xena using *kolla-ansible* on *centos 8 stream*, > > I've used multiple controller nodes, a bunch of compute nodes and a > storage > > cluster. > > Controller nodes include network services. > > > > I want to know how can I successfully set up redundancy between these > > controller nodes, in a way that some nodes are active while others are on > > standby state, and would only become active in case when the active nodes > > are to fail. > in general that is not needed with openstack. > > most services are partly or mostly stateless, using the db to store any > relevant persistent state. > > that generally means that, for example, you can have n nova api processes > active at any time and > you do not need to use external management of the service to have some be > active and others be passive. > > in general such an active/standby configuration is an antipattern and should > not need to be done with openstack. > > in kolla, keepalived and haproxy should migrate the control plane vips and > include/exclude down services automatically.
> > if you want to manually take over that function it is likely possible, but > that is not how HA is intended to work in general. > > i don't work on kolla anymore so perhaps their perspective has changed, but > in general i would advocate for active/active > ha in preference to active/standby. > > > > Thank you. Regards > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Tue Jul 19 14:54:44 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Tue, 19 Jul 2022 16:54:44 +0200 Subject: Use redundancy between multiple controller nodes [kolla] In-Reply-To: References: Message-ID: The main thing you have to enable in Kolla Ansible is Neutron L3 HA, which is disabled by default. See enable_neutron_agent_ha. Other services should be redundant when using multiple controllers. On Tue, 19 Jul 2022 at 16:48, A Monster wrote: > Thank you for the clarification. > So I should leave the handling of redundancy to keepalived and haproxy > instead of using external tools. > > On Tue, 19 Jul 2022 at 15:21, Sean Mooney wrote: > >> On Tue, 2022-07-19 at 13:51 +0100, A Monster wrote: >> > I've deployed openstack xena using *kolla-ansible* on *centos 8 stream*, >> > I've used multiple controller nodes, a bunch of compute nodes and a >> storage >> > cluster. >> > Controller nodes include network services. >> > >> > I want to know how can I successfully set up redundancy between these >> > controller nodes, in a way that some nodes are active while others are >> on >> > standby state, and would only become active in case when the active >> nodes >> > are to fail. >> in general that is not needed with openstack. >> >> most services are partly or mostly stateless, using the db to store any >> relevant persistent state. >> >> that generally means that, for example, you can have n nova api processes >> active at any time and >> you do not need to use external management of the service to have some be >> active and others be passive.
>> >> in general such an active/standby configuration is an antipattern and should >> not need to be done with openstack. >> >> in kolla, keepalived and haproxy should migrate the control plane vips and >> include/exclude down services automatically. >> >> if you want to manually take over that function it is likely possible, but >> that is not how HA is intended to work in general. >> >> i don't work on kolla anymore so perhaps their perspective has changed >> but in general i would advocate for active/active >> ha in preference to active/standby. >> > >> > Thank you. Regards >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openinfra.dev Tue Jul 19 16:49:10 2022 From: jimmy at openinfra.dev (Jimmy McArthur) Date: Tue, 19 Jul 2022 11:49:10 -0500 Subject: [interop] Assistance with refstack client In-Reply-To: <20220718190826.j6bwcc6wsv5w7q4f@yuggoth.org> References: <8557fe85-feec-e22b-961e-3cda3c69d50b@openinfra.dev> <20220718190826.j6bwcc6wsv5w7q4f@yuggoth.org> Message-ID: I've encouraged the team in question to subscribe to the ML and reply themselves to hopefully help speed up the process. But here's the response from our Zendesk queue on this question from Jeremy: I am trying to install refstack on a clean ubuntu VM root at ubuntu20-04lts-scpu-8gb-cdg1-1:~/refstack-client# hostnamectl Static hostname: ubuntu20-04lts-scpu-8gb-cdg1-1 Icon name: computer-vm Chassis: vm Machine ID: 17b2bc7f7e524632910414ca691accf0 Boot ID: 460cab8e7ccf451fb0ed0342d6c6e253 Virtualization: kvm Operating System: Ubuntu 20.04.1 LTS Kernel: Linux 5.4.0-54-generic Architecture: x86-64 All I have done is clone the repo and run the "easy button" setup: ./setup_env On 7/18/22 2:08 PM, Jeremy Stanley wrote: > On 2022-07-18 13:42:09 -0500 (-0500), Jimmy McArthur wrote: >> I've got someone encountering errors attempting to install the >> refstack client. Error below.
Is there anyone on the interop team >> that I can connect these folks to for assistance? > [...] > > I'm not really involved with RefStack development, but skimming the > errors it looks like some dependency is getting compiled from source > because there are no pre-built binaries for the target platform. > Knowing what steps/commands led to that error condition, as well as > the platform (Linux distribution name and version, processor > architecture, that sort of info) would help to narrow down possible > causes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From allison at openinfra.dev Tue Jul 19 17:21:55 2022 From: allison at openinfra.dev (Allison Price) Date: Tue, 19 Jul 2022 10:21:55 -0700 Subject: [vdi][daas][ops] What are your solutions to VDI/DaaS on OpenStack? In-Reply-To: References: <374C3AA6-7B85-4AEE-84AB-4C0A13F5308C@openinfra.dev> Message-ID: Hi Radek, Exactly - I think that format makes a lot of sense. Can you start a separate thread with me and the other folks you're in touch with so we can discuss timing and what all we would like to cover? Thanks, Allison > On Jul 7, 2022, at 2:26 AM, Radosław Piliszek wrote: > > Hi Allison, > > I am also in touch with folks at rz.uni-freiburg.de who are also > interested in this topic. We might be able to gather a panel for > discussion. I think we need to introduce the topic properly with some > presentations and then move onto a discussion if time allows (I > believe it will as the time slot is 1h and the presentations should > not be overly detailed for an introductory session). > > Cheers, > Radek > -yoctozepto > > On Wed, 6 Jul 2022 at 23:12, Allison Price wrote: >> >> I wanted to follow up on this thread as well as I know highlighting some of this work and perhaps even doing a live demo on OpenInfra Live was something that was discussed. >> >> Andy and Radoslaw - would this be something you would be interested in helping to move forward?
If there are others that would like to help drive, please let me know. >> >> Cheers, >> Allison >> >>> On Jul 4, 2022, at 3:33 AM, Radosław Piliszek wrote: >>> >>> Just a quick follow up - I was permitted to share a pre-published >>> version of the article I was citing in my email from June 4th. [1] >>> Please enjoy responsibly. :-) >>> >>> [1] https://github.com/yoctozepto/openstack-vdi/blob/main/papers/2022-03%20-%20Bentele%20et%20al%20-%20Towards%20a%20GPU-accelerated%20Open%20Source%20VDI%20for%20OpenStack%20(pre-published).pdf >>> >>> Cheers, >>> Radek >>> -yoctozepto >>> >>> On Mon, 27 Jun 2022 at 17:21, Radosław Piliszek >>> wrote: >>>> >>>> On Wed, 8 Jun 2022 at 01:19, Andy Botting wrote: >>>>> >>>>> Hi Radosław, >>>> >>>> Hi Andy, >>>> >>>> Sorry for the late reply, been busy vacationing and then dealing with COVID-19. >>>> >>>>>> First of all, wow, that looks very interesting and in fact very much >>>>>> what I'm looking for. As I mentioned in the original message, the >>>>>> things this solution lacks are not something blocking for me. >>>>>> Regarding the approach to Guacamole, I know that it's preferable to >>>>>> have guacamole extension (that provides the dynamic inventory) >>>>>> developed rather than meddle with the internal database but I guess it >>>>>> is a good start. >>>>> >>>>> An even better approach would be something like the Guacozy project >>>>> (https://guacozy.readthedocs.io) >>>> >>>> I am not convinced. The project looks dead by now. [1] >>>> It offers a different UI which may appeal to certain users but I think >>>> sticking to vanilla Guacamole should do us right... For the time being >>>> at least. ;-) >>>> >>>>> They were able to use the Guacamole JavaScript libraries directly to >>>>> embed the HTML5 desktop within a React app. I think this is a much >>>>> better approach, and I'd love to be able to do something similar in >>>>> the future. Would make the integration that much nicer.
>>>> >>>> Well, as an example of embedding in the UI - sure. But it does not >>>> invalidate the need to modify Guacamole's database or write an >>>> extension to it so that it has the necessary creds. >>>> >>>>>> >>>>>> Any "quickstart setting up" would be awesome to have at this stage. As >>>>>> this is a Django app, I think I should be able to figure out the bits >>>>>> and bolts to get it up and running in some shape but obviously it will >>>>>> impede wider adoption. >>>>> >>>>> Yeah I agree. I'm in the process of documenting it, so I'll aim to get >>>>> a quickstart guide together. >>>>> >>>>> I have a private repo with code to set up a development environment >>>>> which uses Heat and Ansible - this might be the quickest way to get >>>>> started. I'm happy to share this with you privately if you like. >>>> >>>> I'm interested. Please share it. >>>> >>>>>> On the note of adoption, if I find it usable, I can provide support >>>>>> for it in Kolla [1] and help grow the project's adoption this way. >>>>> >>>>> Kolla could be useful. We're already using containers for this project >>>>> now, and I have a helm chart for deploying to k8s. >>>>> https://github.com/NeCTAR-RC/bumblebee-helm >>>> >>>> Nice! The catch is obviously that some orgs frown upon K8s because >>>> they lack the necessary know-how. >>>> Kolla by design avoids the use of K8s. OpenStack components are not >>>> cloud-native anyway so benefits of using K8s are diminished (yet it >>>> makes sense to use K8s if there is enough experience with it as it >>>> makes certain ops more streamlined and simpler this way). >>>> >>>>> Also, an important part is making sure the images are set up correctly >>>>> with XRDP, etc. Our images are built using Packer, and the config for >>>>> them can be found at https://github.com/NeCTAR-RC/bumblebee-images >>>> >>>> Ack, thanks for sharing. 
>>>> >>>>>> Also, since this is OpenStack-centric, maybe you could consider >>>>>> migrating to OpenDev at some point to collaborate with interested >>>>>> parties using a common system? >>>>>> Just food for thought at the moment. >>>>> >>>>> I think it would be more appropriate to start a new project. I think >>>>> our codebase has too many assumptions about the underlying cloud. >>>>> >>>>> We inherited the code from another project too, so it's got twice the cruft. >>>> >>>> I see. Well, that's good to know at least. >>>> >>>>>> Writing to let you know I have also found the following related paper: [1] >>>>>> and reached out to its authors in the hope to enable further >>>>>> collaboration to happen. >>>>>> The paper is not open access so I have only obtained it for myself and >>>>>> am unsure if licensing permits me to share, thus I also asked the >>>>>> authors to share their copy (that they have copyrights to). >>>>>> I have obviously let them know of the existence of this thread. ;-) >>>>>> Let's stay tuned. >>>>>> >>>>>> [1] https://link.springer.com/chapter/10.1007/978-3-030-99191-3_12 >>>>> >>>>> This looks interesting. A collaboration would be good if there is >>>>> enough interest in the community. >>>> >>>> I am looking forward to the collaboration happening. This could really >>>> liven up the OpenStack VDI. >>>> >>>> [1] https://github.com/paidem/guacozy/ >>>> >>>> -yoctozepto >>> >> From alex.kavanagh at canonical.com Tue Jul 19 19:57:08 2022 From: alex.kavanagh at canonical.com (Alex Kavanagh) Date: Tue, 19 Jul 2022 20:57:08 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. 
In-Reply-To: <20220715120139.5vi563hsbznjimbk@yuggoth.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> <20220715120139.5vi563hsbznjimbk@yuggoth.org> Message-ID: On Fri, Jul 15, 2022 at 1:11 PM Jeremy Stanley wrote: > On 2022-07-15 12:05:31 +0200 (+0200), Thomas Goirand wrote: > [...] > > All what I'm asking, is that when Python RC releases are out, and > > I report a bug, the community has the intention to fix it as early > > as possible, at least in master (and maybe help with backports if > > it's very tricky: I can manage trivial backporting by myself). > > That's enough for me, really (at least it has been enough in the > > past...). > [...] > > I don't think fixes have been refused in the past simply because > they address a problem observed with a newer Python interpreter than > we test on. Just be aware that when it comes to pre-merge testing of > proposed changes for OpenStack, the time to decide what platforms > and interpreter versions we'll test with is at the end of the > previous cycle. For Zed that's Python 3.8 and 3.9, though projects > are encouraged to also try 3.10 if they can (we got the ability to > test that partway into the development cycle). We have to set these > expectations before we begin work on a new version of OpenStack, so > that we don't change our testing goals for developers while they're > in the middle of trying to write the software. > -- > Jeremy Stanley > Just to add to the "what python version where" discussion, I wanted to add the situation with Ubuntu packages and distro versions: As a quick summary for the Ubuntu cloud archive and distro packages for OpenStack, for yoga we ship the packages for 20.04 LTS (focal) - in the focal-yoga cloud archive [1] and 22.04 LTS (jammy) as part of distro. 
20.04 LTS (focal) shipped with py3.8 so the packages work with that, but 22.04 LTS (jammy) shipped with py3.10, which means that (ultimately) we try very hard to get all the Ubuntu OpenStack yoga packages to work on py3.10. Ubuntu Kinetic (22.10) currently looks like it will also ship with py3.10 [2]. OpenStack Zed is 'partnered' with 22.10/kinetic as the Zed packages will be shipped in distro for 22.10/kinetic and as jammy-kinetic in the cloud archive. Therefore, we'll also be supporting py3.10 for Zed. py3.11 is slated for release on 3rd October 2022, so it won't make Kinetic (with luck), but it will almost certainly make "L" (23.04), which will be against (for us) OpenStack "A". I guess this is going to be quite challenging for everyone! Why is it like this? Well, the latest current stable version of python is selected for each distro version to ensure that each LTS is running the latest stable version; thus each interim release updates so that it is easier to work towards the next LTS stable release. Thus, for the Ubuntu packages, we have to support that version for each of the OpenStack releases during the LTS (e.g. for jammy, py3.10) for Y->B versions of OpenStack. So as a summary (for the Ubuntu packages/python versions): - Yoga: py3.8 (focal-yoga), py3.10 (jammy (distro)) - Zed: py3.10 (jammy-zed, kinetic (distro)) - "A": py3.10 (jammy-"A"), py3.11 ("L"/23.04 (distro)) - "B": py3.10 (jammy-"B"), py3.11 (py3.12?) - etc Essentially, for each LTS release (every 2 years, April), we eventually ship 4 versions of OpenStack, one with the distro packages, and 3 in the cloud archive. Also, each interim release of Ubuntu also gets a corresponding OpenStack release set of packages for the python version in that release. We do additional testing (which we call distro-regression) where we check the packages against each version of Ubuntu and run tempest to "ensure" that they work. Hope this is useful.
[1] http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-updates/yoga/ [2] https://packages.ubuntu.com/kinetic/python/ -- Alex Kavanagh - Software Engineer OpenStack Engineering - Data Centre Development - Canonical Ltd -------------- next part -------------- An HTML attachment was scrubbed... URL: From rlandy at redhat.com Tue Jul 19 21:02:56 2022 From: rlandy at redhat.com (Ronelle Landy) Date: Tue, 19 Jul 2022 17:02:56 -0400 Subject: [tripleo] Gate blocker - master and stable/wallaby Message-ID: Hello All, Currently we have a gate blocker impacting master and stable/wallaby. The details are in: https://bugs.launchpad.net/tripleo/+bug/1982195 - Barbican: Can't run container barbican_api_db_sync. Please hold rechecks while we investigate a fix. Thank you, TripleO CI -------------- next part -------------- An HTML attachment was scrubbed... URL: From aneesh.p at fungible.com Tue Jul 19 21:59:43 2022 From: aneesh.p at fungible.com (Aneesh Pachilangottil) Date: Tue, 19 Jul 2022 14:59:43 -0700 Subject: Help needed with Openstack Third Party CI Message-ID: Hi, I am trying to set up Openstack third party CI using software factory version 3.7. Followed the steps from this guide https://softwarefactory-project.io/docs/guides/third_party_ci.html. 
After adding gerrit connection details for review.opendev.org, I am getting an error in the zuul scheduler logs: ./scheduler.log:2022-07-18 21:10:25,564 INFO zuul.GerritConnection: review.opendev.org: Gerrit Poller is disabled because no HTTP authentication is defined ./web.log:2022-07-18 21:10:23,948 ERROR zuul.BranchCache.review.opendev.org: Exception loading ZKObject at /zuul/cache/connection/review.opendev.org/branches/data The gerrit connection details entry in the sfconfig.yaml file is: gerrit_connections: - name: review.opendev.org hostname: review.opendev.org port: 29418 puburl: https://review.opendev.org username: username canonical_hostname: opendev.org The SSH key located at /var/lib/software-factory/bootstrap-data/ssh_keys/zuul_rsa.pub is added to the account in review.opendev.org Could anyone suggest if I am missing anything? Are there better alternatives than using a software factory? Any documentation available? Best regards, Aneesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Tue Jul 19 22:05:50 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 19 Jul 2022 15:05:50 -0700 Subject: Help needed with Openstack Third Party CI In-Reply-To: References: Message-ID: On Tue, Jul 19, 2022, at 2:59 PM, Aneesh Pachilangottil wrote: > Hi, > I am trying to set up Openstack third party CI using software factory > version 3.7. Followed the steps from this guide > https://softwarefactory-project.io/docs/guides/third_party_ci.html. > After adding gerrit connection details for review.opendev.org, I am > getting an error in the zuul scheduler logs: > > ./scheduler.log:2022-07-18 21:10:25,564 INFO zuul.GerritConnection: > review.opendev.org: Gerrit Poller is disabled because no HTTP > authentication is defined I think this first log message is just noise and doesn't point to a problem. Zuul is able to talk to Gerrit via both http and ssh. 
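For reference, a connection with both mechanisms configured looks roughly like this in a plain Zuul deployment's zuul.conf (a sketch: the password value stands in for a Gerrit HTTP API token generated in the account settings, and since software factory generates zuul.conf from sfconfig.yaml, the keys you actually set there may be named differently):

```ini
[connection review.opendev.org]
driver=gerrit
server=review.opendev.org
port=29418
baseurl=https://review.opendev.org
canonical_hostname=opendev.org
user=username
# SSH: used for the event stream and for leaving review votes
sshkey=/var/lib/zuul/.ssh/zuul_rsa
# HTTP: enables the poller and REST API calls
password=your-gerrit-http-api-token
auth_type=basic
```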
In this case you have only configured ssh and it is telling you that some functionality is disabled due to the lack of http connection details. > ./web.log:2022-07-18 21:10:23,948 ERROR > zuul.BranchCache.review.opendev.org > : Exception loading > ZKObject 0x7efee2784190> at > /zuul/cache/connection/review.opendev.org/branches/data This is very likely the actual issue. But without the full traceback is difficult to say more. Can you include the full traceback from your logs? > > The gerrit connection details entry in the sfconfig.yaml file is: > gerrit_connections: > - name: review.opendev.org > hostname: review.opendev.org > port: 29418 > puburl: https://review.opendev.org > username: username > canonical_hostname: opendev.org > > The SSH key located at > /var/lib/software-factory/bootstrap-data/ssh_keys/zuul_rsa.pub is added > to the account in review.opendev.org > > Could anyone suggest if I am missing anything? > Are there better alternatives than using a software factory? Any > documentation available? Software factory should be a fine way to run Zuul. That said it might be helpful to run the Zuul quickstart to gain some familiarity with how Zuul's components are deployed and function. Then you can map that to your software factory deployment. > > Best regards, > Aneesh From kennelson11 at gmail.com Tue Jul 19 23:13:34 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 19 Jul 2022 18:13:34 -0500 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack Message-ID: Hello Everyone! We are again signed up to participate in Open Source Day at the Grace Hopper Conference. It's a virtual event, being held on Friday, September 16, 2022, from 8am to 3pm Pacific Time. If you are interested in mentoring for this one day event, please let me know ASAP. I am supposed to give them a list of mentors by the end of this week. Day of, we will essentially get participants to setup a dev environment (gerrit, etc) and work on a bug and get it pushed. 
At this point I was thinking of making use of gaps in the SDK/OSC, but if your project has some low hanging fruit that you want to bring along, that works too! Looking forward to working with you!! -Kendall (diablo_rojo) -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliaashleykreger at gmail.com Wed Jul 20 00:37:23 2022 From: juliaashleykreger at gmail.com (Julia Kreger) Date: Tue, 19 Jul 2022 17:37:23 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Just to provide a brief update for the mailing list. It looks like this is a case of use of Basic Auth with the BMC, where we were not catching the error properly... and thus not reporting the authentication failure to ironic so it would catch, and initiate a new client with the most up to date password. The default, typically used path is Session based authentication as BMCs generally handle internal session/user login tracking in a far better fashion. But not every BMC supports sessions. Fix in review[0] :) -Julia [0] https://review.opendev.org/c/openstack/sushy/+/850425 On Mon, Jul 18, 2022 at 4:15 PM Julia Kreger wrote: > > Excellent, hopefully I'll be able to figure out why Sushy is not doing > the needful... Or if it is and Ironic is not picking up on it. > > Anyway, I've posted > https://review.opendev.org/c/openstack/ironic/+/850259 which might > handle this issue. Obviously a work in progress, but it represents > what I think is happening inside of ironic itself leading into sushy > when cache access occurs. > > On Mon, Jul 18, 2022 at 4:04 PM Wade Albright > wrote: > > > > Sounds good, I will do that tomorrow. Thanks Julia. > > > > On Mon, Jul 18, 2022 at 3:27 PM Julia Kreger wrote: > >> > >> Debug would be best. I think I have an idea what is going on, and this > >> is a similar variation. If you want, you can email them directly to > >> me. 
Specifically only need entries reported by the sushy library and > >> ironic.drivers.modules.redfish.utils. > >> > >> On Mon, Jul 18, 2022 at 3:20 PM Wade Albright > >> wrote: > >> > > >> > I'm happy to supply some logs, what verbosity level should i use? And should I just embed the logs in email to the list or upload somewhere? > >> > > >> > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger wrote: > >> >> > >> >> If you could supply some conductor logs, that would be helpful. It > >> >> should be re-authenticating, but obviously we have a larger bug there > >> >> we need to find the root issue behind. > >> >> > >> >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright > >> >> wrote: > >> >> > > >> >> > I was able to use the patches to update the code, but unfortunately the problem is still there for me. > >> >> > > >> >> > I also tried an RPM upgrade to the versions Julia mentioned had the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - Released in January 2022. But it did not fix the problem. > >> >> > > >> >> > I am able to consistently reproduce the error. > >> >> > - step 1: change BMC password directly on the node itself > >> >> > - step 2: update BMC password (redfish_password) in ironic with 'openstack baremetal node set --driver-info redfish_password='newpass' > >> >> > > >> >> > After step 1 there are errors in the logs entries like "Session authentication appears to have been lost at some point in time" and eventually it puts the node into maintenance mode and marks the power state as "none." > >> >> > After step 2 and taking the host back out of maintenance mode, it goes through a similar set of log entries puts the node into MM again. > >> >> > > >> >> > After the above steps, a conductor restart fixes the problem and operations work normally again. Given this it seems like there is still some kind of caching issue. 
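The pattern described above (an authentication failure should invalidate the cached session and trigger one retry with the freshest credentials) can be sketched like this in Python (illustrative only, not the actual ironic/sushy code):

```python
# Illustrative sketch of "invalidate the cached client on auth failure";
# not the real ironic/sushy implementation.

class AuthError(Exception):
    pass

class FakeBmc:
    """Stands in for a BMC that only accepts its current password."""
    def __init__(self, password):
        self.password = password

    def power_on(self, password):
        if password != self.password:
            raise AuthError("HTTP 403 Forbidden")
        return "powered on"

class CachedClient:
    def __init__(self, bmc, get_password):
        self.bmc = bmc
        self.get_password = get_password   # re-reads driver_info on demand
        self._session = get_password()     # "session" built once, then cached

    def power_on(self):
        try:
            return self.bmc.power_on(self._session)
        except AuthError:
            # auth failed: drop the cached session, retry with fresh creds
            self._session = self.get_password()
            return self.bmc.power_on(self._session)

driver_info = {"redfish_password": "old"}
bmc = FakeBmc("old")
client = CachedClient(bmc, lambda: driver_info["redfish_password"])
print(client.power_on())                   # cached password still valid

bmc.password = "new"                       # password rotated on the BMC...
driver_info["redfish_password"] = "new"    # ...and updated in ironic
print(client.power_on())                   # retry path picks up the new one
```

Without the retry branch, the stale cached session keeps failing until the conductor is restarted, which matches the behaviour reported in this thread.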
> >> >> > > >> >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright wrote: > >> >> >> > >> >> >> Hi Julia, > >> >> >> > >> >> >> Thank you so much for the reply! Hopefully this is the issue. I'll try out the patches next week and report back. I'll also email you on Monday about the versions, that would be very helpful to know. > >> >> >> > >> >> >> Thanks again, really appreciate it. > >> >> >> > >> >> >> Wade > >> >> >> > >> >> >> > >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger wrote: > >> >> >>> > >> >> >>> Greetings! > >> >> >>> > >> >> >>> I believe you need two patches, one in ironic and one in sushy. > >> >> >>> > >> >> >>> Sushy: > >> >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 > >> >> >>> > >> >> >>> Ironic: > >> >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 > >> >> >>> > >> >> >>> I think it is a variation, and the comment about working after you restart the conductor is the big signal to me. I'm on a phone on a bad data connection, if you email me on Monday I can see what versions the fixes would be in. > >> >> >>> > >> >> >>> For the record, it is a session cache issue, the bug was that the service didn't quite know what to do when auth fails. > >> >> >>> > >> >> >>> -Julia > >> >> >>> > >> >> >>> > >> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright wrote: > >> >> >>>> > >> >> >>>> Hi, > >> >> >>>> > >> >> >>>> I'm hitting a problem when trying to update the redfish_password for an existing node. I'm curious to know if anyone else has encountered this problem. I'm not sure if I'm just doing something wrong or if there is a bug. Or if the problem is unique to my setup. > >> >> >>>> > >> >> >>>> I have a node already added into ironic with all the driver details set, and things are working fine. I am able to run deployments. > >> >> >>>> > >> >> >>>> Now I need to change the redfish password on the host.
So I update the password for redfish access on the host, then use an 'openstack baremetal node set --driver-info redfish_password=' command to set the new redfish_password.
> >> >> >>>>
> >> >> >>>> Once this has been done, deployment no longer works. I see redfish authentication errors in the logs and the operation fails. I waited a bit to see if there might just be a delay in updating the password, but after a while it still didn't work.
> >> >> >>>>
> >> >> >>>> I restarted the conductor, and after that things work fine again. So it seems like the password is cached or something. Is there a way to force the password to update? I even tried removing the redfish credentials and re-adding them, but that didn't work either. Only a conductor restart seems to make the new password work.
> >> >> >>>>
> >> >> >>>> We are running Xena, using an RPM installation on Oracle Linux 8.5.
> >> >> >>>>
> >> >> >>>> Thanks in advance for any help with this issue.

From rlandy at redhat.com Wed Jul 20 01:00:13 2022
From: rlandy at redhat.com (Ronelle Landy)
Date: Tue, 19 Jul 2022 21:00:13 -0400
Subject: [tripleo] Gate blocker - master and stable/wallaby
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jul 19, 2022 at 5:02 PM Ronelle Landy wrote:

> Hello All,
>
> Currently we have a gate blocker impacting master and stable/wallaby.
> The details are in:
>
> https://bugs.launchpad.net/tripleo/+bug/1982195 - Barbican: Can't run
> container barbican_api_db_sync.
>

Merging workaround patch to clear the gates:
https://review.opendev.org/c/openstack/tripleo-quickstart/+/850428/

> Please hold rechecks while we investigate a fix.
>
> Thank you,
> TripleO CI
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From smooney at redhat.com Wed Jul 20 08:34:18 2022
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 20 Jul 2022 09:34:18 +0100
Subject: [all] Debian unstable has Python 3.11: please help support it.
In-Reply-To: References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> <20220715120139.5vi563hsbznjimbk@yuggoth.org> Message-ID: <98afeab653490f28816e67353b4ff5ccec9728e8.camel@redhat.com> On Tue, 2022-07-19 at 20:57 +0100, Alex Kavanagh wrote: > On Fri, Jul 15, 2022 at 1:11 PM Jeremy Stanley wrote: > > > On 2022-07-15 12:05:31 +0200 (+0200), Thomas Goirand wrote: > > [...] > > > All what I'm asking, is that when Python RC releases are out, and > > > I report a bug, the community has the intention to fix it as early > > > as possible, at least in master (and maybe help with backports if > > > it's very tricky: I can manage trivial backporting by myself). > > > That's enough for me, really (at least it has been enough in the > > > past...). > > [...] > > > > I don't think fixes have been refused in the past simply because > > they address a problem observed with a newer Python interpreter than > > we test on. Just be aware that when it comes to pre-merge testing of > > proposed changes for OpenStack, the time to decide what platforms > > and interpreter versions we'll test with is at the end of the > > previous cycle. For Zed that's Python 3.8 and 3.9, though projects > > are encouraged to also try 3.10 if they can (we got the ability to > > test that partway into the development cycle). We have to set these > > expectations before we begin work on a new version of OpenStack, so > > that we don't change our testing goals for developers while they're > > in the middle of trying to write the software. 
> > --
> > Jeremy Stanley
>
> Just to add to the "what python version where" discussion, I wanted to add
> the situation with Ubuntu packages and distro versions:
>
> As a quick summary for the Ubuntu cloud archive and distro packages for
> OpenStack, for yoga we ship the packages for 20.04 LTS (focal) - in the
> focal-yoga cloud archive [1] and 22.04 LTS (jammy) as part of distro.
> 20.04 LTS (focal) shipped with py3.8 so the packages work with that, but
> 22.04 LTS (jammy) shipped with py3.10, which means that (ultimately) we try
> very hard to get all the Ubuntu OpenStack yoga packages to work on py3.10.
>
> Ubuntu Kinetic (22.10) currently looks like it will also ship with py3.10
> [2]. OpenStack Zed is 'partnered' with 22.10/kinetic as the Zed packages
> will be shipped in distro for 22.10/kinetic and as jammy-zed in the
> cloud archive. Therefore, we'll also be supporting py3.10 for Zed. py3.11
> is slated for release on 3rd October 2022, so it won't make Kinetic (with
> luck), but it will almost certainly make "L" (23.04), which will be against
> (for us) OpenStack "A". I guess this is going to be quite challenging for
> everyone!
>
> Why is it like this? Well, the latest current stable version of python is
> selected for each distro version to ensure that each LTS is running the
> latest stable version; each interim release updates it so that it is easier
> to work towards the next LTS stable release. Thus, for the Ubuntu packages,
> we have to support that version for each of the OpenStack releases during
> the LTS (e.g. for jammy, py3.10) for Y->B versions of OpenStack.
>
> So as a summary (for the Ubuntu packages/python versions):
>
> - Yoga: py3.8: focal-yoga=py3.8, py3.10: jammy (distro)
> - Zed: py3.10 (jammy-zed, kinetic (distro))
> - "A": py3.10 (jammy-"A"), py3.11 ("L"/23.04 (distro))
> - "B": py3.10 (jammy-"B"), py3.11 (py3.12?)
> - etc
>
> Essentially, for each LTS release (every 2 years, April), we eventually
> ship 4 versions of OpenStack, one with the distro packages, and 3 in the
> cloud archive. Also, each interim release of Ubuntu also gets a
> corresponding OpenStack release set of packages for the python version in
> that release. We do additional testing (which we call distro-regression)
> where we check the packages against each version of Ubuntu and run tempest
> to "ensure" that they work.
>
> Hope this is useful.

That is not as misaligned with our testing as Debian will be. Currently our Ubuntu testing is based on 20.04, and we have a 22.04 image which we use for the non-voting Python 3.10 jobs (originally those were Fedora, but we changed when Ubuntu 22.04 became available).

For the A release the testing runtimes have not been chosen yet, but my expectation is that we will drop 20.04 from the testing requirements for master and move to 22.04, making 3.10 our default Python for Tempest integration testing.

Assuming the Python 3.11 interpreter, and ideally the standard library, are available as a package to install on 22.04, we should be able to add a non-voting py3.11 tox job to replace our current py3.10 job, and perhaps even have a periodic-weekly Python 3.11 Tempest job.

While the master branch for A will open in advance of the official release, the end of the Zed schedule is the week of October 3rd:
https://releases.openstack.org/zed/schedule.html
The A branches technically will open on September 15th with the release of RC1, so if the py3.11 package is available close to the official start of the A cycle, we may be able to include it in the project testing requirements, even if it's only an advisory testing target and not a required one. For B we should have it as a required testing target, but we will need to keep py3.9 likely until C or D, since CentOS/RHEL 9 will still be shipping that as the only supported interpreter.
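[Editor's note: the non-voting py3.11 tox job mentioned above could be wired up roughly as follows. This is a sketch only — the job name, parent, and nodeset here are assumptions for illustration, not existing openstack-zuul-jobs definitions:]

```yaml
# Hypothetical .zuul.yaml fragment for a non-voting py3.11 check job.
# "openstack-tox" and "ubuntu-jammy" are assumed names; verify against
# the real openstack-zuul-jobs / nodepool configuration before use.
- job:
    name: myproject-tox-py311
    parent: openstack-tox
    nodeset: ubuntu-jammy
    voting: false
    vars:
      tox_envlist: py311

- project:
    check:
      jobs:
        - myproject-tox-py311
```

The matching tox side would just be a `[testenv:py311]` stanza (with `basepython = python3.11`) in the project's tox.ini.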
I'm not sure if Ubuntu has the same policy where non-default interpreters are shipped only for dev and testing use and not for production. CentOS/RHEL also only ships one version of the standard library, so new library features would not be available with the new interpreter, if I understand correctly.

As a result of needing to support py3.9, we cannot use any library feature that has not been backported to 3.9 through standalone packages, as dataclasses was for py3.6. The main concern this brings is that we also need to be careful not to continue using any py3.9 features that are removed in 3.11 or 3.12 etc.

>
> [1]
> http://ubuntu-cloud.archive.canonical.com/ubuntu/dists/focal-updates/yoga/
> [2] https://packages.ubuntu.com/kinetic/python/
>

From smooney at redhat.com Wed Jul 20 08:59:50 2022
From: smooney at redhat.com (Sean Mooney)
Date: Wed, 20 Jul 2022 09:59:50 +0100
Subject: Help needed with Openstack Third Party CI
In-Reply-To: 
References: 
Message-ID: <05ffcc20abc6204af0cc496728833e4be60e9ce7.camel@redhat.com>

On Tue, 2022-07-19 at 15:05 -0700, Clark Boylan wrote:
> On Tue, Jul 19, 2022, at 2:59 PM, Aneesh Pachilangottil wrote:
> > Hi,
> > I am trying to set up Openstack third party CI using software factory
> > version 3.7. Followed the steps from this guide
> > https://softwarefactory-project.io/docs/guides/third_party_ci.html.
> > After adding gerrit connection details for review.opendev.org, I am
> > getting an error in the zuul scheduler logs:
> >
> > ./scheduler.log:2022-07-18 21:10:25,564 INFO zuul.GerritConnection:
> > review.opendev.org: Gerrit Poller is disabled because no HTTP
> > authentication is defined
>
> I think this first log message is just noise and doesn't point to a problem. Zuul is able to talk to Gerrit via both http and ssh. In this case you have only configured ssh and it is telling you that some functionality is disabled due to the lack of http connection details.
>
Clark can correct me if this has changed, but if I recall correctly it's preferred to configure both SSH and HTTP instead of just SSH, to reduce load on the upstream Gerrit servers. So it should look like this:

[connection opendev.org]
driver=gerrit
canonical_hostname=opendev.org
server=review.openstack.org
sshkey=/var/lib/zuul/.ssh/nodepool_rsa
user=ci-sean-mooney
baseurl=https://review.opendev.org
password=
auth_type=basic

I'll also point out that since this is using basic auth you really should ensure you use HTTPS URLs :) I should also probably update that server URL to review.opendev.org instead of review.openstack.org, but this Zuul installation has not been running for about 2 years since I redeployed the cloud it ran on.

To connect via HTTP to report results you need to define
https://zuul-ci.org/docs/zuul/latest/drivers/gerrit.html#attr-%3Cgerrit%20connection%3E.password
That also enables line-level reporting, which is nice for linting jobs, but we generally don't tend to need it for Tempest etc., so I'm not sure it will be useful in a third-party context.

> > ./web.log:2022-07-18 21:10:23,948 ERROR
> > zuul.BranchCache.review.opendev.org
> > : Exception loading
> > ZKObject > 0x7efee2784190> at
> > /zuul/cache/connection/review.opendev.org/branches/data
>
> This is very likely the actual issue. But without the full traceback it is difficult to say more. Can you include the full traceback from your logs?
>
> >
> > The gerrit connection details entry in the sfconfig.yaml file is:
> > gerrit_connections:
> > - name: review.opendev.org
> > hostname: review.opendev.org
> > port: 29418
> > puburl: https://review.opendev.org
> > username: username
> > canonical_hostname: opendev.org

Based on my own config you are missing either the password or SSH key. The default location Zuul will check for the sshkey is ~zuul/.ssh/id_rsa:
https://zuul-ci.org/docs/zuul/latest/drivers/gerrit.html#attr-%3Cgerrit%20connection%3E.sshkey
And you do not have password defined, which is needed for HTTP reporting.
> > > > The SSH key located at
> > /var/lib/software-factory/bootstrap-data/ssh_keys/zuul_rsa.pub is added
> > to the account in review.opendev.org
> >
> > Could anyone suggest if I am missing anything?
> > Are there better alternatives than using a software factory? Any
> > documentation available?
>
> Software factory should be a fine way to run Zuul. That said it might be helpful to run the Zuul quickstart to gain some familiarity with how Zuul's components are deployed and function. Then you can map that to your software factory deployment.
>
> >
> > Best regards,
> > Aneesh
>

From senrique at redhat.com Wed Jul 20 11:28:12 2022
From: senrique at redhat.com (Sofia Enriquez)
Date: Wed, 20 Jul 2022 08:28:12 -0300
Subject: [cinder] Bug deputy report for week of 07-20-2022
Message-ID: 

This is a bug report from 07-13-2022 to 07-20-2022.
Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting
-----------------------------------------------------------------------------------------
Low
- https://bugs.launchpad.net/cinder/+bug/1981562 "[NFS] Nova raises an error on server to resize command." Unassigned.
- https://bugs.launchpad.net/cinder/+bug/1982032 "NFS Backup driver doesn't remove empty directories on backup deletion." Fix proposed to master.
- https://bugs.launchpad.net/cinder/+bug/1982078 "[storwize] Driver Initialization error w.r.t default portset." Fix proposed to master.
- https://bugs.launchpad.net/cinder/+bug/1981982 "Infinidat Cinder driver ignores driver_use_ssl option." Fix proposed to master.

Incomplete
- https://bugs.launchpad.net/cinder/+bug/1981961 "RemoteFs: Unexcepted error while running the command."

Cheers,
--
Sofía Enriquez
she/her
Software Engineer
Red Hat PnT
IRC: @enriquetaso
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Wed Jul 20 11:49:38 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 20 Jul 2022 11:49:38 +0000 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <98afeab653490f28816e67353b4ff5ccec9728e8.camel@redhat.com> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> <20220715120139.5vi563hsbznjimbk@yuggoth.org> <98afeab653490f28816e67353b4ff5ccec9728e8.camel@redhat.com> Message-ID: <20220720114938.u77z75psua5fsrec@yuggoth.org> On 2022-07-20 09:34:18 +0100 (+0100), Sean Mooney wrote: [...] > for the A release the testing runtimes have not been chossen yet > but my expecation is we will drop 20.04 form the testing > requirements for master and move to 22.04 making 3.10 our default > python for tempest integration testing. Unless I've misunderstood the recent TC discussions around supported platform overlap, the new release cadence, and upgrade testing, it's my understanding that we'll still at least need to test that we can upgrade from Zed to Anchovy/Anteater/Antelope on Ubuntu 20.04 LTS (future A->B and A->C upgrade testing will be able to just use 22.04 LTS though), so it's not going away entirely for master branch tests until B. > assuming the python 3.11 interperter and ideally standard libary > are aviabel as a package to install on 22.04 we should be able to > add a non vovting py3.11 tox job to replace our current py3.10 job > and perhaps even have a periodic-weekly python 3.11 tempest job. [...] In the past, Ubuntu has waited to backport a new Python minor version until after its .1 point release is available. For 3.11.1 that's scheduled to be approximately 2 months after 3.11.0 is done, so expect early December. 
That puts any Jammy backport of it into 2023 at the earliest, I expect, at best a couple of months before we release, so probably not soon enough in the cycle for meaningful testing before master is open for B cycle work.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From mkopec at redhat.com Wed Jul 20 12:59:12 2022
From: mkopec at redhat.com (Martin Kopec)
Date: Wed, 20 Jul 2022 14:59:12 +0200
Subject: [interop] Assistance with refstack client
In-Reply-To: 
References: <8557fe85-feec-e22b-961e-3cda3c69d50b@openinfra.dev> <20220718190826.j6bwcc6wsv5w7q4f@yuggoth.org>
Message-ID: 

Hi Jimmy,
I can help here, they can connect with me and we'll take a look. I'm sorry for not replying sooner.

On Tue, 19 Jul 2022 at 19:04, Jimmy McArthur wrote:

> I've encouraged the team in question to subscribe to the ML and reply
> themselves to hopefully help speed up the process. But here's the response
> from our Zendesk queue on this question from Jeremy:
>
> I am trying to install ref-stack on a clean ubuntu VM
>
> root at ubuntu20-04lts-scpu-8gb-cdg1-1:~/refstack-client# hostnamectl
> Static hostname: ubuntu20-04lts-scpu-8gb-cdg1-1
> Icon name: computer-vm
> Chassis: vm
> Machine ID: 17b2bc7f7e524632910414ca691accf0
> Boot ID: 460cab8e7ccf451fb0ed0342d6c6e253
> Virtualization: kvm
> Operating System: Ubuntu 20.04.1 LTS
> Kernel: Linux 5.4.0-54-generic
> Architecture: x86-64
>
> All I have done is clone the repo and run the "easy button" setup: ./setup_env
>
> On 7/18/22 2:08 PM, Jeremy Stanley wrote:
> On 2022-07-18 13:42:09 -0500 (-0500), Jimmy McArthur wrote:
> I've got someone encountering errors attempting to install the
> refstack client. Error below. Is there anyone on the interop team
> that I can connect these folks to for assistance?
> [...]
> > I'm not really involved with RefStack development, but skimming the > errors it looks like some dependency is getting compiled from source > because there's no pre-built binaries for the target platform. > Knowing what steps/commands led to that error condition, as well as > the platform (Linux distribution name and version, processor > architecture, that sort of info) would help to narrow down possible > causes. > > -- Martin -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Wed Jul 20 13:15:34 2022 From: smooney at redhat.com (Sean Mooney) Date: Wed, 20 Jul 2022 14:15:34 +0100 Subject: [all] Debian unstable has Python 3.11: please help support it. In-Reply-To: <20220720114938.u77z75psua5fsrec@yuggoth.org> References: <5b5955de-0d42-bf68-d76f-5d13f193845b@debian.org> <5b4ee313e472f602cbb1a4d1f809bb0c2320d1eb.camel@redhat.com> <20220714143048.gxznifh7oeaaqldi@yuggoth.org> <7adc7c0d-7077-6917-c3a3-4c7a886c65b8@debian.org> <20220715120139.5vi563hsbznjimbk@yuggoth.org> <98afeab653490f28816e67353b4ff5ccec9728e8.camel@redhat.com> <20220720114938.u77z75psua5fsrec@yuggoth.org> Message-ID: On Wed, 2022-07-20 at 11:49 +0000, Jeremy Stanley wrote: > On 2022-07-20 09:34:18 +0100 (+0100), Sean Mooney wrote: > [...] > > for the A release the testing runtimes have not been chossen yet > > but my expecation is we will drop 20.04 form the testing > > requirements for master and move to 22.04 making 3.10 our default > > python for tempest integration testing. > > Unless I've misunderstood the recent TC discussions around supported > platform overlap, the new release cadence, and upgrade testing, it's > my understanding that we'll still at least need to test that we can > upgrade from Zed to Anchovy/Anteater/Antelope on Ubuntu 20.04 LTS ah correct this is not really related to the new lifecycle. 
We will need to test with Ubuntu 20.04 for grenade or other upgrade jobs, since we do not update the OS in those jobs, so they deploy on the older release.

> (future A->B and A->C upgrade testing will be able to just use 22.04
> LTS though), so it's not going away entirely for master branch tests
> until B.

Yes, 22.04 can only become the base of the grenade job for A->B. All the standard jobs should ideally move to 22.04 in B, which means we need to keep 3.8 until B, since that is the default in 20.04. In B we can increase the minimum to 3.9 and add 3.11, based on the timeline you explain below, since that would better align with our development cycle.

> > assuming the Python 3.11 interpreter and ideally standard library
> > are available as a package to install on 22.04 we should be able to
> > add a non-voting py3.11 tox job to replace our current py3.10 job
> > and perhaps even have a periodic-weekly Python 3.11 Tempest job.
> [...]
>
> In the past, Ubuntu has waited to backport a new Python minor
> version until after its .1 point release is available. For 3.11.1
> that's scheduled to be approximately 2 months after 3.11.0 is done,
> so expect early December. That puts any Jammy backport of it into
> 2023 at the earliest, I expect, at best a couple of months before we
> release so probably not soon enough in the cycle for meaningful
> testing before master is open for B cycle work.

Ack, so that likely means it can't be considered a required target for A, but it could be optionally tested by some projects once it's available. I assume there is no reason not to add 3.10 to the required testing runtimes for A based on Ubuntu 22.04, promoting it from its current best-effort status.
From helena at openstack.org Wed Jul 20 13:36:09 2022 From: helena at openstack.org (Helena Spease) Date: Wed, 20 Jul 2022 08:36:09 -0500 Subject: OpenInfra Live - July 21 at 9am CT Message-ID: Hi everyone, This week?s OpenInfra Live episode is brought to you by OpenInfra Days organizers from France, South Korea, Vietnam, and China. 25 releases, 25 million cores, and 12 years later, the OpenStack community is thriving. Join this week?s OpenInfra Live episode to meet community members from France, South Korea, Vietnam, and China as they discuss the growth of OpenStack in their region over the past 12 years. In addition, come learn about how the OpenStack release naming process will continue following the Zed release! At the end of the episode, we will announce the next OpenStack release name! If you haven?t voted yet make sure to vote before July 20th at 11:59pm PT (July 21st 6:59 UTC)! https://civs1.civs.us/cgi-bin/vote.pl?id=E_2b6c69494a6d3222&akey=d3350c7bda8bad74 Episode: 12 Years of OpenStack Date and time: July 21 at 9am CT (1400 UTC) You can watch us live on: YouTube: https://youtu.be/yhYq0MOA5nQ LinkedIn: https://www.linkedin.com/video/event/urn:li:ugcPost:6953746806220488704/ Facebook: https://www.facebook.com/104139126308032/posts/5267403669981526/ WeChat: recording will be posted on OpenStack WeChat after the live stream Speakers: Soumaya Msallem Seongsoo Cho Tuan Luong Haoyang Li Have an idea for a future episode? Share it now at ideas.openinfra.live . Thanks, Helena -------------- next part -------------- An HTML attachment was scrubbed... URL: From eduardo.experimental at gmail.com Wed Jul 20 15:29:20 2022 From: eduardo.experimental at gmail.com (Eduardo Santos) Date: Wed, 20 Jul 2022 12:29:20 -0300 Subject: Help needed with Openstack Third Party CI In-Reply-To: <05ffcc20abc6204af0cc496728833e4be60e9ce7.camel@redhat.com> References: <05ffcc20abc6204af0cc496728833e4be60e9ce7.camel@redhat.com> Message-ID: Sean's right. 
Actually, I believe /var/lib/software-factory/bootstrap-data/ssh_keys/zuul_rsa gets written to /var/lib/zuul/.ssh as id_rsa, not zuul_rsa, which is confusing. So use /var/lib/zuul/.ssh/id_rsa. Note that you need to use the private key (without the .pub) in the connection configuration, not the public one; the public one is the one you add to the upstream Gerrit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From swogatpradhan22 at gmail.com Wed Jul 20 17:37:03 2022 From: swogatpradhan22 at gmail.com (Swogat Pradhan) Date: Wed, 20 Jul 2022 23:07:03 +0530 Subject: Correct way to add firewall rules in tripleo | Wallaby Message-ID: Hi, I am trying to add a rule for zabbix in my tripleo wallaby setup on top of centos 8 stream. i followed https://docs.openstack.org/project-deploy-guide/tripleo-docs/latest/features/security_hardening.html but got the error message: [ERROR]: Failed, module return: {'msg': 'value of action must be one of: append, insert, got: accept', 'failed': True, 'invocation': {'module_args': {'state': 'present', 'action': 'accept', 'jump': 'ACCEPT', 'chain': 'INPUT', 'protocol': 'tcp', 'source': '172.25.161.50', 'ctstate': ['NEW'], 'ip_version': 'ipv4', 'comment': '301 allow zabbix ipv4', 'destination_port': '10050', 'table': 'filter', 'match': [], 'syn': 'ignore', 'flush': False}}, 'warnings': ["The value 10050 (type int) in a string field was converted to '10050' (type string). 
If this does not look like what you expect, quote the entire value to ensure it does not change."], '_ansible_parsed': True} [ERROR]: Failed, return data: {'stdout': None, 'stderr': None, 'msg': 'value of action must be one of: append, insert, got: accept', 'cmd': None, 'rc': 0, 'failed': True} 2022-07-21 01:27:33.335477 | 48d539a1-1679-1e80-25fd-000000005aa1 | TASK | Manage firewall rules 2022-07-21 01:27:33.351515 | 48d539a1-1679-1e80-25fd-000000005542 | FATAL | Manage firewall rules | overcloud-controller-0 | error={"changed": false, "cmd": null, "msg": "value of action must be one of: append, insert, got: accept", "rc": 0, "stderr": null, "stdout": null} When i tried the following link: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/15/html/security_and_hardening_guide/using_director_to_configure_security_hardening my script is running fine but rules are not updated in iptables for zabbix. Can you please suggest a correct approach to open port 10050 in tripleo? With regards, Swogat Pradhan -------------- next part -------------- An HTML attachment was scrubbed... URL: From bshephar at redhat.com Wed Jul 20 09:37:14 2022 From: bshephar at redhat.com (Brendan Shephard) Date: Wed, 20 Jul 2022 19:37:14 +1000 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hey, I think it's weird that you got a response at all when you run the openstack endpoint list, since you said haproxy isn't running. So there should be nothing serving that endpoint. I noticed you have the stackrc file sourced. Try it again without that file sourced, so: $ su - stack $ OS_CLOUD=overcloud openstack endpoint list I would suspect that nothing should be responding. It could be the stackrc file causing issues with some of the environment variables. 
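[Editor's note: for context, OS_CLOUD selects an entry from clouds.yaml. A minimal sketch of the shape of that file follows — the values here are placeholders, and a TripleO deployment writes the real entries for you:]

```yaml
# ~/.config/openstack/clouds.yaml -- illustrative values only
clouds:
  undercloud:
    auth:
      auth_url: https://192.168.24.2:13000
      username: admin
      password: REPLACE_ME
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
  overcloud:
    auth:
      auth_url: https://overcloud-hsc.com:13000
      username: admin
      password: REPLACE_ME
      project_name: admin
      user_domain_name: Default
      project_domain_name: Default
    region_name: regionOne
```

With such a file in place, `OS_CLOUD=overcloud openstack endpoint list` works without sourcing any RC file, which avoids stale environment variables leaking between clouds.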
If the above command doesn't return anything, then my suggestion would be to re-run the deployment like this:
$ su - stack
$ export OS_CLOUD=undercloud
# Then run your deployment script again
$ bash overcloud_deploy.sh

The OS_CLOUD variable tells the openstackclient to look up the details about that cloud from your clouds.yaml file, which will be located in /home/stack/.config/openstack/clouds.yaml. This method is preferable to sourcing RC files.
Reference: https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config.html

Regarding the HAProxy warnings: I don't think they should be fatal. afaik, HAProxy should still be starting. If it's not, there might be another error that you will need to look for in the log files under /var/log/containers/haproxy/

I wasn't able to reproduce that warning by following the documentation for enabling TLS though. So it seems like an odd error to be getting.

Brendan Shephard
Software Engineer
Red Hat APAC
193 N Quay
Brisbane City QLD 4000

On Wed, Jul 20, 2022 at 7:02 PM Lokendra Rathour wrote:
> Hi Brendan / Team,
> Any lead for the issue raised?
>
> -Lokendra
>
> On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour <
> lokendrarathour at gmail.com> wrote:
>
>> Hi Brendan,
>> Thanks for the inputs.
>> when i run the command as you suggested I get this: >> >> (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint >> list >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >> | ID | Region | Service Name | Service >> Type | Enabled | Interface | URL | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >> | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | identity >> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >> | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | identity >> | True | admin | http://30.30.30.173:35357 | >> | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | identity >> | True | public | https://overcloud-hsc.com:13000 | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >> (undercloud) [stack at undercloud ~]$ >> >> >> On the other note that i notices was as below: >> >> - HAproxy container is not running. >> - [root at overcloud-controller-2 stdouts]# podman ps -a | grep >> haproxy >> e91dbde042db >> undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo >> 24 hours ago Exited (1) Less than a >> second ago container-puppet-haproxy\ >> - Checking logs: >> - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= >> 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] >> 2022-07-19T08:47:00.496323705+05:30 stderr F + . 
kolla_extend_start >> 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running >> command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper >> ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; >> else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' >> 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: >> 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec >> /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec >> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' >> 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' >> -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec >> /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec >> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi >> 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind >> fd00:fd00:fd00:9900::81:13776' : >> 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> automatically2022-07-19T08:47:00.513967576+05:30 stderr F >> [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind >> fd00:fd00:fd00:9900::81:13292' : >> 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. 
>> 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind >> fd00:fd00:fd00:9900::81:13004' : >> 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind >> fd00:fd00:fd00:9900::81:13005' : >> 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind >> fd00:fd00:fd00:2000::326:443' : >> - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library >> will use an automatically generated DH parameter. >> 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind >> fd00:fd00:fd00:9900::81:13000' : >> 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind >> fd00:fd00:fd00:9900::81:13696' : >> 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. 
>> 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind >> fd00:fd00:fd00:9900::81:13080' : >> 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind >> fd00:fd00:fd00:9900::81:13774' : >> 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind >> fd00:fd00:fd00:9900::81:13778' : >> 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. >> 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] 199/084700 >> (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind >> fd00:fd00:fd00:9900::81:13808' : >> 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load >> default 1024 bits DH parameter for certificate >> '/etc/pki/tls/private/overcloud_endpoint.pem'. >> 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library will >> use an automatically generated DH parameter. 
>> 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] 199/084700
>> (7) : Setting tune.ssl.default-dh-param to 1024 by default, if your
>> workload permits it you should set it to at least 2048. Please set a value
>> >= 1024 to make this warning disappear.
>> - pcs status also shows that haproxy is down for the controller with the
>> VIP:
>> - Failed Resource Actions:
>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-2
>> 'error' (1): call=139, status='complete', exitreason='podman failed to
>> launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022',
>> queued=0ms, exec=1222ms
>> * haproxy-bundle-podman-1_start_0 on overcloud-controller-1
>> 'error' (1): call=191, status='complete', exitreason='podman failed to
>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022',
>> queued=0ms, exec=1171ms
>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-1
>> 'error' (1): call=193, status='complete', exitreason='podman failed to
>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022',
>> queued=0ms, exec=1256ms
>>
>> Do let me know in case we need anything more around it.
>> Thanks once again for the support.
>> -Lokendra
>>
>> On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard
>> wrote:
>>
>>> Hey,
>>>
>>> Doesn't look like there is anything wrong with the certificate there.
>>> You would be getting a TLS error if that was the problem.
>>>
>>> What does your clouds.yaml file look like now? What happens if you run
>>> this command from the Undercloud node:
>>> $ OS_CLOUD=overcloud openstack endpoint list
>>>
>>> Do you get the same error?
>>>
>>> Brendan Shephard
>>>
>>> Software Engineer
>>>
>>> Red Hat APAC
>>>
>>> 193 N Quay
>>>
>>> Brisbane City QLD 4000
>>>
>>>
>>> On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour <
>>> lokendrarathour at gmail.com> wrote:
>>>
>>>> Hi Swogat and Vikarna,
>>>> We have tried adding the DNS entry for the overcloud domain.
We are
>>>> getting the same error:
>>>>
>>>> 2022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f |
>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint |
>>>> undercloud | 0:11:18.785769 | 2.16s
>>>> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 |
>>>> TASK | Create identity internal endpoint
>>>> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 |
>>>> FATAL | Create identity internal endpoint | undercloud |
>>>> error={"changed": false, "extra_data": {"data": null, "details": "The
>>>> request you have made requires authentication.", "response":
>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires
>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list
>>>> services: Client Error for url:
>>>> https://overcloud-hsc.com:13000/v3/services, The request you have made
>>>> requires authentication."}
>>>> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 |
>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint |
>>>> undercloud | 0:11:21.074605 | 2.
>>>>
>>>>
>>>> Certificate configs:
>>>>
>>>> [stack at undercloud oc-domain-name]$ cat server.csr.cnf
>>>> [req]
>>>> default_bits = 2048
>>>> prompt = no
>>>> default_md = sha256
>>>> distinguished_name = dn
>>>> [dn]
>>>> C=IN
>>>> ST=UTTAR PRADESH
>>>> L=NOIDA
>>>> O=HSC
>>>> OU=HSC
>>>> emailAddress=demo at demo.com
>>>> CN=overcloud-hsc.com
>>>> [stack at undercloud oc-domain-name]$ cat v3.ext
>>>> authorityKeyIdentifier=keyid,issuer
>>>> basicConstraints=CA:FALSE
>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment,
>>>> dataEncipherment
>>>> subjectAltName = @alt_names
>>>> [alt_names]
>>>> DNS.1=overcloud-hsc.com
>>>> [stack at undercloud oc-domain-name]$
>>>>
>>>> The difference we see from others is that we are using self-signed
>>>> certificates.
>>>>
>>>> Please let me know in case we need to check something else.
Somehow
>>>> this issue remains stuck.
>>>>
>>>>
>>>> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan <
>>>> swogatpradhan22 at gmail.com> wrote:
>>>>
>>>>> I was facing a similar kind of issue.
>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442
>>>>> Here is the solution that helped me fix it.
>>>>> Also make sure the CN that you will use is reachable from the undercloud
>>>>> (maybe the script should take care of it).
>>>>>
>>>>> Also please follow Mr. Tathe's mail to add the CN first.
>>>>>
>>>>> With regards
>>>>> Swogat Pradhan
>>>>>
>>>>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe
>>>>> wrote:
>>>>>
>>>>>> Hi Lokendra,
>>>>>>
>>>>>> The CN field is missing. Can you add that and generate the
>>>>>> certificate again?
>>>>>>
>>>>>> CN=ipaddress
>>>>>>
>>>>>> Also add dns.1=ipaddress under alt_names as a precaution.
>>>>>>
>>>>>> Vikarna
>>>>>>
>>>>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, <
>>>>>> lokendrarathour at gmail.com> wrote:
>>>>>>
>>>>>>> Hi Vikarna,
>>>>>>> Thanks for the inputs.
>>>>>>> I am not able to access any tabs in the GUI.
>>>>>>> [image: image.png] >>>>>>> >>>>>>> to re-state, we are failing at the time of deployment at step4 : >>>>>>> >>>>>>> >>>>>>> PLAY [External deployment step 4] >>>>>>> ********************************************** >>>>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>>> TASK | External deployment step 4 >>>>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>>> OK | External deployment step 4 | undercloud -> localhost | result={ >>>>>>> "changed": false, >>>>>>> "msg": "Use --start-at-task 'External deployment step 4' to >>>>>>> resume from this task" >>>>>>> } >>>>>>> [WARNING]: ('undercloud -> localhost', >>>>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>>>> missing from stats >>>>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>>>>>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>>>>>> INCLUDED | >>>>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>>>> | undercloud >>>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> TASK | Clean up legacy Cinder keystone catalog entries >>>>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:24.204562 | 2.48s >>>>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> TIMING | Clean up legacy 
Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:26.122584 | 4.40s >>>>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>> 0:11:26.124296 | 4.40s >>>>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>>>>>> TASK | Manage Keystone resources for OpenStack services >>>>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>>>>>> TIMING | Manage Keystone resources for OpenStack services | undercloud >>>>>>> | 0:11:26.169842 | 0.03s >>>>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>>>>>> TASK | Gather variables for each operating system >>>>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>>>>>> TIMING | tripleo_keystone_resources : Gather variables for each >>>>>>> operating system | undercloud | 0:11:26.253383 | 0.04s >>>>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>>>>>> TASK | Create Keystone Admin resources >>>>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>>>>>> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >>>>>>> undercloud | 0:11:26.299608 | 0.03s >>>>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>>>>>> INCLUDED | >>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>>>> undercloud >>>>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>> TASK | Create default domain >>>>>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>> OK | Create default domain | undercloud >>>>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>> TIMING | tripleo_keystone_resources : Create default domain | >>>>>>> undercloud | 0:11:28.437360 | 2.09s >>>>>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >>>>>>> TASK | Create admin and service 
projects >>>>>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >>>>>>> TIMING | tripleo_keystone_resources : Create admin and service projects >>>>>>> | undercloud | 0:11:28.483468 | 0.03s >>>>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>>>>>> INCLUDED | >>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>>>> undercloud >>>>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> TASK | Async creation of Keystone project >>>>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>> project | undercloud | 0:11:29.238078 | 0.72s >>>>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> CHANGED | Async creation of Keystone project | undercloud | item=service >>>>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>> project | undercloud | 0:11:29.586587 | 1.06s >>>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>> project | undercloud | 0:11:29.587916 | 1.07s >>>>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> TASK | Check Keystone project status >>>>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> WAITING | Check Keystone project status | undercloud | 30 retries left >>>>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> OK | Check Keystone project status | undercloud | item=admin >>>>>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> TIMING | tripleo_keystone_resources : 
Check Keystone project status | >>>>>>> undercloud | 0:11:35.260666 | 5.66s >>>>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> OK | Check Keystone project status | undercloud | item=service >>>>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>> undercloud | 0:11:35.494729 | 5.89s >>>>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>> undercloud | 0:11:35.498771 | 5.89s >>>>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>> TASK | Create admin role >>>>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>> OK | Create admin role | undercloud >>>>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>>>>>> 0:11:37.725949 | 2.20s >>>>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>> TASK | Create _member_ role >>>>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>> SKIPPED | Create _member_ role | undercloud >>>>>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud >>>>>>> | 0:11:37.783369 | 0.04s >>>>>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>> TASK | Create admin user >>>>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>> CHANGED | Create admin user | undercloud >>>>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >>>>>>> 0:11:41.145472 | 3.34s >>>>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>> TASK | Assign admin role to 
admin project for admin user >>>>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>> OK | Assign admin role to admin project for admin user | undercloud >>>>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>> TIMING | tripleo_keystone_resources : Assign admin role to admin >>>>>>> project for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>> TASK | Assign _member_ role to admin project for admin user >>>>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>> SKIPPED | Assign _member_ role to admin project for admin user | >>>>>>> undercloud >>>>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>> TASK | Create identity service >>>>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>> OK | Create identity service | undercloud >>>>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>> TIMING | tripleo_keystone_resources : Create identity service | >>>>>>> undercloud | 0:11:46.022362 | 1.66s >>>>>>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>> TASK | Create identity public endpoint >>>>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>> OK | Create identity public endpoint | undercloud >>>>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>> undercloud | 0:11:48.233349 | 2.19s >>>>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>>> TASK | Create identity internal endpoint >>>>>>> 2022-07-13 21:35:51.566826 | 
525400ae-089b-870a-fab6-0000000072b6 | >>>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>> request you have made requires authentication.", "response": >>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>> The request you have made requires authentication."} >>>>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint >>>>>>> | undercloud | 0:11:50.660654 | 2.41s >>>>>>> >>>>>>> PLAY RECAP >>>>>>> ********************************************************************* >>>>>>> localhost : ok=1 changed=0 unreachable=0 >>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>> undercloud : ok=39 changed=7 unreachable=0 >>>>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>>>> >>>>>>> Also : >>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>>>> [req] >>>>>>> default_bits = 2048 >>>>>>> prompt = no >>>>>>> default_md = sha256 >>>>>>> distinguished_name = dn >>>>>>> [dn] >>>>>>> C=IN >>>>>>> ST=UTTAR PRADESH >>>>>>> L=NOIDA >>>>>>> O=HSC >>>>>>> OU=HSC >>>>>>> emailAddress=demo at demo.com >>>>>>> >>>>>>> v3.ext: >>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>>>> 
authorityKeyIdentifier=keyid,issuer >>>>>>> basicConstraints=CA:FALSE >>>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>>> dataEncipherment >>>>>>> subjectAltName = @alt_names >>>>>>> [alt_names] >>>>>>> IP.1=fd00:fd00:fd00:9900::81 >>>>>>> >>>>>>> Using these files we create other certificates. >>>>>>> Please check and let me know in case we need anything else. >>>>>>> >>>>>>> >>>>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe < >>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Lokendra, >>>>>>>> >>>>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>>>> share the openssl.cnf or server.cnf. >>>>>>>> >>>>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Team, >>>>>>>>> Any input on this case raised. >>>>>>>>> >>>>>>>>> Thanks, >>>>>>>>> Lokendra >>>>>>>>> >>>>>>>>> >>>>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Shephard/Swogat, >>>>>>>>>> I tried changing the setting as suggested and it looks like it >>>>>>>>>> has failed at step 4 with error: >>>>>>>>>> >>>>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING >>>>>>>>>> | tripleo_keystone_resources : Create identity public endpoint | undercloud >>>>>>>>>> | 0:24:47.736198 | 2.21s >>>>>>>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf >>>>>>>>>> | TASK | Create identity internal endpoint >>>>>>>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf >>>>>>>>>> | FATAL | Create identity internal endpoint | undercloud | >>>>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>>>> request you have made requires authentication.", "response": >>>>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>>>> 
authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list
>>>>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services,
>>>>>>>>>> The request you have made requires authentication."}
>>>>>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Checking further the endpoint list:
>>>>>>>>>> I see only one endpoint for keystone is getting created.
>>>>>>>>>>
>>>>>>>>>> DeprecationWarning
>>>>>>>>>>
>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>>>>>> | ID | Region | Service Name |
>>>>>>>>>> Service Type | Enabled | Interface | URL
>>>>>>>>>> |
>>>>>>>>>>
>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone |
>>>>>>>>>> identity | True | admin | http://30.30.30.173:35357
>>>>>>>>>> |
>>>>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone |
>>>>>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000
>>>>>>>>>> |
>>>>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone |
>>>>>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000
>>>>>>>>>> |
>>>>>>>>>>
>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> It looks like something related to SSL; we have also verified
>>>>>>>>>> that the GUI login screen shows that certificates are applied.
>>>>>>>>>> We are exploring the logs further; meanwhile any suggestions or known
>>>>>>>>>> observations would be of great help.
>>>>>>>>>> Thanks again for the support.
>>>>>>>>>>
>>>>>>>>>> Best Regards,
>>>>>>>>>> Lokendra
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan <
>>>>>>>>>> swogatpradhan22 at gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> I had faced a similar kind of issue. For an IP-based setup you need
>>>>>>>>>>> to specify the domain name as the IP that you are going to use; this error
>>>>>>>>>>> is showing up because the SSL cert is IP-based but the FQDNs seem to be
>>>>>>>>>>> undercloud.com or overcloud.example.com.
>>>>>>>>>>> I think for the undercloud you can change the undercloud.conf.
>>>>>>>>>>>
>>>>>>>>>>> And will it work if we set the CloudDomain parameter to the IP
>>>>>>>>>>> address for the overcloud? Because it seems he has not specified the
>>>>>>>>>>> CloudDomain parameter, and overcloud.example.com is the default
>>>>>>>>>>> domain for the overcloud.
>>>>>>>>>>>
>>>>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, <
>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> What is the domain name you have specified in the
>>>>>>>>>>>> undercloud.conf file?
>>>>>>>>>>>> And what is the FQDN used for the generation of the SSL
>>>>>>>>>>>> cert?
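A quick way to answer the question above without guessing: the CN and SAN entries actually present in a PEM can be printed with openssl, so you can see exactly which names the certificate was generated for. A minimal sketch (the helper name `show_cert_names` is ours, and the endpoint PEM path is the one the haproxy logs above complain about; adjust as needed):

```shell
# show_cert_names: print the subject (CN) and the subjectAltName entries
# of a PEM certificate, so you can see which hostnames/IPs it covers.
show_cert_names() {
    cert="$1"
    openssl x509 -in "$cert" -noout -subject
    # -ext requires OpenSSL >= 1.1.1; on older builds use:
    #   openssl x509 -in "$cert" -noout -text | grep -A1 'Subject Alternative Name'
    openssl x509 -in "$cert" -noout -ext subjectAltName
}
```

For example, `show_cert_names /etc/pki/tls/private/overcloud_endpoint.pem` on a controller. To see what a live endpoint actually presents, something like `echo | openssl s_client -connect '[fd00:fd00:fd00:9900::81]:13000' 2>/dev/null | openssl x509 -noout -subject -ext subjectAltName` should print the same names; the CertificateError in the traceback means the host you connect with is not among them.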
>>>>>>>>>>>>
>>>>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, <
>>>>>>>>>>>> lokendrarathour at gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> Hi Team,
>>>>>>>>>>>>> We were trying to install the overcloud with SSL enabled, for which
>>>>>>>>>>>>> the UC is installed, but the OC install is failing at step 4:
>>>>>>>>>>>>>
>>>>>>>>>>>>> ERROR:
>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000):
>>>>>>>>>>>>> Max retries exceeded with url: / (Caused by
>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't
>>>>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg":
>>>>>>>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1}
>>>>>>>>>>>>> 2022-07-08 17:03:23.606739 |
>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder
>>>>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3',
>>>>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var":
>>>>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false,
>>>>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3",
>>>>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available
>>>>>>>>>>>>> identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000.
>>>>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>>>> server_hostname)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't 
>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>> last):\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>> last):\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>>>> in _send_request\n raise >>>>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>>>> run_globals)\n File >>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>> line 185, in \n File 
>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>> line 181, in main\n File >>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>> line 141, in run\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>>>> File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File 
>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' >>>>>>>>>>>>> % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>>>>>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>>>> >>>>>>>>>>>>> PLAY RECAP >>>>>>>>>>>>> ********************************************************************* >>>>>>>>>>>>> localhost : ok=0 changed=0 >>>>>>>>>>>>> unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 >>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 >>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 >>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 >>>>>>>>>>>>> unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>>>> undercloud : ok=28 changed=7 >>>>>>>>>>>>> unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>>>> 2022-07-08 17:03:23.647270 | >>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information >>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>> Total Tasks: 1373 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> in the deploy.sh: >>>>>>>>>>>>> >>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>> --networks-file >>>>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>>>> --baremetal-deployment >>>>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>>>> --network-config \ >>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>>>> \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>> -e >>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>> >>>>>>>>>>>>> Addition lines as highlighted in 
yellow were passed with >>>>>>>>>>>>> modifications: >>>>>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>>>> >>>>>>>>>>>>> # >>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>> update it. >>>>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>>>> instead >>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>> # >>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>>>> # description: | >>>>>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>>>>> deployments. >>>>>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>>>> # environments must also be used. >>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>>>>> # Type: boolean >>>>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>>>> >>>>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>>>> services in the public network. >>>>>>>>>>>>> # Type: string >>>>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>>>> >>>>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>>>> format. 
>>>>>>>>>>>>> #   Type: string
>>>>>>>>>>>>>   SSLRootCertificate: |
>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>     *** CERTIFICATE LINES TRIMMED ***
>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>
>>>>>>>>>>>>>   SSLCertificate: |
>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>     *** CERTIFICATE LINES TRIMMED ***
>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>   # The content of an SSL intermediate CA certificate in PEM format.
>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>   SSLIntermediateCertificate: ''
>>>>>>>>>>>>>
>>>>>>>>>>>>>   # The content of the SSL Key in PEM format.
>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>   SSLKey: |
>>>>>>>>>>>>>     -----BEGIN PRIVATE KEY-----
>>>>>>>>>>>>>     *** CERTIFICATE LINES TRIMMED ***
>>>>>>>>>>>>>     -----END PRIVATE KEY-----
>>>>>>>>>>>>>
>>>>>>>>>>>>>   # ******************************************************
>>>>>>>>>>>>>   # Static parameters - these are values that must be
>>>>>>>>>>>>>   # included in the environment but should not be changed.
>>>>>>>>>>>>>   # ******************************************************
>>>>>>>>>>>>>   # The filepath of the certificate as it will be stored in the controller.
>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>>>>>>>>>>>>
>>>>>>>>>>>>>   # *********************
>>>>>>>>>>>>>   # End static parameters
>>>>>>>>>>>>>   # *********************
>>>>>>>>>>>>>
>>>>>>>>>>>>> inject-trust-anchor.yaml
>>>>>>>>>>>>>
>>>>>>>>>>>>> # *******************************************************************
>>>>>>>>>>>>> # This file was created automatically by the sample environment
>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the file instead
>>>>>>>>>>>>> # of the original, if any customizations are needed.
>>>>>>>>>>>>> # *******************************************************************
>>>>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>>>>>>>>>>> # description: |
>>>>>>>>>>>>> #   When using an SSL certificate signed by a CA that is not in the default
>>>>>>>>>>>>> #   list of CAs, this environment allows adding a custom CA certificate to
>>>>>>>>>>>>> #   the overcloud nodes.
>>>>>>>>>>>>> parameter_defaults:
>>>>>>>>>>>>>   # The content of a CA's SSL certificate file in PEM format.
>>>>>>>>>>>>>   # This is evaluated on the client side.
>>>>>>>>>>>>>   # Mandatory. This parameter must be set by the user.
>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>   SSLRootCertificate: |
>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>     *** CERTIFICATE LINES TRIMMED ***
>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>
>>>>>>>>>>>>> resource_registry:
>>>>>>>>>>>>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>>>>>>>>>
>>>>>>>>>>>>> The procedure to create such files followed: Deploying with SSL -
>>>>>>>>>>>>> TripleO 3.0.0 documentation (openstack.org)
>>>>>>>>>>>>>
>>>>>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a
>>>>>>>>>>>>> self-signed, IP-based certificate, without DNS.
>>>>>>>>>>>>>
>>>>>>>>>>>>> Any idea around this error would be of great help.
>>>>>>>>>>>>>
>>>>>>>>>>>>> --
>>>>>>>>>>>>> skype: lokendrarathour
>>>>>>>>>>>>>
-------------- next part --------------
An HTML attachment was scrubbed...
From lokendrarathour at gmail.com  Wed Jul 20 09:01:50 2022
From: lokendrarathour at gmail.com (Lokendra Rathour)
Date: Wed, 20 Jul 2022 14:31:50 +0530
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To: 
References: 
Message-ID: 

Hi Brendan / Team,
Any leads on the issue raised?

-Lokendra

On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour wrote:

> Hi Brendan,
> Thanks for the inputs.
> When I run the command as you suggested, I get this:
>
> (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list
> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+
> | ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                                    |
> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+
> | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone     | identity     | True    | internal  | http://[fd00:fd00:fd00:2000::326]:5000 |
> | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone     | identity     | True    | admin     | http://30.30.30.173:35357              |
> | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone     | identity     | True    | public    | https://overcloud-hsc.com:13000        |
> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+
> (undercloud) [stack at undercloud ~]$
>
> On another note, I observed the following:
>
> - The HAProxy container is not running.
> - [root at overcloud-controller-2 stdouts]# podman ps -a | grep haproxy > e91dbde042db > undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo > 24 hours ago Exited (1) Less than a > second ago container-puppet-haproxy\ > - Checking logs: > - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= > 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] > 2022-07-19T08:47:00.496323705+05:30 stderr F + . kolla_extend_start > 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running > command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper > ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; > else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' > 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: 'bash > -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec > /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec > /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' > 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' -- > eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec > /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec > /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi > 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind > fd00:fd00:fd00:9900::81:13776' : > 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library will > use an automatically generated DH parameter. 
> automatically2022-07-19T08:47:00.513967576+05:30 stderr F [WARNING] > 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind > fd00:fd00:fd00:9900::81:13292' : > 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind > fd00:fd00:fd00:9900::81:13004' : > 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind > fd00:fd00:fd00:9900::81:13005' : > 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind > fd00:fd00:fd00:2000::326:443' : > - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind > fd00:fd00:fd00:9900::81:13000' : > 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. 
> 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind > fd00:fd00:fd00:9900::81:13696' : > 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind > fd00:fd00:fd00:9900::81:13080' : > 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind > fd00:fd00:fd00:9900::81:13774' : > 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind > fd00:fd00:fd00:9900::81:13778' : > 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library will > use an automatically generated DH parameter. 
> 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] 199/084700 > (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind > fd00:fd00:fd00:9900::81:13808' : > 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load > default 1024 bits DH parameter for certificate > '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library will > use an automatically generated DH parameter. > 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] 199/084700 > (7) : Setting tune.ssl.default-dh-param to 1024 by default, if your > workload permits it you should set it to at least 2048. Please set a value > >= 1024 to make this warning disappear. > - pcs status also show that proxy is down for the controller with > VIP: > - Failed Resource Actions: > * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 > 'error' (1): call=139, status='complete', exitreason='podman failed to > launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', > queued=0ms, exec=1222ms > * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 > 'error' (1): call=191, status='complete', exitreason='podman failed to > launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', > queued=0ms, exec=1171ms > * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 > 'error' (1): call=193, status='complete', exitreason='podman failed to > launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', > queued=0ms, exec=1256ms > > do let me know in case we need anything more around it. > thanks once again for the support. > -Lokendra > > On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard > wrote: > >> Hey, >> >> Doesn't look like there is anything wrong with the certificate there. You >> would be getting a TLS error if that was the problem. >> >> What does your clouds.yaml file look like now? 
What happens if you run >> this command from the Undercloud node: >> $ OS_CLOUD=overcloud openstack endpoint list >> >> Do you get the same error? >> >> Brendan Shephard >> >> Software Engineer >> >> Red Hat APAC >> >> 193 N Quay >> >> Brisbane City QLD 4000 >> @RedHat Red Hat >> Red Hat >> >> >> >> >> >> On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Swogat and Vikarna, >>> We have tried adding the DNS entry for the overcloud domain. we are >>> getting the same error: >>> >>> 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | >>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>> undercloud | 0:11:18.785769 | 2.16s >>> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | >>> TASK | Create identity internal endpoint >>> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | >>> FATAL | Create identity internal endpoint | undercloud | >>> error={"changed": false, "extra_data": {"data": null, "details": "The >>> request you have made requires authentication.", "response": >>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>> services: Client Error for url: >>> https://overcloud-hsc.com:13000/v3/services, The request you have made >>> requires authentication."} >>> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | >>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>> undercloud | 0:11:21.074605 | 2. 
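As a side note on the self-signed setup being debugged in this thread, the certificate generation can be sketched end-to-end. This is a minimal, self-contained example, not the poster's actual procedure: the filenames are hypothetical, the CN and the IPv6 VIP are taken from the thread, and the point is that the subjectAltName should cover every name or address the endpoints are reached on (the earlier "hostname ... doesn't match" error is exactly a SAN/CN mismatch).

```shell
# Illustrative sketch: create a self-signed certificate whose SAN carries
# both the DNS name and the public VIP, then confirm the SAN was embedded.
# Filenames (san.cnf, server.key, server.crt) are examples, not from the thread.
cat > san.cnf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = dn
x509_extensions    = v3_ext
[dn]
CN = overcloud-hsc.com
[v3_ext]
subjectAltName = DNS:overcloud-hsc.com, IP:fd00:fd00:fd00:9900::81
EOF

# Generate key + self-signed certificate in one step.
openssl req -x509 -nodes -newkey rsa:2048 -days 365 \
  -keyout server.key -out server.crt -config san.cnf

# Inspect the result; both the DNS entry and the IP entry should be listed.
openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"
```

With a certificate like this deployed, `openssl s_client -connect <vip>:13000` against the public endpoint can be used to confirm the served certificate actually carries the expected SAN entries.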
>>>
>>> Certificate configs:
>>>
>>> [stack at undercloud oc-domain-name]$ cat server.csr.cnf
>>> [req]
>>> default_bits = 2048
>>> prompt = no
>>> default_md = sha256
>>> distinguished_name = dn
>>> [dn]
>>> C=IN
>>> ST=UTTAR PRADESH
>>> L=NOIDA
>>> O=HSC
>>> OU=HSC
>>> emailAddress=demo at demo.com
>>> CN=overcloud-hsc.com
>>> [stack at undercloud oc-domain-name]$ cat v3.ext
>>> authorityKeyIdentifier=keyid,issuer
>>> basicConstraints=CA:FALSE
>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
>>> subjectAltName = @alt_names
>>> [alt_names]
>>> DNS.1=overcloud-hsc.com
>>> [stack at undercloud oc-domain-name]$
>>>
>>> The difference we see from others is that we are using self-signed
>>> certificates.
>>>
>>> Please let me know in case we need to check something else; somehow this
>>> issue remains stuck.
>>>
>>> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan <
>>> swogatpradhan22 at gmail.com> wrote:
>>>
>>>> I was facing a similar kind of issue.
>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442
>>>> Here is the solution that helped me fix it.
>>>> Also make sure the CN that you will use is reachable from the undercloud;
>>>> the script should (maybe) take care of it.
>>>>
>>>> Also please follow Mr. Tathe's mail to add the CN first.
>>>>
>>>> With regards,
>>>> Swogat Pradhan
>>>>
>>>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe
>>>> wrote:
>>>>
>>>>> Hi Lokendra,
>>>>>
>>>>> The CN field is missing. Can you add that and generate the certificate
>>>>> again?
>>>>>
>>>>> CN=ipaddress
>>>>>
>>>>> Also add DNS.1=ipaddress under alt_names as a precaution.
>>>>>
>>>>> Vikarna
>>>>>
>>>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, <
>>>>> lokendrarathour at gmail.com> wrote:
>>>>>
>>>>>> Hi Vikarna,
>>>>>> Thanks for the inputs.
>>>>>> I am not able to access any tabs in the GUI.
>>>>>> [image: image.png] >>>>>> >>>>>> to re-state, we are failing at the time of deployment at step4 : >>>>>> >>>>>> >>>>>> PLAY [External deployment step 4] >>>>>> ********************************************** >>>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>> TASK | External deployment step 4 >>>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>> OK | External deployment step 4 | undercloud -> localhost | result={ >>>>>> "changed": false, >>>>>> "msg": "Use --start-at-task 'External deployment step 4' to >>>>>> resume from this task" >>>>>> } >>>>>> [WARNING]: ('undercloud -> localhost', >>>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>>> missing from stats >>>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>>>>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>>>>> INCLUDED | >>>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>>> | undercloud >>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> TASK | Clean up legacy Cinder keystone catalog entries >>>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:24.204562 | 2.48s >>>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | 
undercloud | >>>>>> 0:11:26.122584 | 4.40s >>>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>> 0:11:26.124296 | 4.40s >>>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>>>>> TASK | Manage Keystone resources for OpenStack services >>>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>>>>> TIMING | Manage Keystone resources for OpenStack services | undercloud | >>>>>> 0:11:26.169842 | 0.03s >>>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>>>>> TASK | Gather variables for each operating system >>>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>>>>> TIMING | tripleo_keystone_resources : Gather variables for each operating >>>>>> system | undercloud | 0:11:26.253383 | 0.04s >>>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>>>>> TASK | Create Keystone Admin resources >>>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>>>>> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >>>>>> undercloud | 0:11:26.299608 | 0.03s >>>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>>>>> INCLUDED | >>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>>> undercloud >>>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>> TASK | Create default domain >>>>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>> OK | Create default domain | undercloud >>>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>> TIMING | tripleo_keystone_resources : Create default domain | undercloud >>>>>> | 0:11:28.437360 | 2.09s >>>>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | >>>>>> TASK | Create admin and service projects >>>>>> 2022-07-13 21:35:29.391295 | 
525400ae-089b-870a-fab6-0000000072ae | >>>>>> TIMING | tripleo_keystone_resources : Create admin and service projects | >>>>>> undercloud | 0:11:28.483468 | 0.03s >>>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>>>>> INCLUDED | >>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>>> undercloud >>>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> TASK | Async creation of Keystone project >>>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>>> | undercloud | 0:11:29.238078 | 0.72s >>>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> CHANGED | Async creation of Keystone project | undercloud | item=service >>>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>>> | undercloud | 0:11:29.586587 | 1.06s >>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone project >>>>>> | undercloud | 0:11:29.587916 | 1.07s >>>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> TASK | Check Keystone project status >>>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> WAITING | Check Keystone project status | undercloud | 30 retries left >>>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> OK | Check Keystone project status | undercloud | item=admin >>>>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>> undercloud | 0:11:35.260666 | 5.66s 
>>>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> OK | Check Keystone project status | undercloud | item=service >>>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>> undercloud | 0:11:35.494729 | 5.89s >>>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>> undercloud | 0:11:35.498771 | 5.89s >>>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>>>>> TASK | Create admin role >>>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>>>>> OK | Create admin role | undercloud >>>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>>>>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>>>>> 0:11:37.725949 | 2.20s >>>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>> TASK | Create _member_ role >>>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>> SKIPPED | Create _member_ role | undercloud >>>>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | >>>>>> 0:11:37.783369 | 0.04s >>>>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>> TASK | Create admin user >>>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>> CHANGED | Create admin user | undercloud >>>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | >>>>>> 0:11:41.145472 | 3.34s >>>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>> TASK | Assign admin role to admin project for admin user >>>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>> 
OK | Assign admin role to admin project for admin user | undercloud >>>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>> TIMING | tripleo_keystone_resources : Assign admin role to admin project >>>>>> for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>> TASK | Assign _member_ role to admin project for admin user >>>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>> SKIPPED | Assign _member_ role to admin project for admin user | undercloud >>>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>> TASK | Create identity service >>>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>> OK | Create identity service | undercloud >>>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>> TIMING | tripleo_keystone_resources : Create identity service | >>>>>> undercloud | 0:11:46.022362 | 1.66s >>>>>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>> TASK | Create identity public endpoint >>>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>> OK | Create identity public endpoint | undercloud >>>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>> undercloud | 0:11:48.233349 | 2.19s >>>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>> TASK | Create identity internal endpoint >>>>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>> error={"changed": false, "extra_data": {"data": null, 
"details": "The >>>>>> request you have made requires authentication.", "response": >>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>> The request you have made requires authentication."} >>>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>>>>> undercloud | 0:11:50.660654 | 2.41s >>>>>> >>>>>> PLAY RECAP >>>>>> ********************************************************************* >>>>>> localhost : ok=1 changed=0 unreachable=0 >>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>> undercloud : ok=39 changed=7 unreachable=0 >>>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>>> >>>>>> Also : >>>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>>> [req] >>>>>> default_bits = 2048 >>>>>> prompt = no >>>>>> default_md = sha256 >>>>>> distinguished_name = dn >>>>>> [dn] >>>>>> C=IN >>>>>> ST=UTTAR PRADESH >>>>>> L=NOIDA >>>>>> O=HSC >>>>>> OU=HSC >>>>>> emailAddress=demo at demo.com >>>>>> >>>>>> v3.ext: >>>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>>> authorityKeyIdentifier=keyid,issuer >>>>>> basicConstraints=CA:FALSE >>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>> dataEncipherment >>>>>> subjectAltName = @alt_names >>>>>> [alt_names] >>>>>> 
IP.1=fd00:fd00:fd00:9900::81 >>>>>> >>>>>> Using these files we create other certificates. >>>>>> Please check and let me know in case we need anything else. >>>>>> >>>>>> >>>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe < >>>>>> vikarnatathe at gmail.com> wrote: >>>>>> >>>>>>> Hi Lokendra, >>>>>>> >>>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>>> share the openssl.cnf or server.cnf. >>>>>>> >>>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Team, >>>>>>>> Any input on this case raised. >>>>>>>> >>>>>>>> Thanks, >>>>>>>> Lokendra >>>>>>>> >>>>>>>> >>>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Shephard/Swogat, >>>>>>>>> I tried changing the setting as suggested and it looks like it has >>>>>>>>> failed at step 4 with error: >>>>>>>>> >>>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING >>>>>>>>> | tripleo_keystone_resources : Create identity public endpoint | undercloud >>>>>>>>> | 0:24:47.736198 | 2.21s >>>>>>>>> 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf >>>>>>>>> | TASK | Create identity internal endpoint >>>>>>>>> 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf >>>>>>>>> | FATAL | Create identity internal endpoint | undercloud | >>>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>>> request you have made requires authentication.", "response": >>>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>>> The request you have made requires authentication."} >>>>>>>>> 2022-07-12 21:31:34.470415 | 
525400ae-089b-fb79-67ac-000000 >>>>>>>>> >>>>>>>>> >>>>>>>>> Checking further the endpoint list: >>>>>>>>> I see only one endpoint for keystone getting created. >>>>>>>>> >>>>>>>>> DeprecationWarning >>>>>>>>> >>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>> | ID | Region | Service Name | >>>>>>>>> Service Type | Enabled | Interface | URL >>>>>>>>> | >>>>>>>>> >>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>>>>>> identity | True | admin | http://30.30.30.173:35357 >>>>>>>>> | >>>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>>>>>> | >>>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>>>>>> | >>>>>>>>> >>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>> >>>>>>>>> >>>>>>>>> It looks like something related to the SSL; we have also verified >>>>>>>>> that the GUI login screen shows that certificates are applied. >>>>>>>>> Exploring more in the logs; meanwhile, any suggestions or known >>>>>>>>> observations would be of great help. >>>>>>>>> Thanks again for the support. 
>>>>>>>>> >>>>>>>>> Best Regards, >>>>>>>>> Lokendra >>>>>>>>> >>>>>>>>> >>>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> I had faced a similar kind of issue. For an IP-based setup you need >>>>>>>>>> to specify the domain name as the IP that you are going to use; this error >>>>>>>>>> is showing up because the SSL certificate is IP-based but the FQDN seems to be >>>>>>>>>> undercloud.com or overcloud.example.com. >>>>>>>>>> I think for the undercloud you can change undercloud.conf. >>>>>>>>>> >>>>>>>>>> And will it work if we specify the CloudDomain parameter as the IP >>>>>>>>>> address for the overcloud? Because it seems he has not specified the >>>>>>>>>> CloudDomain parameter, and overcloud.example.com is the default >>>>>>>>>> domain for the overcloud. >>>>>>>>>> >>>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> What is the domain name you have specified in the >>>>>>>>>>> undercloud.conf file? >>>>>>>>>>> And what is the FQDN used for the generation of the SSL >>>>>>>>>>> cert? 
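The failure being discussed above is a hostname-verification mismatch: the client connects to an IP literal, but the certificate only names a DNS host. A toy version of the subjectAltName check (an illustration of the rule, not urllib3's actual code; the SAN entries and hosts are taken from the quoted logs) makes the behaviour concrete:

```python
import ipaddress

def cert_matches_host(san_entries, host):
    """Toy model of the hostname check the TLS client performs: the
    connected host must appear in the certificate's subjectAltName.
    An IP literal only matches 'IP Address' entries, never DNS names."""
    for kind, value in san_entries:
        if kind == "IP Address":
            try:
                if ipaddress.ip_address(host) == ipaddress.ip_address(value):
                    return True
            except ValueError:
                continue  # host is not an IP literal
        elif kind == "DNS" and host.lower() == value.lower():
            return True
    return False

# A cert issued only for 'undercloud.com' fails for the IPv6 VIP,
# reproducing "hostname 'fd00:...' doesn't match 'undercloud.com'":
print(cert_matches_host([("DNS", "undercloud.com")],
                        "fd00:fd00:fd00:9900::2ef"))  # False
# A cert whose SAN carries the IP (IP.1=... in v3.ext) succeeds:
print(cert_matches_host([("IP Address", "fd00:fd00:fd00:9900::2ef")],
                        "fd00:fd00:fd00:9900::2ef"))  # True
```

This is why, for an IP-only deployment without DNS, the certificate's SAN must list the exact endpoint IPs.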
>>>>>>>>>>> >>>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Team, >>>>>>>>>>>> We were trying to install overcloud with SSL enabled for which >>>>>>>>>>>> the UC is installed, but OC install is getting failed at step 4: >>>>>>>>>>>> >>>>>>>>>>>> ERROR >>>>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", "msg": >>>>>>>>>>>> "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>> 2022-07-08 17:03:23.606739 | >>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder >>>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3', >>>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var": >>>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false, >>>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3", >>>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available >>>>>>>>>>>> identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>>> server_hostname)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>> match 
'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>> last):\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>> last):\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>>> in 
get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>>> in _send_request\n raise >>>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the above >>>>>>>>>>>> exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>>> run_globals)\n File >>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>> line 185, in \n File >>>>>>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>> line 181, in main\n File >>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>> line 141, in run\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>>> File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. %s' >>>>>>>>>>>> % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find >>>>>>>>>>>> versioned identity endpoints when attempting to authenticate. Please check >>>>>>>>>>>> that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": "", >>>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>>> >>>>>>>>>>>> PLAY RECAP >>>>>>>>>>>> ********************************************************************* >>>>>>>>>>>> localhost : ok=0 changed=0 unreachable=0 >>>>>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 unreachable=0 >>>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 unreachable=0 >>>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 unreachable=0 >>>>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 >>>>>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>>> undercloud : ok=28 changed=7 unreachable=0 >>>>>>>>>>>> failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>>> 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>> Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>> Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>> >>>>>>>>>>>> 
>>>>>>>>>>>> in the deploy.sh: >>>>>>>>>>>> >>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>> --networks-file >>>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>>> --baremetal-deployment >>>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>>> --network-config \ >>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>>> \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>> -e >>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>> >>>>>>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>>>>>> modifications: >>>>>>>>>>>> tls-endpoints-public-ip.yaml: 
>>>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>>> >>>>>>>>>>>> # >>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>>>> it. >>>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>>> instead >>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>> # >>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>>> # description: | >>>>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>>>> deployments. >>>>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>>> # environments must also be used. >>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>>>> # Type: boolean >>>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>>> >>>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>>> services in the public network. >>>>>>>>>>>> # Type: string >>>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>>> >>>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>>> format. >>>>>>>>>>>> # Type: string >>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>> >>>>>>>>>>>> SSLCertificate: | >>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>> # The content of an SSL intermediate CA certificate in PEM >>>>>>>>>>>> format. 
>>>>>>>>>>>> # Type: string >>>>>>>>>>>> SSLIntermediateCertificate: '' >>>>>>>>>>>> >>>>>>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>>>>>> # Type: string >>>>>>>>>>>> SSLKey: | >>>>>>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>> -----END PRIVATE KEY----- >>>>>>>>>>>> >>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>> # Static parameters - these are values that must be >>>>>>>>>>>> # included in the environment but should not be changed. >>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>> # The filepath of the certificate as it will be stored in the >>>>>>>>>>>> controller. >>>>>>>>>>>> # Type: string >>>>>>>>>>>> DeployedSSLCertificatePath: >>>>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>>>>>> >>>>>>>>>>>> # ********************* >>>>>>>>>>>> # End static parameters >>>>>>>>>>>> # ********************* >>>>>>>>>>>> >>>>>>>>>>>> inject-trust-anchor.yaml >>>>>>>>>>>> >>>>>>>>>>>> # >>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>> # This file was created automatically by the sample environment >>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update >>>>>>>>>>>> it. >>>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>>> instead >>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>> # >>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>>>>>> # description: | >>>>>>>>>>>> # When using an SSL certificate signed by a CA that is not in >>>>>>>>>>>> the default >>>>>>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>>>>>> certificate to >>>>>>>>>>>> # the overcloud nodes. >>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>> # The content of a CA's SSL certificate file in PEM format. 
>>>>>>>>>>>> This is evaluated on the client side. >>>>>>>>>>>> # Mandatory. This parameter must be set by the user. >>>>>>>>>>>> # Type: string >>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>> >>>>>>>>>>>> resource_registry: >>>>>>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>>>>>> >>>>>>>>>>>> The procedure to create such files was followed using: >>>>>>>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) >>>>>>>>>>>> >>>>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. *a self-signed, >>>>>>>>>>>> IP-based certificate, without DNS.* >>>>>>>>>>>> >>>>>>>>>>>> Any idea around this error would be of great help. >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> skype: lokendrarathour >>>>>>>>>>>> -- ~ Lokendra skype: lokendrarathour From the.wade.albright at gmail.com Wed Jul 20 14:36:08 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Wed, 20 Jul 2022 07:36:08 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: Switching to session auth solved the problem, and it seems like the better way to go anyway for equipment that supports it. Thanks again for all your help! 
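The caching failure mode described in this thread can be sketched in miniature. The sketch below is purely illustrative (none of these class or function names are ironic's or sushy's internals): a client cache keyed only by BMC address keeps serving a client built with the old password, while keying the cache on the credentials as well lets a password update take effect without a restart.

```python
class AuthError(Exception):
    """Stands in for a Redfish 401 response."""

class RedfishClient:
    """Hypothetical client that remembers the password it was created with."""
    def __init__(self, password):
        self.password = password

    def get_power_state(self, bmc_password):
        # A real BMC validates credentials on each request; we simulate it.
        if self.password != bmc_password:
            raise AuthError("401 Unauthorized")
        return "power on"

_cache = {}

def get_client(address, password):
    # Keying on (address, password) means a changed redfish_password
    # yields a fresh client instead of reusing a stale cached one.
    key = (address, password)
    if key not in _cache:
        _cache[key] = RedfishClient(password)
    return _cache[key]

old = get_client("bmc1", "oldpass")
print(old.get_power_state("oldpass"))   # works with the original password
# BMC password rotated, and driver_info updated to match:
new = get_client("bmc1", "newpass")
print(new.get_power_state("newpass"))   # works again, no restart needed
```

A cache keyed only on `address` would keep returning `old` here and raise `AuthError` on every call, reproducing the "works again only after a conductor restart" symptom reported above.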
Wade On Tue, Jul 19, 2022 at 5:37 PM Julia Kreger wrote: > Just to provide a brief update for the mailing list. It looks like > this is a case of use of Basic Auth with the BMC, where we were not > catching the error properly... and thus not reporting the > authentication failure to ironic so it would catch, and initiate a new > client with the most up to date password. The default, typically used > path is Session based authentication as BMCs generally handle internal > session/user login tracking in a far better fashion. But not every BMC > supports sessions. > > Fix in review[0] :) > > -Julia > [0] https://review.opendev.org/c/openstack/sushy/+/850425 > > On Mon, Jul 18, 2022 at 4:15 PM Julia Kreger > wrote: > > > > Excellent, hopefully I'll be able to figure out why Sushy is not doing > > the needful... Or if it is and Ironic is not picking up on it. > > > > Anyway, I've posted > > https://review.opendev.org/c/openstack/ironic/+/850259 which might > > handle this issue. Obviously a work in progress, but it represents > > what I think is happening inside of ironic itself leading into sushy > > when cache access occurs. > > > > On Mon, Jul 18, 2022 at 4:04 PM Wade Albright > > wrote: > > > > > > Sounds good, I will do that tomorrow. Thanks Julia. > > > > > > On Mon, Jul 18, 2022 at 3:27 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > > >> > > >> Debug would be best. I think I have an idea what is going on, and this > > >> is a similar variation. If you want, you can email them directly to > > >> me. Specifically only need entries reported by the sushy library and > > >> ironic.drivers.modules.redfish.utils. > > >> > > >> On Mon, Jul 18, 2022 at 3:20 PM Wade Albright > > >> wrote: > > >> > > > >> > I'm happy to supply some logs, what verbosity level should i use? > And should I just embed the logs in email to the list or upload somewhere? 
> > >> > > > >> > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > > >> >> > > >> >> If you could supply some conductor logs, that would be helpful. It > > >> >> should be re-authenticating, but obviously we have a larger bug > there > > >> >> we need to find the root issue behind. > > >> >> > > >> >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright > > >> >> wrote: > > >> >> > > > >> >> > I was able to use the patches to update the code, but > unfortunately the problem is still there for me. > > >> >> > > > >> >> > I also tried an RPM upgrade to the versions Julia mentioned had > the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - > Released in January 2022. But it did not fix the problem. > > >> >> > > > >> >> > I am able to consistently reproduce the error. > > >> >> > - step 1: change BMC password directly on the node itself > > >> >> > - step 2: update BMC password (redfish_password) in ironic with > 'openstack baremetal node set --driver-info > redfish_password='newpass' > > >> >> > > > >> >> > After step 1 there are errors in the logs entries like "Session > authentication appears to have been lost at some point in time" and > eventually it puts the node into maintenance mode and marks the power state > as "none." > > >> >> > After step 2 and taking the host back out of maintenance mode, > it goes through a similar set of log entries puts the node into MM again. > > >> >> > > > >> >> > After the above steps, a conductor restart fixes the problem and > operations work normally again. Given this it seems like there is still > some kind of caching issue. > > >> >> > > > >> >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > > >> >> >> > > >> >> >> Hi Julia, > > >> >> >> > > >> >> >> Thank you so much for the reply! Hopefully this is the issue. > I'll try out the patches next week and report back. 
I'll also email you on > Monday about the versions, that would be very helpful to know. > > >> >> >> > > >> >> >> Thanks again, really appreciate it. > > >> >> >> > > >> >> >> Wade > > >> >> >> > > >> >> >> > > >> >> >> > > >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger < > juliaashleykreger at gmail.com> wrote: > > >> >> >>> > > >> >> >>> Greetings! > > >> >> >>> > > >> >> >>> I believe you need two patches, one in ironic and one in sushy. > > >> >> >>> > > >> >> >>> Sushy: > > >> >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 > > >> >> >>> > > >> >> >>> Ironic: > > >> >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 > > >> >> >>> > > >> >> >>> I think it is variation, and the comment about working after > you restart the conductor is the big signal to me. I?m on a phone on a bad > data connection, if you email me on Monday I can see what versions the > fixes would be in. > > >> >> >>> > > >> >> >>> For the record, it is a session cache issue, the bug was that > the service didn?t quite know what to do when auth fails. > > >> >> >>> > > >> >> >>> -Julia > > >> >> >>> > > >> >> >>> > > >> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright < > the.wade.albright at gmail.com> wrote: > > >> >> >>>> > > >> >> >>>> Hi, > > >> >> >>>> > > >> >> >>>> I'm hitting a problem when trying to update the > redfish_password for an existing node. I'm curious to know if anyone else > has encountered this problem. I'm not sure if I'm just doing something > wrong or if there is a bug. Or if the problem is unique to my setup. > > >> >> >>>> > > >> >> >>>> I have a node already added into ironic with all the driver > details set, and things are working fine. I am able to run deployments. > > >> >> >>>> > > >> >> >>>> Now I need to change the redfish password on the host. So I > update the password for redfish access on the host, then use an 'openstack > baremetal node set --driver-info redfish_password=' command > to set the new redfish_password. 
> > >> >> >>>> > > >> >> >>>> Once this has been done, deployment no longer works. I see > redfish authentication errors in the logs and the operation fails. I waited > a bit to see if there might just be a delay in updating the password, but > after awhile it still didn't work. > > >> >> >>>> > > >> >> >>>> I restarted the conductor, and after that things work fine > again. So it seems like the password is cached or something. Is there a way > to force the password to update? I even tried removing the redfish > credentials and re-adding them, but that didn't work either. Only a > conductor restart seems to make the new password work. > > >> >> >>>> > > >> >> >>>> We are running Xena, using rpm installation on Oracle Linux > 8.5. > > >> >> >>>> > > >> >> >>>> Thanks in advance for any help with this issue. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sergey.drozdov.dev at gmail.com Wed Jul 20 19:18:21 2022 From: sergey.drozdov.dev at gmail.com (Sergey Drozdov) Date: Wed, 20 Jul 2022 20:18:21 +0100 Subject: [dev][horizon][keystone] Deprecated keystone limits leading to a UX issue In-Reply-To: References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: <7FD058D3-118D-46F9-B674-79FF4FC619D2@gmail.com> To whom it may concern, We were hoping to propose something akin to filtering/dynamic listing to the horizon team in order to rectify the issue. In turn, for the keystone team, we were wondering whether there is anything we can do with the API in order to simplify the aforementioned horizon proposal; we are hoping to get both teams on board. Subsequently, should we reach an agreement, we would be more than happy to create a bug report, follow that with (a) blueprint(s) and proceed immediately; unless any other terms of engagement are amicable. Please let me know how everyone would like to proceed. 
Best Regards, Sergey Drozdov Software Engineer The Hut Group > On 19 Jul 2022, at 14:47, Danny Webb wrote: > > It was a conscious decision by the keystone team back in 2015 as far as we can tell. There was a large discussion on the mailing list regarding this that seemed to have left this issue unresolved (most specifically around user pagination, but this seems to have affected project / domain elements as well). > > https://lists.openstack.org/pipermail/openstack-dev/2015-August/thread.html#72082 > > Ultimately what we're seeking to do is start a discussion within the keystone / horizon (and potentially skyline) community about how we can rectify the current issues we're facing around the usability of the UX portion of keystone elements. Eg, if pagination isn't the way the keystone community wants to go should we look at instead having dynamic filtering instead in the UI? Are there any other options that people can think of that might be a better way forward? > From: Julia Kreger > > Sent: 19 July 2022 14:21 > To: Danny Webb > > Cc: Dmitriy Rabotyagov >; openstack-discuss > > Subject: Re: [dev] directions to the right project team > > CAUTION: This email originates from outside THG > > On Tue, Jul 19, 2022 at 3:44 AM Danny Webb wrote: > > > > Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. > > > > This is quite concerning, and a quick look at the code confirms it. > Mostly. There are remnants of "hints" and SQL Query filtering, but the > internal limit is just a truncation which seems bad as well. > > https://github.com/openstack/keystone/blame/d7b1d57cae738183f8d85413e942402a8a4efb31/keystone/server/flask/common.py#L675 > > This seems like a fundamental performance oriented feature because the > overhead in data conversion can be quite a bit when you have a large > number of well... any objects being returned from a database. > > Does anyone know if a bug is open for this issue? 
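Julia's concern — an internal limit that silently truncates results instead of paginating them — can be illustrated with a short sketch. The helper names here are hypothetical, not keystone's actual code; the marker/limit pattern is the one used elsewhere in OpenStack list APIs:

```python
# Sketch: why a bare truncation loses data while marker/limit pagination
# does not. Illustrative only -- not keystone internals.

def truncate(items, limit):
    # What a plain internal limit does: silently drops everything
    # past `limit`, and the client has no way to ask for the rest.
    return items[:limit]

def paginate(items, limit, marker=None):
    # Marker-based paging: resume immediately after the last item
    # the client already saw.
    start = 0
    if marker is not None:
        start = items.index(marker) + 1
    return items[start:start + limit]

def fetch_all(items, limit):
    # A client can recover the full set by walking pages via markers.
    out, marker = [], None
    while True:
        page = paginate(items, limit, marker)
        if not page:
            return out
        out.extend(page)
        marker = page[-1]

users = [f"user-{i}" for i in range(10)]
assert truncate(users, 4) == users[:4]   # data past the limit is lost
assert fetch_all(users, 4) == users      # pagination recovers everything
```

With truncation only, the data-conversion overhead Julia mentions is bounded, but so is correctness: anything past the limit is simply invisible to the client.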
> Danny Webb > Principal OpenStack Engineer > The Hut Group > > Tel: > Email: Danny.Webb at thehutgroup.com > > For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. > > Confidentiality Notice > This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. > > Encryptions and Viruses > Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. > > Monitoring > Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. > > hgvyjuv -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sergey.drozdov.dev at gmail.com Wed Jul 20 19:35:33 2022 From: sergey.drozdov.dev at gmail.com (Sergey Drozdov) Date: Wed, 20 Jul 2022 20:35:33 +0100 Subject: [dev][designate][dns] Adding private DNS feature Message-ID: <485F0C96-63D7-49F1-9860-655EAF837974@gmail.com> Dear Sir/Madam, We are running OpenStack at scale and now have a requirement to have private DNS and were wondering if the designate team have any appetite for this? If yes, then further discussion is warranted as we would be happy to get the ball rolling on this. Best Regards, Sergey Drozdov Software Engineer The Hut Group From haiwu.us at gmail.com Wed Jul 20 19:43:43 2022 From: haiwu.us at gmail.com (hai wu) Date: Wed, 20 Jul 2022 14:43:43 -0500 Subject: [nova] nova hypervisor oom killed some openstack guest Message-ID: nova hypervisor sometimes oom would kill some openstack guests. Is it possible to not allow kernel to oom kill any openstack guests? ram is not oversubscribed much .. From ces.eduardo98 at gmail.com Wed Jul 20 19:55:30 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Wed, 20 Jul 2022 16:55:30 -0300 Subject: Manila/NFS support for Zun In-Reply-To: References: Message-ID: Hello, sorry for the late reply On Sat, Jul 16, 2022 at 13:28, Vaibhav wrote: > Hi, > > I want to mount my Manila shares on containers managed by Zun. > > I can see a Fuxi project and driver for this but it is discontinued now. > > I want to have a shared file system to be mounted on multiple containers > simultaneously, it is not possible with cinder. > > Is there any alternative to Fuxi? > I can't think of many by looking at the use case. Isn't there anything on Zun itself that allows the shares to be mounted directly in the containers? > Or can it work with yoga release? > > I would not say so, as the project is no longer maintained and the latest commit is from 2017. > Please advise and give a suggestion.
> > Regards, > Vaibhav > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From noonedeadpunk at gmail.com Wed Jul 20 20:07:50 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Wed, 20 Jul 2022 22:07:50 +0200 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: Message-ID: I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are, to reduce chances of oom killing them. On Wed, Jul 20, 2022 at 21:52, hai wu wrote: > nova hypervisor sometimes oom would kill some openstack guests. > > Is it possible to not allow kernel to oom kill any openstack guests? > ram is not oversubscribed much .. > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Wed Jul 20 21:59:00 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Wed, 20 Jul 2022 14:59:00 -0700 Subject: [dev][designate][dns] Adding private DNS feature In-Reply-To: <485F0C96-63D7-49F1-9860-655EAF837974@gmail.com> References: <485F0C96-63D7-49F1-9860-655EAF837974@gmail.com> Message-ID: Hi Sergey, Can you tell me a little bit more about what you want to accomplish? Private DNS can mean different things, such as DNS-over-TLS, DNS-over-HTTPS, split views, etc. Michael On Wed, Jul 20, 2022 at 12:51 PM Sergey Drozdov wrote: > > Dear Sir/Madam, > > We are running OpenStack at scale and now have a requirement to have private DNS and were wondering if the designate team have any appetite for this? If yes, then further discussion is warranted as we would be happy to get the ball rolling on this.
> > Best Regards, > Sergey Drozdov > Software Engineer > The Hut Group From haiwu.us at gmail.com Wed Jul 20 22:17:56 2022 From: haiwu.us at gmail.com (hai wu) Date: Wed, 20 Jul 2022 17:17:56 -0500 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: Message-ID: Is there any configuration file that is needed to ensure guest domains are under systemd machine.slice? not seeing anything under machine.slice .. On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov wrote: > > I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. > > ??, 20 ???. 2022 ?., 21:52 hai wu : >> >> nova hypervisor sometimes oom would kill some openstack guests. >> >> Is it possible to not allow kernel to oom kill any openstack guests? >> ram is not oversubscribed much .. >> From cboylan at sapwetik.org Wed Jul 20 22:41:42 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 20 Jul 2022 15:41:42 -0700 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: Message-ID: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > Is there any configuration file that is needed to ensure guest domains > are under systemd machine.slice? not seeing anything under > machine.slice .. I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > wrote: >> >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. >> >> ??, 20 ???. 2022 ?., 21:52 hai wu : >>> >>> nova hypervisor sometimes oom would kill some openstack guests. >>> >>> Is it possible to not allow kernel to oom kill any openstack guests? >>> ram is not oversubscribed much .. 
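As background for the OOMScoreAdjust discussion above: the kernel's per-process OOM-killer inputs are visible under /proc, which is a quick way to check what bias, if any, is currently applied to qemu or other processes on a hypervisor. A minimal read-only sketch (Linux only; actually changing the values requires privileges):

```python
def read_oom_settings(pid="self"):
    """Read the kernel's OOM-killer bookkeeping for one process.

    oom_score     -- the "badness" value the OOM killer compares
    oom_score_adj -- the admin-set bias, in the range -1000..1000
                     (-1000 effectively exempts the process)
    """
    with open(f"/proc/{pid}/oom_score") as f:
        score = int(f.read())
    with open(f"/proc/{pid}/oom_score_adj") as f:
        adj = int(f.read())
    return score, adj


score, adj = read_oom_settings()
assert score >= 0
assert -1000 <= adj <= 1000
```

Pointing `pid` at a qemu process shows how likely the kernel considers that guest as an OOM victim, which is useful context for the slice-level discussion in this thread.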
>>> From Danny.Webb at thehutgroup.com Wed Jul 20 22:46:14 2022 From: Danny.Webb at thehutgroup.com (Danny Webb) Date: Wed, 20 Jul 2022 22:46:14 +0000 Subject: [dev][designate][dns] Adding private DNS feature In-Reply-To: References: <485F0C96-63D7-49F1-9860-655EAF837974@gmail.com> Message-ID: We're thinking more of a private view available to individual or shared amongst a defined set of tenants. Loosely something akin to having amphora that serve up internal DNS that can be shared among one or more tenants with a deep integration into nova/neutron. Use case would be for example a enterprise that utilises many projects for various teams but wants to offer a single DNS domain across projects that isn't externally facing. We'll flush out a better use case and proposed architecture in the coming weeks, we're just putting some feelers out to see if this kind of thing was of any interest or use to others. ________________________________ From: Michael Johnson Sent: 20 July 2022 22:59 To: Sergey Drozdov Cc: openstack-discuss Subject: Re: [dev][designate][dns] Adding private DNS feature CAUTION: This email originates from outside THG Hi Sergey, Can you tell me a little bit more about what you want to accomplish? Private DNS can mean different things, such as DNS-over-TLS, DNS-over-HTTPS, split views, etc. Michael On Wed, Jul 20, 2022 at 12:51 PM Sergey Drozdov wrote: > > Dear Sir/Madam, > > We are running OpenStack at scale and now have a requirement to have private DNS and were wondering if the designate team have any appetite for this? If yes, then further discussion is warranted as we would be happy to get the ball rolling on this. 
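The "private view" idea Danny describes is essentially split-horizon DNS: answer the same query differently depending on which tenant asks. A toy illustration of the lookup order — hypothetical data shapes, unrelated to designate's real zone model:

```python
def resolve(name, client_project, zones):
    """Toy split-horizon lookup: consult the asking project's private
    view first, then fall back to the public view."""
    private = zones.get(("private", client_project), {})
    public = zones.get(("public", None), {})
    return private.get(name) or public.get(name)


zones = {
    # A zone visible only to one project (or a defined set of them).
    ("private", "team-a"): {"db.example.internal": "10.0.0.5"},
    # The externally facing view everyone sees.
    ("public", None): {"www.example.com": "203.0.113.7"},
}

assert resolve("db.example.internal", "team-a", zones) == "10.0.0.5"
assert resolve("db.example.internal", "team-b", zones) is None
assert resolve("www.example.com", "team-b", zones) == "203.0.113.7"
```

The hard part in a real deployment is not this lookup but the nova/neutron integration Danny mentions: deciding which view a query belongs to based on where it arrives from.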
> > Best Regards, > Sergey Drozdov > Software Engineer > The Hut Group Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com -------------- next part -------------- An HTML attachment was scrubbed...
URL: From haiwu.us at gmail.com Wed Jul 20 23:04:10 2022 From: haiwu.us at gmail.com (hai wu) Date: Wed, 20 Jul 2022 18:04:10 -0500 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> Message-ID: After installing some systemd package, and starting up machine.slice, systemd-machined, and hard rebooting the vm from openstack side, I could now see the VM showing up under machine.slice. all vms were showing up under libvirtd.service, which is under system.slice. What are the benefits of running libvirt managed guest instances under machine.slice? On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: > > On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > Is there any configuration file that is needed to ensure guest domains > > are under systemd machine.slice? not seeing anything under > > machine.slice .. > > I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > > > > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > wrote: > >> > >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. > >> > >> ??, 20 ???. 2022 ?., 21:52 hai wu : > >>> > >>> nova hypervisor sometimes oom would kill some openstack guests. > >>> > >>> Is it possible to not allow kernel to oom kill any openstack guests? > >>> ram is not oversubscribed much .. 
> >>> > From cboylan at sapwetik.org Wed Jul 20 23:15:47 2022 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 20 Jul 2022 16:15:47 -0700 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> Message-ID: <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > After installing some systemd package, and starting up machine.slice, > systemd-machined, and hard rebooting the vm from openstack side, I > could now see the VM showing up under machine.slice. all vms were > showing up under libvirtd.service, which is under system.slice. > > What are the benefits of running libvirt managed guest instances under > machine.slice? You can use machine.slice to set system resource options that each sub slice inherits. Those options are documented at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# (per my earlier link https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I don't see OOMScoreAdjust listed there so I am unsure if you can actually set it via this method. That all said, if you are oversubscribing memory this is likely to always be an issue. If you adjust the oom score for your VMs then the oomkiller is just going to find other victims to kill. Losing your nova compute agent or NetworkManager or iscsid may be just as problematic. Instead, I suspect that you may need to stop oversubscribing memory. > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: >> >> On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: >> > Is there any configuration file that is needed to ensure guest domains >> > are under systemd machine.slice? not seeing anything under >> > machine.slice .. >> >> I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. 
>> >> > >> > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov >> > wrote: >> >> >> >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. >> >> >> >> ??, 20 ???. 2022 ?., 21:52 hai wu : >> >>> >> >>> nova hypervisor sometimes oom would kill some openstack guests. >> >>> >> >>> Is it possible to not allow kernel to oom kill any openstack guests? >> >>> ram is not oversubscribed much .. >> >>> >> From haiwu.us at gmail.com Wed Jul 20 23:48:28 2022 From: haiwu.us at gmail.com (hai wu) Date: Wed, 20 Jul 2022 18:48:28 -0500 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> Message-ID: In this case there's no memory oversubscription. This oom killer event happened when we did "swapoff -a; swapon -a" to push processes in swap back to memory, which is very strange. On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan wrote: > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > After installing some systemd package, and starting up machine.slice, > > systemd-machined, and hard rebooting the vm from openstack side, I > > could now see the VM showing up under machine.slice. all vms were > > showing up under libvirtd.service, which is under system.slice. > > > > What are the benefits of running libvirt managed guest instances under > > machine.slice? > > You can use machine.slice to set system resource options that each sub slice inherits. Those options are documented at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# (per my earlier link https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I don't see OOMScoreAdjust listed there so I am unsure if you can actually set it via this method. 
> > That all said, if you are oversubscribing memory this is likely to always be an issue. If you adjust the oom score for your VMs then the oomkiller is just going to find other victims to kill. Losing your nova compute agent or NetworkManager or iscsid may be just as problematic. Instead, I suspect that you may need to stop oversubscribing memory. > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: > >> > >> On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > >> > Is there any configuration file that is needed to ensure guest domains > >> > are under systemd machine.slice? not seeing anything under > >> > machine.slice .. > >> > >> I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > >> > >> > > >> > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > >> > wrote: > >> >> > >> >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. > >> >> > >> >> ??, 20 ???. 2022 ?., 21:52 hai wu : > >> >>> > >> >>> nova hypervisor sometimes oom would kill some openstack guests. > >> >>> > >> >>> Is it possible to not allow kernel to oom kill any openstack guests? > >> >>> ram is not oversubscribed much .. > >> >>> > >> > From haiwu.us at gmail.com Thu Jul 21 01:25:24 2022 From: haiwu.us at gmail.com (hai wu) Date: Wed, 20 Jul 2022 20:25:24 -0500 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> Message-ID: You are correct, there's no way to set OOMScoreAdjust for machine.slice. It errored out when trying to do that, with "Unknown assignment" error.. On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote: > > In this case there's no memory oversubscription. 
This oom killer event > happened when we did "swapoff -a; swapon -a" to push processes in swap > back to memory, which is very strange. > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan wrote: > > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > > After installing some systemd package, and starting up machine.slice, > > > systemd-machined, and hard rebooting the vm from openstack side, I > > > could now see the VM showing up under machine.slice. all vms were > > > showing up under libvirtd.service, which is under system.slice. > > > > > > What are the benefits of running libvirt managed guest instances under > > > machine.slice? > > > > You can use machine.slice to set system resource options that each sub slice inherits. Those options are documented at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# (per my earlier link https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I don't see OOMScoreAdjust listed there so I am unsure if you can actually set it via this method. > > > > That all said, if you are oversubscribing memory this is likely to always be an issue. If you adjust the oom score for your VMs then the oomkiller is just going to find other victims to kill. Losing your nova compute agent or NetworkManager or iscsid may be just as problematic. Instead, I suspect that you may need to stop oversubscribing memory. > > > > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: > > >> > > >> On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > >> > Is there any configuration file that is needed to ensure guest domains > > >> > are under systemd machine.slice? not seeing anything under > > >> > machine.slice .. > > >> > > >> I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. 
> > >> > > >> > > > >> > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > >> > wrote: > > >> >> > > >> >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. > > >> >> > > >> >> ??, 20 ???. 2022 ?., 21:52 hai wu : > > >> >>> > > >> >>> nova hypervisor sometimes oom would kill some openstack guests. > > >> >>> > > >> >>> Is it possible to not allow kernel to oom kill any openstack guests? > > >> >>> ram is not oversubscribed much .. > > >> >>> > > >> > > From skaplons at redhat.com Thu Jul 21 08:18:11 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 21 Jul 2022 10:18:11 +0200 Subject: [all][TC] Bare rechecks Message-ID: <8925819.fYGDMWaf8X@p1> Hi, New stats from last 7 days about bare rechecks in each team are available in [1]: +--------------------+---------------+--------------+-------------------+ | Team | Bare rechecks | All Rechecks | Bare rechecks [%] | +--------------------+---------------+--------------+-------------------+ | skyline | 2 | 2 | 100.0 | | sahara | 2 | 2 | 100.0 | | trove | 4 | 4 | 100.0 | | tacker | 6 | 6 | 100.0 | | horizon | 25 | 25 | 100.0 | | magnum | 2 | 2 | 100.0 | | masakari | 2 | 2 | 100.0 | | Telemetry | 1 | 1 | 100.0 | | kolla | 71 | 74 | 95.95 | | OpenStack Charms | 16 | 17 | 94.12 | | requirements | 33 | 37 | 89.19 | | cinder | 137 | 163 | 84.05 | | kuryr | 5 | 6 | 83.33 | | OpenStack-Helm | 23 | 29 | 79.31 | | tripleo | 97 | 125 | 77.6 | | glance | 17 | 22 | 77.27 | | Puppet OpenStack | 23 | 30 | 76.67 | | ironic | 43 | 57 | 75.44 | | keystone | 6 | 8 | 75.0 | | octavia | 32 | 44 | 72.73 | | swift | 8 | 11 | 72.73 | | manila | 27 | 38 | 71.05 | | OpenStackSDK | 17 | 24 | 70.83 | | oslo | 12 | 17 | 70.59 | | ec2-api | 2 | 3 | 66.67 | | Quality Assurance | 13 | 20 | 65.0 | | neutron | 33 | 52 | 63.46 | | nova | 84 | 133 | 63.16 | | heat | 3 | 5 | 60.0 | | Release Management | 1 | 2 | 50.0 | | designate | 4 | 8 | 50.0 | | 
barbican | 2 | 12 | 16.67 | | OpenStackAnsible | 0 | 1 | 0.0 | +--------------------+---------------+--------------+-------------------+ [1] https://etherpad.opendev.org/p/recheck-weekly-summary -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From amonster369 at gmail.com Thu Jul 21 08:39:05 2022 From: amonster369 at gmail.com (A Monster) Date: Thu, 21 Jul 2022 09:39:05 +0100 Subject: Glance api deployed only on a single controller on multi-controller deployment [kolla] Message-ID: I've deployed OpenStack Xena using Kolla Ansible on a CentOS 8 Stream cluster with two controller nodes. However, I found out after the deployment that the Glance API is not available on one node. I tried redeploying but got the same behavior, although the deployment finished without displaying any error. Thank you. Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From gthiemonge at redhat.com Thu Jul 21 08:57:35 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Thu, 21 Jul 2022 10:57:35 +0200 Subject: [all][TC] Bare rechecks In-Reply-To: <8925819.fYGDMWaf8X@p1> References: <8925819.fYGDMWaf8X@p1> Message-ID: On Thu, Jul 21, 2022 at 10:38 AM Slawek Kaplonski wrote: > Hi, > > New stats from last 7 days about bare rechecks in each team are available > in [1]: > > > | octavia | 32 | 44 | 72.73 | > There's something wrong with the script, we didn't trigger 44 rechecks last week. But there are a total of 44 rechecks in the changes that have been updated in the last 7 days. IMHO the script should only count the rechecks triggered in the last 7 days.
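Gregory's suggestion — count only the recheck events whose own timestamps fall inside the window, rather than every recheck on any change updated in the window — comes down to a simple date filter. A tiny sketch with a hypothetical data shape, not the actual stats script:

```python
from datetime import datetime, timedelta

def rechecks_in_window(events, now, days=7):
    """Count recheck events whose own timestamp falls in the window,
    rather than every recheck on any change touched in the window."""
    cutoff = now - timedelta(days=days)
    return sum(1 for ts in events if ts >= cutoff)

now = datetime(2022, 7, 21)
events = [now - timedelta(days=d) for d in (1, 3, 10, 30)]
assert rechecks_in_window(events, now) == 2  # only the 1- and 3-day-old events
```

Counting per-event timestamps instead of per-change activity would avoid inflating teams whose old rechecks sit on recently-updated changes, which is the octavia discrepancy reported above.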
> > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pierre at stackhpc.com Thu Jul 21 09:41:55 2022 From: pierre at stackhpc.com (Pierre Riteau) Date: Thu, 21 Jul 2022 11:41:55 +0200 Subject: Glance api deployed only on a single controller on multi-controller deployment [kolla] In-Reply-To: References: Message-ID: With the default backend (file), Glance is deployed on a single controller, because it uses a local Docker volume to store Glance images. This is explained in the documentation [1]: "By default when using file backend only one glance-api container can be running". See also the definition of glance_api_hosts in ansible/group_vars/all.yml. If you set glance_file_datadir_volume to a non-default path, it is assumed to be on shared storage and kolla-ansible will automatically use all glance-api group members. You can also switch to another backend such as Ceph or Swift. [1] https://docs.openstack.org/kolla-ansible/latest/reference/shared-services/glance-guide.html On Thu, 21 Jul 2022 at 10:57, A Monster wrote: > I've deployed openstack xena using kolla ansible on a centos 8 stream > cluster, using two controller nodes, however I found out after the > deployment that glance api is not available in one node, I tried > redeploying but I got the same behavior, although the deployment finished > without displaying any error. > > Thank you. Regards > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Thu Jul 21 10:45:19 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Thu, 21 Jul 2022 12:45:19 +0200 Subject: [all][TC] Bare rechecks In-Reply-To: References: <8925819.fYGDMWaf8X@p1> Message-ID: <3319614.ZACRpqI0Em@p1> Hi, On Thursday, 21 July 2022 10:57:35 CEST Gregory Thiemonge wrote: > On Thu, Jul 21, 2022 at 10:38 AM Slawek Kaplonski > wrote: > > > Hi, > > > > New stats from last 7 days about bare rechecks in each team are available > > in [1]: > > > > > > | octavia | 32 | 44 | 72.73 | > > > > There's something wrong with the script, we didn't trigger 44 rechecks last > week. But there are a total of 44 rechecks in the changes that have been > updated in the last 7 days. IMHO the script should only count the rechecks > triggered in the last 7 days. Yes, I need to update it but haven't had time yet. > > > > > > -- > > > > Slawek Kaplonski > > > > Principal Software Engineer > > > > Red Hat > > > -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From smooney at redhat.com Thu Jul 21 11:42:37 2022 From: smooney at redhat.com (Sean Mooney) Date: Thu, 21 Jul 2022 12:42:37 +0100 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> Message-ID: <43b7e69240f80666813945ef9aab408b85feefdb.camel@redhat.com> On Wed, 2022-07-20 at 20:25 -0500, hai wu wrote: > You are correct, there's no way to set OOMScoreAdjust for > machine.slice. It errored out when trying to do that, with "Unknown > assignment" error.. If you mess with the cgroups behind nova's back, then any hope of support you have with your vendor or upstream is gone. You should really find out why you are running out of memory. It usually means you have not configured nova and the host correctly. Most often this happens because people use CPU pinning without enabling per-NUMA-node memory tracking by setting a page size. It could also be because you have not allocated enough swap. So before you try to adjust things with cgroups yourself or explore other options, you should determine why the host is running out of memory. If you prevent it from killing the guests, I have seen it kill OVS or nova itself before, where the guests were unkillable or unlikely to be killed because they used hugepages. So you will likely just shift the problem elsewhere, where it will be more impactful. > > On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote: > > > > In this case there's no memory oversubscription. This oom killer event > > happened when we did "swapoff -a; swapon -a" to push processes in swap > > back to memory, which is very strange. > > > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan wrote: > > > > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > > > After installing some systemd package, and starting up machine.slice, > > > > systemd-machined, and hard rebooting the vm from openstack side, I > > > > could now see the VM showing up under machine.slice. all vms were > > > > showing up under libvirtd.service, which is under system.slice. > > > > > > > > What are the benefits of running libvirt managed guest instances under > > > > machine.slice? > > > > > > You can use machine.slice to set system resource options that each sub slice inherits. Those options are documented at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# (per my earlier link https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I don't see OOMScoreAdjust listed there so I am unsure if you can actually set it via this method. > > > > > > That all said, if you are oversubscribing memory this is likely to always be an issue. If you adjust the oom score for your VMs then the oomkiller is just going to find other victims to kill. Losing your nova compute agent or NetworkManager or iscsid may be just as problematic. Instead, I suspect that you may need to stop oversubscribing memory. > > > > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: > > >> > > >> On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > >> > Is there any configuration file that is needed to ensure guest domains > > >> > are under systemd machine.slice? not seeing anything under > > >> > machine.slice .. > > >> > > >> I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > > >> > > >> > > > >> > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > >> > wrote: > > >> >> > > >> >> I believe you can decrease OOMScoreAdjust for systemd machines.slice, under which guest domains are to reduce chances of oom killing them. > > >> >> > > >> >> On Wed, Jul 20, 2022 at 21:52, hai wu wrote: > > >> >>> > > >> >>> nova hypervisor sometimes oom would kill some openstack guests. > > >> >>> > > >> >>> Is it possible to not allow kernel to oom kill any openstack guests? > > >> >>> ram is not oversubscribed much .. > > >> > > From senrique at redhat.com Thu Jul 21 13:31:25 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 21 Jul 2022 10:31:25 -0300 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Hi Kendall, I'm interested in mentoring again! Sofia On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson wrote: > Hello Everyone! > > We are again signed up to participate in Open Source Day at the Grace > Hopper Conference. It's a virtual event, being held on Friday, September > 16, 2022, from 8am to 3pm Pacific Time. > > If you are interested in mentoring for this one day event, please let me > know ASAP. I am supposed to give them a list of mentors by the end of this > week. > > Day of, we will essentially get participants to set up a dev environment > (gerrit, etc) and work on a bug and get it pushed. At this point I was > thinking of making use of gaps in the SDK/OSC, but if your project has some > low hanging fruit that you want to bring along, that works too! > > Looking forward to working with you!! > > -Kendall (diablo_rojo) > > > -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Thu Jul 21 15:14:35 2022 From: helena at openstack.org (Helena Spease) Date: Thu, 21 Jul 2022 10:14:35 -0500 Subject: And the winner is…. In-Reply-To: References: Message-ID: Hello everyone! Voting for the next OpenStack release name has ended and we have a winner! Get excited for the OpenStack Antelope release! Thank you to everyone who voted and helped us pick the next name. Thank you, Helena > On Jul 18, 2022, at 12:28 PM, Helena Spease wrote: > > Hello everyone! > > We are so excited to announce that voting for the next OpenStack release name has opened! A few popular choices, like Aardvark, unfortunately, did not pass trademark checks. > > Here are your finalists: > Anchovy - boring can be delicious! also a town in Jamaica > Anteater - an animal where form clearly follows function > Antelope - swift and gracious, also a type of steam locomotive > > Submit your vote by July 20th at 11:59pm PT (July 21st 6:59 UTC) and help us pick the next OpenStack release name! > > https://civs1.civs.us/cgi-bin/vote.pl?id=E_2b6c69494a6d3222&akey=d3350c7bda8bad74 > > Thank you, > Helena > -------------- next part -------------- An HTML attachment was scrubbed...
it ususllay means you have not configured nova and the host correctly. most often this hapens becuase peopel use cpu pinning wiht out enable per numa node memory memory tracking by setting a page size. it also could be because you have not allcoated enough swap. so before you try to adjust things with cgroups yourslef or explore other options you shoudl determin why the host is runnign out of memroy. if you prevent ti from kill the gues i have see it kill ovs or nova iteslf before where the guest were unkillable or unlkely to be killed because they used hugepages. so you will likely jsut shift the problem else where that will be more impactful. > > On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote: > > > > In this case there's no memory oversubscription. This oom killer event > > happened when we did "swapoff -a; swapon -a" to push processes in swap > > back to memory, which is very strange. > > > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan wrote: > > > > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > > > After installing some systemd package, and starting up machine.slice, > > > > systemd-machined, and hard rebooting the vm from openstack side, I > > > > could now see the VM showing up under machine.slice. all vms were > > > > showing up under libvirtd.service, which is under system.slice. > > > > > > > > What are the benefits of running libvirt managed guest instances under > > > > machine.slice? > > > > > > You can use machine.slice to set system resource options that each sub slice inherits. Those options are documented at https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# (per my earlier link https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I don't see OOMScoreAdjust listed there so I am unsure if you can actually set it via this method. > > > > > > That all said, if you are oversubscribing memory this is likely to always be an issue. 
If you adjust the OOM score for your VMs then the OOM killer is just going to find other victims to kill. Losing your nova compute agent or NetworkManager or iscsid may be just as problematic. Instead, I suspect that you may need to stop oversubscribing memory. > > > > > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote: > > > > > > > > > > On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > > > > > Is there any configuration file that is needed to ensure guest domains > > > > > > are under systemd machine.slice? Not seeing anything under > > > > > > machine.slice .. > > > > > > > > > > I think that https://www.freedesktop.org/software/systemd/man/systemd.slice.html and https://libvirt.org/cgroups.html cover this for libvirt managed VMs. > > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > > > > > wrote: > > > > > > > > > > > > > > I believe you can decrease OOMScoreAdjust for the systemd machine.slice, under which the guest domains are, to reduce the chances of OOM killing them. > > > > > > > > > > > > > > On Wed, 20 Jul 2022, 21:52, hai wu wrote: > > > > > > > > > > > > > > > > The nova hypervisor OOM killer sometimes kills some OpenStack guests. > > > > > > > > > > > > > > > > Is it possible to not allow the kernel to OOM kill any OpenStack guests? > > > > > > > > RAM is not oversubscribed much .. > > > > > > > > > From senrique at redhat.com Thu Jul 21 13:31:25 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Thu, 21 Jul 2022 10:31:25 -0300 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Hi Kendall, I'm interested in mentoring again! Sofia On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson wrote: > Hello Everyone! > > We are again signed up to participate in Open Source Day at the Grace > Hopper Conference. It's a virtual event, being held on Friday, September > 16, 2022, from 8am to 3pm Pacific Time.
> > If you are interested in mentoring for this one day event, please let me > know ASAP. I am supposed to give them a list of mentors by the end of this > week. > > Day of, we will essentially get participants to setup a dev environment > (gerrit, etc) and work on a bug and get it pushed. At this point I was > thinking of making use of gaps in the SDK/OSC, but if your project has some > low hanging fruit that you want to bring along, that works too! > > Looking forward to working with you!! > > -Kendall (diablo_rojo) > > > -- Sof?a Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat Red Hat Red Hat -------------- next part -------------- An HTML attachment was scrubbed... URL: From helena at openstack.org Thu Jul 21 15:14:35 2022 From: helena at openstack.org (Helena Spease) Date: Thu, 21 Jul 2022 10:14:35 -0500 Subject: =?utf-8?B?QW5kIHRoZSB3aW5uZXIgaXPigKYu?= In-Reply-To: References: Message-ID: Hello everyone! Voting for the next OpenStack release name has ended and we have a winner! Get excited for the OpenStack Antelope release! Thank you to everyone who voted and helped us pick the next name. Thank you, Helena > On Jul 18, 2022, at 12:28 PM, Helena Spease wrote: > > Hello everyone! > > We are so excited to announce that voting for the next OpenStack release name has opened! A few popular choices, like Aardvark, unfortunately, did not pass trademark checks. > > Here are your finalists: > Anchovy - boring can be delicious! also a town in Jamaica > Anteater - an animal where form clearly follows function > Antelope - swift and gracious, also a type of steam locomotive > > Submit your vote by July 20th at 11:59pm PT (July 21st 6:59 UTC) and help us pick the next OpenStack release name! > > https://civs1.civs.us/cgi-bin/vote.pl?id=E_2b6c69494a6d3222&akey=d3350c7bda8bad74 > > Thank you, > Helena > -------------- next part -------------- An HTML attachment was scrubbed... 
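A footnote to the hypervisor OOM thread earlier in this digest: `OOMScoreAdjust=` is a per-service execution setting, which is why setting it on a slice fails with "Unknown assignment", but slices do accept the cgroup memory knobs from systemd.resource-control(5). A minimal sketch of a drop-in that gives guests under machine.slice some memory protection — the path and values below are illustrative assumptions, not recommendations:

```ini
# /etc/systemd/system/machine.slice.d/10-memory.conf (hypothetical drop-in)
[Slice]
# Ask the kernel to prefer reclaiming memory from other slices first.
MemoryLow=64G
# Hard cap so runaway guests hit their own limit rather than the host OOM killer.
MemoryMax=112G
```

Apply with `systemctl daemon-reload`. As noted in the thread, this only shifts where the pressure lands; fixing per-NUMA memory accounting (page size) or swap sizing is the real remedy.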
URL: From ashrodri at redhat.com Thu Jul 21 15:14:56 2022 From: ashrodri at redhat.com (Ashley Rodriguez) Date: Thu, 21 Jul 2022 11:14:56 -0400 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Hey Kendall, Sign me up as well. I had a great time last year and I'm excited to participate again. Best, Ashley Rodriguez On Thu, Jul 21, 2022 at 9:48 AM Sofia Enriquez wrote: > Hi Kendall, > I'm interested in mentoring again! > Sofia > > On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson > wrote: > >> Hello Everyone! >> >> We are again signed up to participate in Open Source Day at the Grace >> Hopper Conference. It's a virtual event, being held on Friday, September >> 16, 2022, from 8am to 3pm Pacific Time. >> >> If you are interested in mentoring for this one day event, please let me >> know ASAP. I am supposed to give them a list of mentors by the end of this >> week. >> >> Day of, we will essentially get participants to setup a dev environment >> (gerrit, etc) and work on a bug and get it pushed. At this point I was >> thinking of making use of gaps in the SDK/OSC, but if your project has some >> low hanging fruit that you want to bring along, that works too! >> >> Looking forward to working with you!! >> >> -Kendall (diablo_rojo) >> >> >> > > > -- > > Sof?a Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Jul 21 15:52:51 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 21 Jul 2022 17:52:51 +0200 Subject: [neutron] Drivers meeting agenda - 22.07.2022. Message-ID: Hi Neutron Drivers, The agenda for tomorrow's drivers meeting is at [1]. 
* [rfe][ovn] Support address group for ovn driver (#link [rfe][ovn] Support address group for ovn driver ) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernandoperches at gmail.com Thu Jul 21 16:49:41 2022 From: fernandoperches at gmail.com (Fernando Ferraz) Date: Thu, 21 Jul 2022 13:49:41 -0300 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Hi Kendall, I?m interested in mentoring this year too. :) Fernando On Thu, 21 Jul 2022 at 12:35 Ashley Rodriguez wrote: > Hey Kendall, > Sign me up as well. I had a great time last year and I'm excited to > participate again. > Best, > Ashley Rodriguez > > On Thu, Jul 21, 2022 at 9:48 AM Sofia Enriquez > wrote: > >> Hi Kendall, >> I'm interested in mentoring again! >> Sofia >> >> On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson >> wrote: >> >>> Hello Everyone! >>> >>> We are again signed up to participate in Open Source Day at the Grace >>> Hopper Conference. It's a virtual event, being held on Friday, >>> September 16, 2022, from 8am to 3pm Pacific Time. >>> >>> If you are interested in mentoring for this one day event, please let me >>> know ASAP. I am supposed to give them a list of mentors by the end of this >>> week. >>> >>> Day of, we will essentially get participants to setup a dev environment >>> (gerrit, etc) and work on a bug and get it pushed. At this point I was >>> thinking of making use of gaps in the SDK/OSC, but if your project has some >>> low hanging fruit that you want to bring along, that works too! >>> >>> Looking forward to working with you!! 
>>> >>> -Kendall (diablo_rojo) >>> >>> >>> >> >> >> -- >> >> Sof?a Enriquez >> >> she/her >> >> Software Engineer >> >> Red Hat PnT >> >> IRC: @enriquetaso >> @RedHat Red Hat >> Red Hat >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From mmilan2006 at gmail.com Thu Jul 21 17:07:43 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Thu, 21 Jul 2022 22:37:43 +0530 Subject: Manila/NFS support for Zun In-Reply-To: References: Message-ID: Thank you for your response. On Thu, Jul 21, 2022 at 1:25 AM Carlos Silva wrote: > Hello, sorry for the late reply > > Em s?b., 16 de jul. de 2022 ?s 13:28, Vaibhav > escreveu: > >> Hi, >> >> I want to mount my Manila shares on containers managed by Zun. >> >> I can see a Fuxi project and driver for this but it is discontinued now. >> >> I want to have a shared file system to be mounted on multiple containers >> simultaneously, it is not possible with cinder. >> >> Is there any alternative to Fuxi? >> > I can't think of many by looking at the use case. Isn't there anything on > Zun itself that allows the shares to be mounted directly in the containers? > No, Zun gives only option of cinder volumes to be mounted. With cinder I am not able to have a shared file system among the containers, which I want to have. > Or can it work with yoga release ? >> >> I would not say so, as the project is no longer maintained and the > latest commit is from 2017. > >> Please advise and give a suggestion. >> >> Regards, >> Vaibhav >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ces.eduardo98 at gmail.com Thu Jul 21 17:27:24 2022 From: ces.eduardo98 at gmail.com (Carlos Silva) Date: Thu, 21 Jul 2022 14:27:24 -0300 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Hey, Kendall! I'm interested in mentoring too. 
I brought this up during today's manila weekly meeting and I think we may have 1 or 2 more mentors willing to participate. Thanks, carloss Em qui., 21 de jul. de 2022 ?s 13:56, Fernando Ferraz < fernandoperches at gmail.com> escreveu: > Hi Kendall, > > I?m interested in mentoring this year too. :) > > Fernando > > On Thu, 21 Jul 2022 at 12:35 Ashley Rodriguez wrote: > >> Hey Kendall, >> Sign me up as well. I had a great time last year and I'm excited to >> participate again. >> Best, >> Ashley Rodriguez >> >> On Thu, Jul 21, 2022 at 9:48 AM Sofia Enriquez >> wrote: >> >>> Hi Kendall, >>> I'm interested in mentoring again! >>> Sofia >>> >>> On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson >>> wrote: >>> >>>> Hello Everyone! >>>> >>>> We are again signed up to participate in Open Source Day at the Grace >>>> Hopper Conference. It's a virtual event, being held on Friday, >>>> September 16, 2022, from 8am to 3pm Pacific Time. >>>> >>>> If you are interested in mentoring for this one day event, please let >>>> me know ASAP. I am supposed to give them a list of mentors by the end of >>>> this week. >>>> >>>> Day of, we will essentially get participants to setup a dev environment >>>> (gerrit, etc) and work on a bug and get it pushed. At this point I was >>>> thinking of making use of gaps in the SDK/OSC, but if your project has some >>>> low hanging fruit that you want to bring along, that works too! >>>> >>>> Looking forward to working with you!! >>>> >>>> -Kendall (diablo_rojo) >>>> >>>> >>>> >>> >>> >>> -- >>> >>> Sof?a Enriquez >>> >>> she/her >>> >>> Software Engineer >>> >>> Red Hat PnT >>> >>> IRC: @enriquetaso >>> @RedHat Red Hat >>> Red Hat >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From the.wade.albright at gmail.com Wed Jul 20 21:04:22 2022 From: the.wade.albright at gmail.com (Wade Albright) Date: Wed, 20 Jul 2022 14:04:22 -0700 Subject: [ironic][xena] problems updating redfish_password for existing node In-Reply-To: References: Message-ID: I forgot to mention, that using session auth solved the problem after upgrading to the newer versions that include the two mentioned patches. On Wed, Jul 20, 2022 at 7:36 AM Wade Albright wrote: > Switching to session auth solved the problem, and it seems like the better > way to go anyway for equipment that supports it. Thanks again for all your > help! > > Wade > > On Tue, Jul 19, 2022 at 5:37 PM Julia Kreger > wrote: > >> Just to provide a brief update for the mailing list. It looks like >> this is a case of use of Basic Auth with the BMC, where we were not >> catching the error properly... and thus not reporting the >> authentication failure to ironic so it would catch, and initiate a new >> client with the most up to date password. The default, typically used >> path is Session based authentication as BMCs generally handle internal >> session/user login tracking in a far better fashion. But not every BMC >> supports sessions. >> >> Fix in review[0] :) >> >> -Julia >> [0] https://review.opendev.org/c/openstack/sushy/+/850425 >> >> On Mon, Jul 18, 2022 at 4:15 PM Julia Kreger >> wrote: >> > >> > Excellent, hopefully I'll be able to figure out why Sushy is not doing >> > the needful... Or if it is and Ironic is not picking up on it. >> > >> > Anyway, I've posted >> > https://review.opendev.org/c/openstack/ironic/+/850259 which might >> > handle this issue. Obviously a work in progress, but it represents >> > what I think is happening inside of ironic itself leading into sushy >> > when cache access occurs. >> > >> > On Mon, Jul 18, 2022 at 4:04 PM Wade Albright >> > wrote: >> > > >> > > Sounds good, I will do that tomorrow. Thanks Julia. 
>> > > >> > > On Mon, Jul 18, 2022 at 3:27 PM Julia Kreger < >> juliaashleykreger at gmail.com> wrote: >> > >> >> > >> Debug would be best. I think I have an idea what is going on, and >> this >> > >> is a similar variation. If you want, you can email them directly to >> > >> me. Specifically only need entries reported by the sushy library and >> > >> ironic.drivers.modules.redfish.utils. >> > >> >> > >> On Mon, Jul 18, 2022 at 3:20 PM Wade Albright >> > >> wrote: >> > >> > >> > >> > I'm happy to supply some logs, what verbosity level should i use? >> And should I just embed the logs in email to the list or upload somewhere? >> > >> > >> > >> > On Mon, Jul 18, 2022 at 3:14 PM Julia Kreger < >> juliaashleykreger at gmail.com> wrote: >> > >> >> >> > >> >> If you could supply some conductor logs, that would be helpful. It >> > >> >> should be re-authenticating, but obviously we have a larger bug >> there >> > >> >> we need to find the root issue behind. >> > >> >> >> > >> >> On Mon, Jul 18, 2022 at 3:06 PM Wade Albright >> > >> >> wrote: >> > >> >> > >> > >> >> > I was able to use the patches to update the code, but >> unfortunately the problem is still there for me. >> > >> >> > >> > >> >> > I also tried an RPM upgrade to the versions Julia mentioned had >> the fixes, namely Sushy 3.12.1 - Released May 2022 and Ironic 18.2.1 - >> Released in January 2022. But it did not fix the problem. >> > >> >> > >> > >> >> > I am able to consistently reproduce the error. >> > >> >> > - step 1: change BMC password directly on the node itself >> > >> >> > - step 2: update BMC password (redfish_password) in ironic >> with 'openstack baremetal node set --driver-info >> redfish_password='newpass' >> > >> >> > >> > >> >> > After step 1 there are errors in the logs entries like "Session >> authentication appears to have been lost at some point in time" and >> eventually it puts the node into maintenance mode and marks the power state >> as "none." 
>> > >> >> > After step 2 and taking the host back out of maintenance mode, >> it goes through a similar set of log entries puts the node into MM again. >> > >> >> > >> > >> >> > After the above steps, a conductor restart fixes the problem >> and operations work normally again. Given this it seems like there is still >> some kind of caching issue. >> > >> >> > >> > >> >> > On Sat, Jul 16, 2022 at 6:01 PM Wade Albright < >> the.wade.albright at gmail.com> wrote: >> > >> >> >> >> > >> >> >> Hi Julia, >> > >> >> >> >> > >> >> >> Thank you so much for the reply! Hopefully this is the issue. >> I'll try out the patches next week and report back. I'll also email you on >> Monday about the versions, that would be very helpful to know. >> > >> >> >> >> > >> >> >> Thanks again, really appreciate it. >> > >> >> >> >> > >> >> >> Wade >> > >> >> >> >> > >> >> >> >> > >> >> >> >> > >> >> >> On Sat, Jul 16, 2022 at 4:36 PM Julia Kreger < >> juliaashleykreger at gmail.com> wrote: >> > >> >> >>> >> > >> >> >>> Greetings! >> > >> >> >>> >> > >> >> >>> I believe you need two patches, one in ironic and one in >> sushy. >> > >> >> >>> >> > >> >> >>> Sushy: >> > >> >> >>> https://review.opendev.org/c/openstack/sushy/+/832860 >> > >> >> >>> >> > >> >> >>> Ironic: >> > >> >> >>> https://review.opendev.org/c/openstack/ironic/+/820588 >> > >> >> >>> >> > >> >> >>> I think it is variation, and the comment about working after >> you restart the conductor is the big signal to me. I?m on a phone on a bad >> data connection, if you email me on Monday I can see what versions the >> fixes would be in. >> > >> >> >>> >> > >> >> >>> For the record, it is a session cache issue, the bug was that >> the service didn?t quite know what to do when auth fails. 
>> > >> >> >>> >> > >> >> >>> -Julia >> > >> >> >>> >> > >> >> >>> >> > >> >> >>> On Fri, Jul 15, 2022 at 2:55 PM Wade Albright < >> the.wade.albright at gmail.com> wrote: >> > >> >> >>>> >> > >> >> >>>> Hi, >> > >> >> >>>> >> > >> >> >>>> I'm hitting a problem when trying to update the >> redfish_password for an existing node. I'm curious to know if anyone else >> has encountered this problem. I'm not sure if I'm just doing something >> wrong or if there is a bug. Or if the problem is unique to my setup. >> > >> >> >>>> >> > >> >> >>>> I have a node already added into ironic with all the driver >> details set, and things are working fine. I am able to run deployments. >> > >> >> >>>> >> > >> >> >>>> Now I need to change the redfish password on the host. So I >> update the password for redfish access on the host, then use an 'openstack >> baremetal node set --driver-info redfish_password=' command >> to set the new redfish_password. >> > >> >> >>>> >> > >> >> >>>> Once this has been done, deployment no longer works. I see >> redfish authentication errors in the logs and the operation fails. I waited >> a bit to see if there might just be a delay in updating the password, but >> after awhile it still didn't work. >> > >> >> >>>> >> > >> >> >>>> I restarted the conductor, and after that things work fine >> again. So it seems like the password is cached or something. Is there a way >> to force the password to update? I even tried removing the redfish >> credentials and re-adding them, but that didn't work either. Only a >> conductor restart seems to make the new password work. >> > >> >> >>>> >> > >> >> >>>> We are running Xena, using rpm installation on Oracle Linux >> 8.5. >> > >> >> >>>> >> > >> >> >>>> Thanks in advance for any help with this issue. >> > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From gk.coltech at gmail.com Thu Jul 21 10:32:38 2022 From: gk.coltech at gmail.com (Tan Tran Trong) Date: Thu, 21 Jul 2022 17:32:38 +0700 Subject: [kolla] RabbitMQ High Availability Message-ID: Hello, I'm trying to figure out how to configure RabbitMQ to make it high available. I have 3 controller nodes and 2 compute nodes, deployed with kolla with mostly default configuration. The RabbitMQ set to ha-all for all queues on all nodes, amqp_durable_queues = True My problem is when I shutdown 1 controller node (or 1 RabbitMQ container) (master or slave) the whole cluster becomes unstable. Some instances can not be created, it is stuck on Scheduling, Block Device Mapping, the volumes not shown or are stuck on creating, the compute node reported dead randomly,... I'm looking for documentation to know how Openstack using RabbitMQ, Openstack behavior when RabbitMQ node down and way to make RabbitMQ HA in a stable way. Do you have any recommendation? TIA, Tan -------------- next part -------------- An HTML attachment was scrubbed... URL: From wu.wenxiang at 99cloud.net Thu Jul 21 22:29:23 2022 From: wu.wenxiang at 99cloud.net (=?UTF-8?B?5ZC05paH55u4?=) Date: Fri, 22 Jul 2022 06:29:23 +0800 Subject: [dev] directions to the right project team In-Reply-To: References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> Message-ID: <80792012-4EEA-4399-B1BC-CA69C1FD8AAD@99cloud.net> 1. Skyline UX support project pagination from the very beginning since we also meet this ?too many keystone projects? issue in previous projects 2. https://bugs.launchpad.net/skyline-apiserver/+bug/1972736 skyline SSO support is on the way, & supposed could finish before August, 15th. Thanks Best Regards Wenxiang Wu From: on behalf of Danny Webb Date: Tuesday, July 19, 2022 at 21:49 To: Julia Kreger Cc: Dmitriy Rabotyagov , openstack-discuss Subject: Re: [dev] directions to the right project team It was a conscious decision by the keystone team back in 2015 as far as we can tell. 
There was a large discussion on the mailing list regarding this that seemed to have left this issue unresolved (most specifically around user pagination, but this seems to have affected project / domain elements as well). https://lists.openstack.org/pipermail/openstack-dev/2015-August/thread.html#72082 Ultimately what we're seeking to do is start a discussion within the keystone / horizon (and potentially skyline) community about how we can rectify the current issues we're facing around the usability of the UX portion of keystone elements. Eg, if pagination isn't the way the keystone community wants to go should we look at instead having dynamic filtering instead in the UI? Are there any other options that people can think of that might be a better way forward? From: Julia Kreger Sent: 19 July 2022 14:21 To: Danny Webb Cc: Dmitriy Rabotyagov ; openstack-discuss Subject: Re: [dev] directions to the right project team CAUTION: This email originates from outside THG On Tue, Jul 19, 2022 at 3:44 AM Danny Webb wrote: > > Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. > This is quite concerning, and a quick look at the code confirms it. Mostly. There are remnants of "hints" and SQL Query filtering, but the internal limit is just a truncation which seems bad as well. https://github.com/openstack/keystone/blame/d7b1d57cae738183f8d85413e942402a8a4efb31/keystone/server/flask/common.py#L675 This seems like a fundamental performance oriented feature because the overhead in data conversion can be quite a bit when you have a large number of well... any objects being returned from a database. Does anyone know if a bug is open for this issue? 
Danny Webb Principal OpenStack Engineer The Hut Group Tel: Email: Danny.Webb at thehutgroup.com For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. Confidentiality Notice This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. Encryptions and Viruses Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. Monitoring Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. -------------- next part -------------- An HTML attachment was scrubbed...
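For the keystone pagination gap Danny and Julia discuss above, the practical workaround is to page through results on the client or UI side. A minimal sketch of the limit/marker pagination style used by other OpenStack APIs, in generic Python over an already-sorted collection — this is not keystone's API (v3 not honouring such parameters is the point of the thread):

```python
def paginate(items, limit, marker=None):
    """Return one page of `items` after `marker`, plus the next marker.

    `items` is a sorted list of dicts with an "id" key; `marker` is the
    id of the last item on the previous page, or None for the first page.
    """
    start = 0
    if marker is not None:
        # Resume just past the marker item.
        start = next(i for i, it in enumerate(items) if it["id"] == marker) + 1
    page = items[start:start + limit]
    # Only hand back a marker if there are more items to fetch.
    more = len(page) == limit and start + limit < len(items)
    next_marker = page[-1]["id"] if more else None
    return page, next_marker
```

A UI could equally apply server-side-style filtering (name prefix, domain) before paging; either way the full project list never has to be rendered at once.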
URL: From tony at bakeyournoodle.com Fri Jul 22 00:05:03 2022 From: tony at bakeyournoodle.com (Tony Breeds) Date: Fri, 22 Jul 2022 10:05:03 +1000 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: On Wed, 20 Jul 2022 at 09:29, Kendall Nelson wrote: > > Hello Everyone! > > We are again signed up to participate in Open Source Day at the Grace Hopper Conference. It's a virtual event, being held on Friday, September 16, 2022, from 8am to 3pm Pacific Time. > > If you are interested in mentoring for this one day event, please let me know ASAP. I am supposed to give them a list of mentors by the end of this week. I'm keen to help. It'll be good to get back into the community. Yours Tony. From satish.txt at gmail.com Fri Jul 22 04:06:12 2022 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 22 Jul 2022 00:06:12 -0400 Subject: ovn-bgp-agent installation issue Message-ID: Folks, I am trying to create lab of of ovn-bgp-agent using this blog https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ So far everything went well but I'm stuck at the bgp-agent installation and I encounter following error when running bgp-agent. Any suggestions? root at rack-1-host-2:/home/vagrant/bgp-agent# bgp-agent 2022-07-22 04:02:39.123 111551 INFO bgp_agent.config [-] Logging enabled! 
2022-07-22 04:02:39.475 111551 CRITICAL bgp-agent [-] Unhandled error: AssertionError 2022-07-22 04:02:39.475 111551 ERROR bgp-agent Traceback (most recent call last): 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/bin/bgp-agent", line 10, in 2022-07-22 04:02:39.475 111551 ERROR bgp-agent sys.exit(start()) 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 76, in start 2022-07-22 04:02:39.475 111551 ERROR bgp-agent bgp_agent_launcher = service.launch(config.CONF, BGPAgent()) 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 44, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self.agent_driver = driver_api.AgentDriverBase.get_instance( 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/driver_api.py", line 25, in get_instance 2022-07-22 04:02:39.475 111551 ERROR bgp-agent agent_driver = stevedore_driver.DriverManager( 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(DriverManager, self).__init__( 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 78, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent extensions = self._load_plugins(invoke_on_load, 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 221, in _load_plugins 2022-07-22 04:02:39.475 111551 ERROR bgp-agent ext = self._load_one_plugin(ep, 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 156, in _load_one_plugin 2022-07-22 04:02:39.475 111551 ERROR bgp-agent return super(NamedExtensionManager, self)._load_one_plugin( 2022-07-22 04:02:39.475 
111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 257, in _load_one_plugin 2022-07-22 04:02:39.475 111551 ERROR bgp-agent obj = plugin(*invoke_args, **invoke_kwds) 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/ovn_bgp_driver.py", line 64, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self._sb_idl = ovn.OvnSbIdl( 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", line 62, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnSbIdl, self).__init__( 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", line 31, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnIdl, self).__init__(remote, schema) 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 283, in __init__ 2022-07-22 04:02:39.475 111551 ERROR bgp-agent schema = schema_helper.get_idl_schema() 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2323, in get_idl_schema 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self._keep_table_columns(schema, table, columns)) 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2330, in _keep_table_columns 2022-07-22 04:02:39.475 111551 ERROR bgp-agent assert table_name in schema.tables 2022-07-22 04:02:39.475 111551 ERROR bgp-agent AssertionError 2022-07-22 04:02:39.475 111551 ERROR bgp-agent After googling I found one more agent at https://opendev.org/x/ovn-bgp-agent and its also throwing an error. Which agent should I be using? root at rack-1-host-2:~# ovn-bgp-agent 2022-07-22 04:04:36.780 111761 INFO ovn_bgp_agent.config [-] Logging enabled! 
2022-07-22 04:04:37.247 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopped
2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' starting
2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.drivers.openstack.utils.frr [-] Add VRF leak for VRF ovn-bgp-vrf on router bgp 64999
2022-07-22 04:04:37.248 111761 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--privsep_context', 'ovn_bgp_agent.privileged.vtysh_cmd', '--privsep_sock_path', '/tmp/tmp4cie9eiz/privsep.sock']
2022-07-22 04:04:37.687 111761 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
2022-07-22 04:04:37.598 111769 INFO oslo.privsep.daemon [-] privsep daemon starting
2022-07-22 04:04:37.613 111769 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep daemon running as pid 111769
2022-07-22 04:04:37.987 111769 ERROR ovn_bgp_agent.privileged.vtysh [-] Unable to execute vtysh with ['/usr/bin/vtysh', '--vty_socket', '/run/frr/', '-c', 'copy /tmp/tmpiz5s_wvs running-config']. Exception: Unexpected error while running command.
Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config
Exit code: 1
Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n'
Stderr: ''
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/privileged/vtysh.py", line 30, in run_vtysh_config
    return processutils.execute(*full_args)
  File "/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py", line 438, in execute
    raise ProcessExecutionError(exit_code=_returncode,
oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config
Exit code: 1
Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n'
Stderr: ''
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service [-] Error starting thread.: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config
Exit code: 1
Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n'
Stderr: ''
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Traceback (most recent call last):
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/oslo_service/service.py", line 806, in run_service
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     service.start()
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/agent.py", line 50, in start
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     self.agent_driver.start()
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/ovn_bgp_driver.py", line 73, in start
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     frr.vrf_leak(constants.OVN_BGP_VRF, CONF.bgp_AS, CONF.bgp_router_id)
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", line 110, in vrf_leak
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     _run_vtysh_config_with_tempfile(vrf_config)
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", line 93, in _run_vtysh_config_with_tempfile
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     ovn_bgp_agent.privileged.vtysh.run_vtysh_config(f.name)
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/oslo_privsep/priv_context.py", line 271, in _wrap
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     return self.channel.remote_call(name, args, kwargs,
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service   File "/usr/local/lib/python3.8/dist-packages/oslo_privsep/daemon.py", line 215, in remote_call
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service     raise exc_type(*result[2])
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Exit code: 1
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n'
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stderr: ''
2022-07-22 04:04:37.990 111761 ERROR oslo_service.service
2022-07-22 04:04:37.993 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopping
2022-07-22 04:04:37.994 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopped
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ltomasbo at redhat.com  Fri Jul 22 06:40:55 2022
From: ltomasbo at redhat.com (Luis Tomas Bolivar)
Date: Fri, 22 Jul 2022 08:40:55 +0200
Subject: ovn-bgp-agent installation issue
In-Reply-To:
References:
Message-ID:

Hi Satish,

The one to use should be https://opendev.org/x/ovn-bgp-agent. The one on my personal github repo was the initial PoC for it. But the opendev one is the upstream effort to develop it, and is the one being maintained/updated.

Looking at your second logs, it seems you are missing FRR (and its shell, vtysh) in the node.

Actually, thinking about this:
"Unexpected error while running command.
Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config" The ovn-bgp-agent has been developed with "deploying on containers" in mind, meaning it is assuming there is a frr container running, and the container running the agent is trying to connect to the same socket so that it can run the vtysh commands. Perhaps in your case the frr socket is in a different location than /run/frr/ On Fri, Jul 22, 2022 at 6:27 AM Satish Patel wrote: > Folks, > > I am trying to create lab of of ovn-bgp-agent using this blog > https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ > > So far everything went well but I'm stuck at the bgp-agent > installation and I encounter following error when running bgp-agent. > Any suggestions? > > root at rack-1-host-2:/home/vagrant/bgp-agent# bgp-agent > 2022-07-22 04:02:39.123 111551 INFO bgp_agent.config [-] Logging enabled! > 2022-07-22 04:02:39.475 111551 CRITICAL bgp-agent [-] Unhandled error: > AssertionError > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent Traceback (most recent call > last): > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/bin/bgp-agent", line 10, in > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent sys.exit(start()) > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 76, in > start > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent bgp_agent_launcher = > service.launch(config.CONF, BGPAgent()) > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 44, in > __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self.agent_driver = > driver_api.AgentDriverBase.get_instance( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/driver_api.py", > line 25, in get_instance > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent agent_driver = > 
stevedore_driver.DriverManager( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in > __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(DriverManager, > self).__init__( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 78, in > __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent extensions = > self._load_plugins(invoke_on_load, > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 221, > in _load_plugins > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent ext = > self._load_one_plugin(ep, > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 156, in > _load_one_plugin > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent return > super(NamedExtensionManager, self)._load_one_plugin( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 257, > in _load_one_plugin > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent obj = > plugin(*invoke_args, **invoke_kwds) > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/ovn_bgp_driver.py", > line 64, in __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self._sb_idl = > ovn.OvnSbIdl( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", > line 62, in __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnSbIdl, > self).__init__( > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", > line 31, in __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnIdl, > self).__init__(remote, schema) > 2022-07-22 04:02:39.475 
111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 283, in > __init__ > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent schema = > schema_helper.get_idl_schema() > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2323, in > get_idl_schema > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent > self._keep_table_columns(schema, table, columns)) > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File > "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2330, in > _keep_table_columns > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent assert table_name in > schema.tables > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent AssertionError > 2022-07-22 04:02:39.475 111551 ERROR bgp-agent > > > > > After googling I found one more agent at > https://opendev.org/x/ovn-bgp-agent and its also throwing an error. Which > agent should I be using? > > root at rack-1-host-2:~# ovn-bgp-agent > 2022-07-22 04:04:36.780 111761 INFO ovn_bgp_agent.config [-] Logging > enabled! 
> 2022-07-22 04:04:37.247 111761 INFO ovn_bgp_agent.agent [-] Service > 'BGPAgent' stopped > 2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.agent [-] Service > 'BGPAgent' starting > 2022-07-22 04:04:37.248 111761 INFO > ovn_bgp_agent.drivers.openstack.utils.frr [-] Add VRF leak for VRF > ovn-bgp-vrf on router bgp 64999 > 2022-07-22 04:04:37.248 111761 INFO oslo.privsep.daemon [-] Running > privsep helper: ['sudo', 'privsep-helper', '--privsep_context', > 'ovn_bgp_agent.privileged.vtysh_cmd', '--privsep_sock_path', > '/tmp/tmp4cie9eiz/privsep.sock'] > 2022-07-22 04:04:37.687 111761 INFO oslo.privsep.daemon [-] Spawned new > privsep daemon via rootwrap > 2022-07-22 04:04:37.598 111769 INFO oslo.privsep.daemon [-] privsep daemon > starting > 2022-07-22 04:04:37.613 111769 INFO oslo.privsep.daemon [-] privsep > process running with uid/gid: 0/0 > 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep > process running with capabilities (eff/prm/inh): > CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none > 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep daemon > running as pid 111769 > 2022-07-22 04:04:37.987 111769 ERROR ovn_bgp_agent.privileged.vtysh [-] > Unable to execute vtysh with ['/usr/bin/vtysh', '--vty_socket', > '/run/frr/', '-c', 'copy /tmp/tmpiz5s_wvs running-config']. Exception: > Unexpected error while running command. 
> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs > running-config > Exit code: 1 > Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' > Stderr: '' > Traceback (most recent call last): > File > "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/privileged/vtysh.py", > line 30, in run_vtysh_config > return processutils.execute(*full_args) > File > "/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py", > line 438, in execute > raise ProcessExecutionError(exit_code=_returncode, > oslo_concurrency.processutils.ProcessExecutionError: Unexpected error > while running command. > Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs > running-config > Exit code: 1 > Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' > Stderr: '' > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service [-] Error > starting thread.: oslo_concurrency.processutils.ProcessExecutionError: > Unexpected error while running command. 
> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs > running-config > Exit code: 1 > Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' > Stderr: '' > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Traceback (most > recent call last): > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/oslo_service/service.py", line 806, > in run_service > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > service.start() > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/agent.py", line 50, > in start > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > self.agent_driver.start() > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/ovn_bgp_driver.py", > line 73, in start > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > frr.vrf_leak(constants.OVN_BGP_VRF, CONF.bgp_AS, CONF.bgp_router_id) > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", > line 110, in vrf_leak > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > _run_vtysh_config_with_tempfile(vrf_config) > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", > line 93, in _run_vtysh_config_with_tempfile > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > ovn_bgp_agent.privileged.vtysh.run_vtysh_config(f.name) > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/oslo_privsep/priv_context.py", line > 271, in _wrap > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service return > self.channel.remote_call(name, args, kwargs, > 2022-07-22 04:04:37.990 111761 ERROR 
oslo_service.service File > "/usr/local/lib/python3.8/dist-packages/oslo_privsep/daemon.py", line 215, > in remote_call > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service raise > exc_type(*result[2]) > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while > running command. > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Command: > /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs > running-config > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Exit code: 1 > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stdout: '% > Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stderr: '' > 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service > 2022-07-22 04:04:37.993 111761 INFO ovn_bgp_agent.agent [-] Service > 'BGPAgent' stopping > 2022-07-22 04:04:37.994 111761 INFO ovn_bgp_agent.agent [-] Service > 'BGPAgent' stopped > -- LUIS TOMÁS BOLÍVAR Principal Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com

From mahendra.paipuri at cnrs.fr Fri Jul 22 06:49:19 2022 From: mahendra.paipuri at cnrs.fr (Mahendra Paipuri) Date: Fri, 22 Jul 2022 08:49:19 +0200 Subject: RAM and Storage requirements for Openstack cloud Message-ID: Hello all, We are going to deploy an OpenStack cloud with researchers as the primary target users. We will have around 100 GPUs across 12-15 servers with Infiniband interconnect. We still do not know the exact specs of the servers or GPUs, but mostly we will have A100s and Intel Xeon processors. What sort of RAM and storage requirements do we need for a cluster of this size? Of course, this depends a lot on use cases, and this cloud will be primarily used for HPC and AI. For the storage, we are mainly interested in the block storage requirements for provisioning VMs.
We will most probably have a shared parallel file system as scratch and project spaces. Is there any rule of thumb to get to the RAM and storage requirement numbers based on the compute infrastructure we will have? If we can estimate a sort of "lower bound", that would be really helpful for us. If anyone has clusters of this size at your organizations and you can share the RAM and storage details of your clouds, that would be very useful for us too. Thanks a lot and have a great day!! Regards Mahendra

From gthiemonge at redhat.com Fri Jul 22 07:44:35 2022 From: gthiemonge at redhat.com (Gregory Thiemonge) Date: Fri, 22 Jul 2022 09:44:35 +0200 Subject: [Octavia] next 2 weekly meetings cancelled Message-ID: Hi Folks, As discussed during the weekly meeting, the meetings for the next 2 weeks are cancelled. The next meeting will be on Aug 10th, Greg

From sergey.drozdov.dev at gmail.com Fri Jul 22 08:21:01 2022 From: sergey.drozdov.dev at gmail.com (Sergey Drozdov) Date: Fri, 22 Jul 2022 09:21:01 +0100 Subject: [dev] directions to the right project team In-Reply-To: <80792012-4EEA-4399-B1BC-CA69C1FD8AAD@99cloud.net> References: <37E41E49-D563-49A0-8E78-D5BD7041EEAF@gmail.com> <80792012-4EEA-4399-B1BC-CA69C1FD8AAD@99cloud.net> Message-ID: <7B25DB6F-78AD-4C05-9032-04183E2A7239@gmail.com> It sounds like Skyline meets our requirements more completely, and we will probably make the switch. Is any help needed with the SSO implementation or anything else? Best Regards, Sergey Drozdov Software Engineer The Hut Group > On 21 Jul 2022, at 23:29, Wu Wenxiang wrote: > > 1. Skyline UX has supported project pagination from the very beginning, since we also met this "too many keystone projects" issue in previous projects > 2. https://bugs.launchpad.net/skyline-apiserver/+bug/1972736 skyline SSO support is on the way, and is supposed to be finished before August 15th.
> > Thanks > > Best Regards > Wenxiang Wu > > From: on behalf of Danny Webb > Date: Tuesday, July 19, 2022 at 21:49 > To: Julia Kreger > Cc: Dmitriy Rabotyagov , openstack-discuss > Subject: Re: [dev] directions to the right project team > > It was a conscious decision by the keystone team back in 2015 as far as we can tell. There was a large discussion on the mailing list regarding this that seemed to have left this issue unresolved (most specifically around user pagination, but this seems to have affected project / domain elements as well). > > https://lists.openstack.org/pipermail/openstack-dev/2015-August/thread.html#72082 > > Ultimately what we're seeking to do is start a discussion within the keystone / horizon (and potentially skyline) community about how we can rectify the current issues we're facing around the usability of the UX portion of keystone elements. Eg, if pagination isn't the way the keystone community wants to go should we look at instead having dynamic filtering instead in the UI? Are there any other options that people can think of that might be a better way forward? > From: Julia Kreger > Sent: 19 July 2022 14:21 > To: Danny Webb > Cc: Dmitriy Rabotyagov ; openstack-discuss > Subject: Re: [dev] directions to the right project team > > CAUTION: This email originates from outside THG > > On Tue, Jul 19, 2022 at 3:44 AM Danny Webb wrote: > > > > Unfortunately pagination was removed from keystone in the v3 api and as far as we're aware it was never re-added. > > > > This is quite concerning, and a quick look at the code confirms it. > Mostly. There are remnants of "hints" and SQL Query filtering, but the > internal limit is just a truncation which seems bad as well. 
> > https://github.com/openstack/keystone/blame/d7b1d57cae738183f8d85413e942402a8a4efb31/keystone/server/flask/common.py#L675 > > This seems like a fundamental performance oriented feature because the > overhead in data conversion can be quite a bit when you have a large > number of well... any objects being returned from a database. > > Does anyone know if a bug is open for this issue? > Danny Webb > Principal OpenStack Engineer > The Hut Group > > Tel: > Email: Danny.Webb at thehutgroup.com > > For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. > > Confidentiality Notice > This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. > > Encryptions and Viruses > Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. > > Monitoring > Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. 
Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes.

From doug at stackhpc.com Fri Jul 22 10:29:54 2022 From: doug at stackhpc.com (Doug Szumski) Date: Fri, 22 Jul 2022 11:29:54 +0100 Subject: [kolla] RabbitMQ High Availability In-Reply-To: References: Message-ID: On 21/07/2022 11:32, Tan Tran Trong wrote: > Hello, > I'm trying to figure out how to configure RabbitMQ to make it highly > available. I have 3 controller nodes and 2 compute nodes, deployed > with kolla with mostly default configuration. The RabbitMQ set > to ha-all for all queues on all nodes, amqp_durable_queues = True > My problem is when I shutdown 1 controller node (or 1 > RabbitMQ container) (master or slave) the whole cluster becomes > unstable. Some instances can not be created, it is stuck on > Scheduling, Block Device Mapping, the volumes not shown or are stuck > on creating, the compute node reported dead randomly,... > I'm looking for documentation to know how OpenStack uses RabbitMQ, > OpenStack behavior when a RabbitMQ node is down, and a way to make RabbitMQ HA > in a stable way. Do you have any recommendation? Would it be possible to compare with this approach of running a clustered Rabbit service, but without mirrored (and durable) queues? https://review.opendev.org/c/openstack/kolla-ansible/+/824994 It won't solve all failure scenarios, but we have seen it help with controlled shutdowns. We'd be interested in any failure scenarios you find with those settings.
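For context, the "ha-all" mirroring being discussed boils down to a single classic-mirroring policy on the broker, and the approach in the review above amounts to simply not installing that policy. A rough sketch of the definitions fragment involved (the exact pattern and priority kolla-ansible applies may differ — check your deployed definitions before relying on this):

```python
import json

# Hedged sketch: an "ha-all"-style classic queue mirroring policy expressed
# as a RabbitMQ definitions fragment. The pattern and priority used by a
# real kolla-ansible deployment may differ.
ha_all_policy = {
    "vhost": "/",
    "name": "ha-all",
    "pattern": "^(?!amq\\.).*",        # everything except AMQP-internal queues
    "apply-to": "all",
    "definition": {"ha-mode": "all"},  # mirror queues to all cluster nodes
    "priority": 0,
}

# Running clustered-but-unmirrored means this policy is absent; queues then
# live on a single node and are lost if that node goes away uncleanly.
print(json.dumps(ha_all_policy, sort_keys=True))
```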
> TIA, > Tan

From satish.txt at gmail.com Fri Jul 22 12:55:49 2022 From: satish.txt at gmail.com (Satish Patel) Date: Fri, 22 Jul 2022 08:55:49 -0400 Subject: ovn-bgp-agent installation issue In-Reply-To: References: Message-ID: <0EC25301-AA10-44E7-A86C-231F8FFA04AC@gmail.com> Hi Luis, Thank you for your reply. Let me tell you that your blog is wonderful. I have used your method to install FRR without any docker container. What is the workaround here? Can I tell ovn-bgp-agent not to look for an FRR container? I have noticed one more thing: it's trying to run 'copy /tmp/blah running-config', but that command isn't supported. I tried running the copy command manually in the vtysh shell to see what options are available, but the only one is 'copy running-config startup-config'. Do you think I have the wrong version of FRR running? I'm running 7.2. Please let me know if there is any workaround here. Thank you Sent from my iPhone > On Jul 22, 2022, at 2:41 AM, Luis Tomas Bolivar wrote: > > Hi Satish, > > The one to use should be https://opendev.org/x/ovn-bgp-agent. The one on my personal github repo was the initial PoC for it. But the opendev one is the upstream effort to develop it, and is the one being maintained/updated. > > Looking at your second logs, it seems you are missing FRR (and its shell, vtysh) in the node. > > Actually, thinking about this: > "Unexpected error while running command. > Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config" > > The ovn-bgp-agent has been developed with "deploying on containers" in mind, meaning it is assuming there is a frr container running, and the container running the agent is trying to connect to the same socket so that it can run the vtysh commands.
Perhaps in your case the frr socket is in a different location than /run/frr/ > >> On Fri, Jul 22, 2022 at 6:27 AM Satish Patel wrote: >> Folks, >> >> I am trying to create lab of of ovn-bgp-agent using this blog https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ >> >> So far everything went well but I'm stuck at the bgp-agent installation and I encounter following error when running bgp-agent. Any suggestions? >> >> root at rack-1-host-2:/home/vagrant/bgp-agent# bgp-agent >> 2022-07-22 04:02:39.123 111551 INFO bgp_agent.config [-] Logging enabled! >> 2022-07-22 04:02:39.475 111551 CRITICAL bgp-agent [-] Unhandled error: AssertionError >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent Traceback (most recent call last): >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/bin/bgp-agent", line 10, in >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent sys.exit(start()) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 76, in start >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent bgp_agent_launcher = service.launch(config.CONF, BGPAgent()) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 44, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self.agent_driver = driver_api.AgentDriverBase.get_instance( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/driver_api.py", line 25, in get_instance >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent agent_driver = stevedore_driver.DriverManager( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(DriverManager, self).__init__( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", 
line 78, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent extensions = self._load_plugins(invoke_on_load, >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 221, in _load_plugins >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent ext = self._load_one_plugin(ep, >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 156, in _load_one_plugin >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent return super(NamedExtensionManager, self)._load_one_plugin( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 257, in _load_one_plugin >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent obj = plugin(*invoke_args, **invoke_kwds) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/ovn_bgp_driver.py", line 64, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self._sb_idl = ovn.OvnSbIdl( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", line 62, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnSbIdl, self).__init__( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", line 31, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnIdl, self).__init__(remote, schema) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 283, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent schema = schema_helper.get_idl_schema() >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2323, in get_idl_schema >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent 
self._keep_table_columns(schema, table, columns)) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2330, in _keep_table_columns >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent assert table_name in schema.tables >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent AssertionError >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent >> >> >> >> >> After googling I found one more agent at https://opendev.org/x/ovn-bgp-agent and its also throwing an error. Which agent should I be using? >> >> root at rack-1-host-2:~# ovn-bgp-agent >> 2022-07-22 04:04:36.780 111761 INFO ovn_bgp_agent.config [-] Logging enabled! >> 2022-07-22 04:04:37.247 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopped >> 2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' starting >> 2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.drivers.openstack.utils.frr [-] Add VRF leak for VRF ovn-bgp-vrf on router bgp 64999 >> 2022-07-22 04:04:37.248 111761 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--privsep_context', 'ovn_bgp_agent.privileged.vtysh_cmd', '--privsep_sock_path', '/tmp/tmp4cie9eiz/privsep.sock'] >> 2022-07-22 04:04:37.687 111761 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap >> 2022-07-22 04:04:37.598 111769 INFO oslo.privsep.daemon [-] privsep daemon starting >> 2022-07-22 04:04:37.613 111769 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0 >> 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none >> 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep daemon running as pid 111769 >> 2022-07-22 04:04:37.987 111769 ERROR ovn_bgp_agent.privileged.vtysh [-] Unable to execute vtysh with ['/usr/bin/vtysh', '--vty_socket', '/run/frr/', '-c', 'copy /tmp/tmpiz5s_wvs running-config']. 
Exception: Unexpected error while running command. >> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> Traceback (most recent call last): >> File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/privileged/vtysh.py", line 30, in run_vtysh_config >> return processutils.execute(*full_args) >> File "/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py", line 438, in execute >> raise ProcessExecutionError(exit_code=_returncode, >> oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. >> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service [-] Error starting thread.: oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. 
>> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Traceback (most recent call last): >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/oslo_service/service.py", line 806, in run_service >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service service.start() >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/agent.py", line 50, in start >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service self.agent_driver.start() >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/ovn_bgp_driver.py", line 73, in start >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service frr.vrf_leak(constants.OVN_BGP_VRF, CONF.bgp_AS, CONF.bgp_router_id) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", line 110, in vrf_leak >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service _run_vtysh_config_with_tempfile(vrf_config) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", line 93, in _run_vtysh_config_with_tempfile >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service ovn_bgp_agent.privileged.vtysh.run_vtysh_config(f.name) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File "/usr/local/lib/python3.8/dist-packages/oslo_privsep/priv_context.py", line 271, in _wrap >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service return self.channel.remote_call(name, args, kwargs, >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File 
"/usr/local/lib/python3.8/dist-packages/oslo_privsep/daemon.py", line 215, in remote_call >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service raise exc_type(*result[2]) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command. >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs running-config >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Exit code: 1 >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> 2022-07-22 04:04:37.993 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopping >> 2022-07-22 04:04:37.994 111761 INFO ovn_bgp_agent.agent [-] Service 'BGPAgent' stopped > > > -- > LUIS TOM?S BOL?VAR > Principal Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Fri Jul 22 14:54:22 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Fri, 22 Jul 2022 20:24:22 +0530 Subject: [all][TC] Bare rechecks In-Reply-To: <8925819.fYGDMWaf8X@p1> References: <8925819.fYGDMWaf8X@p1> Message-ID: Hi Slawek, The number of "bare" rechecks reported for cinder are very high. | cinder | 137 | 163 | 84.05 | Cinder has documented the policy of not doing blind rechecks and all the core reviewers are following it for a long time. Driver vendors do comment "run-" (eg: "run-Yadro Tatlin Unified CI") to trigger their CI but it doesn't have the keyword "recheck" in it so shouldn't be counted but not sure how we evaluate it. Can you give more insight on how the script works because I don't think the number for cinder are correct here. 
On Thu, Jul 21, 2022 at 2:06 PM Slawek Kaplonski wrote: > Hi, > > New stats from last 7 days about bare rechecks in each team are available > in [1]: > > +--------------------+---------------+--------------+-------------------+ > | Team | Bare rechecks | All Rechecks | Bare rechecks [%] | > +--------------------+---------------+--------------+-------------------+ > | skyline | 2 | 2 | 100.0 | > | sahara | 2 | 2 | 100.0 | > | trove | 4 | 4 | 100.0 | > | tacker | 6 | 6 | 100.0 | > | horizon | 25 | 25 | 100.0 | > | magnum | 2 | 2 | 100.0 | > | masakari | 2 | 2 | 100.0 | > | Telemetry | 1 | 1 | 100.0 | > | kolla | 71 | 74 | 95.95 | > | OpenStack Charms | 16 | 17 | 94.12 | > | requirements | 33 | 37 | 89.19 | > | cinder | 137 | 163 | 84.05 | > | kuryr | 5 | 6 | 83.33 | > | OpenStack-Helm | 23 | 29 | 79.31 | > | tripleo | 97 | 125 | 77.6 | > | glance | 17 | 22 | 77.27 | > | Puppet OpenStack | 23 | 30 | 76.67 | > | ironic | 43 | 57 | 75.44 | > | keystone | 6 | 8 | 75.0 | > | octavia | 32 | 44 | 72.73 | > | swift | 8 | 11 | 72.73 | > | manila | 27 | 38 | 71.05 | > | OpenStackSDK | 17 | 24 | 70.83 | > | oslo | 12 | 17 | 70.59 | > | ec2-api | 2 | 3 | 66.67 | > | Quality Assurance | 13 | 20 | 65.0 | > | neutron | 33 | 52 | 63.46 | > | nova | 84 | 133 | 63.16 | > | heat | 3 | 5 | 60.0 | > | Release Management | 1 | 2 | 50.0 | > | designate | 4 | 8 | 50.0 | > | barbican | 2 | 12 | 16.67 | > | OpenStackAnsible | 0 | 1 | 0.0 | > +--------------------+---------------+--------------+-------------------+ > > [1] https://etherpad.opendev.org/p/recheck-weekly-summary[1] > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ozzzo at yahoo.com Fri Jul 22 15:53:38 2022 From: ozzzo at yahoo.com (Albert Braden) Date: Fri, 22 Jul 2022 15:53:38 +0000 (UTC) Subject: [kolla] RabbitMQ High Availability In-Reply-To: References: Message-ID: <534360793.921334.1658505218178@mail.yahoo.com> The default RMQ config is broken. You're on the right track with setting durable_queues, but there's more to do. I'm running kolla Train with mirrored/durable queues and my clusters work fine with a controller down. One issue that we faced after setting durable was that we weren't running redis, and then when we tried to run it the network was blocking the port, but eventually we got it working. Some have recommended not mirroring queues; I haven't tried that. If anyone has successfully set up HA without mirrored queues, I'd be interested to hear about how you did it. Here are some helpful links: https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit https://lists.openstack.org/pipermail/openstack-discuss/2021-November/026074.html https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016362.html https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016524.html https://review.opendev.org/c/openstack/kolla-ansible/+/822191 https://review.opendev.org/c/openstack/kolla-ansible/+/824994 On Thursday, July 21, 2022, 02:42:42 PM EDT, Tan Tran Trong wrote: Hello, I'm trying to figure out how to configure RabbitMQ to make it highly available. I have 3 controller nodes and 2 compute nodes, deployed with kolla with mostly default configuration. RabbitMQ is set to ha-all for all queues on all nodes, with amqp_durable_queues = True. My problem is that when I shut down 1 controller node (or 1 RabbitMQ container) (master or slave) the whole cluster becomes unstable.
Some instances cannot be created; they are stuck on Scheduling or Block Device Mapping, the volumes are not shown or are stuck on creating, the compute node is reported dead randomly... I'm looking for documentation on how OpenStack uses RabbitMQ, how OpenStack behaves when a RabbitMQ node is down, and a way to make RabbitMQ HA in a stable way. Do you have any recommendation? TIA, Tan -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Jul 22 15:58:46 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Jul 2022 10:58:46 -0500 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Excellent! I will add you to the list! -Kendall On Thu, Jul 21, 2022 at 8:31 AM Sofia Enriquez wrote: > Hi Kendall, > I'm interested in mentoring again! > Sofia > > On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson > wrote: > >> Hello Everyone! >> >> We are again signed up to participate in Open Source Day at the Grace >> Hopper Conference. It's a virtual event, being held on Friday, September >> 16, 2022, from 8am to 3pm Pacific Time. >> >> If you are interested in mentoring for this one day event, please let me >> know ASAP. I am supposed to give them a list of mentors by the end of this >> week. >> >> Day of, we will essentially get participants to setup a dev environment >> (gerrit, etc) and work on a bug and get it pushed. At this point I was >> thinking of making use of gaps in the SDK/OSC, but if your project has some >> low hanging fruit that you want to bring along, that works too! >> >> Looking forward to working with you!! >> >> -Kendall (diablo_rojo) >> >> >> > > > -- > > Sofía Enriquez > > she/her > > Software Engineer > > Red Hat PnT > > IRC: @enriquetaso > @RedHat Red Hat > Red Hat > > > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kennelson11 at gmail.com Fri Jul 22 15:59:03 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Jul 2022 10:59:03 -0500 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Awesome! Would love to have you on board again!! -Kendall On Thu, Jul 21, 2022 at 10:15 AM Ashley Rodriguez wrote: > Hey Kendall, > Sign me up as well. I had a great time last year and I'm excited to > participate again. > Best, > Ashley Rodriguez > > On Thu, Jul 21, 2022 at 9:48 AM Sofia Enriquez > wrote: > >> Hi Kendall, >> I'm interested in mentoring again! >> Sofia >> >> On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson >> wrote: >> >>> Hello Everyone! >>> >>> We are again signed up to participate in Open Source Day at the Grace >>> Hopper Conference. It's a virtual event, being held on Friday, >>> September 16, 2022, from 8am to 3pm Pacific Time. >>> >>> If you are interested in mentoring for this one day event, please let me >>> know ASAP. I am supposed to give them a list of mentors by the end of this >>> week. >>> >>> Day of, we will essentially get participants to setup a dev environment >>> (gerrit, etc) and work on a bug and get it pushed. At this point I was >>> thinking of making use of gaps in the SDK/OSC, but if your project has some >>> low hanging fruit that you want to bring along, that works too! >>> >>> Looking forward to working with you!! >>> >>> -Kendall (diablo_rojo) >>> >>> >>> >> >> >> -- >> >> Sof?a Enriquez >> >> she/her >> >> Software Engineer >> >> Red Hat PnT >> >> IRC: @enriquetaso >> @RedHat Red Hat >> Red Hat >> >> >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Jul 22 15:59:22 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Jul 2022 10:59:22 -0500 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: The more the merrier! 
-Kendall On Thu, Jul 21, 2022 at 11:49 AM Fernando Ferraz wrote: > Hi Kendall, > > I?m interested in mentoring this year too. :) > > Fernando > > On Thu, 21 Jul 2022 at 12:35 Ashley Rodriguez wrote: > >> Hey Kendall, >> Sign me up as well. I had a great time last year and I'm excited to >> participate again. >> Best, >> Ashley Rodriguez >> >> On Thu, Jul 21, 2022 at 9:48 AM Sofia Enriquez >> wrote: >> >>> Hi Kendall, >>> I'm interested in mentoring again! >>> Sofia >>> >>> On Tue, Jul 19, 2022 at 8:29 PM Kendall Nelson >>> wrote: >>> >>>> Hello Everyone! >>>> >>>> We are again signed up to participate in Open Source Day at the Grace >>>> Hopper Conference. It's a virtual event, being held on Friday, >>>> September 16, 2022, from 8am to 3pm Pacific Time. >>>> >>>> If you are interested in mentoring for this one day event, please let >>>> me know ASAP. I am supposed to give them a list of mentors by the end of >>>> this week. >>>> >>>> Day of, we will essentially get participants to setup a dev environment >>>> (gerrit, etc) and work on a bug and get it pushed. At this point I was >>>> thinking of making use of gaps in the SDK/OSC, but if your project has some >>>> low hanging fruit that you want to bring along, that works too! >>>> >>>> Looking forward to working with you!! >>>> >>>> -Kendall (diablo_rojo) >>>> >>>> >>>> >>> >>> >>> -- >>> >>> Sof?a Enriquez >>> >>> she/her >>> >>> Software Engineer >>> >>> Red Hat PnT >>> >>> IRC: @enriquetaso >>> @RedHat Red Hat >>> Red Hat >>> >>> >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Jul 22 15:59:56 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 22 Jul 2022 10:59:56 -0500 Subject: Mentors Needed - Grace Hopper Open Source Day + OpenStack In-Reply-To: References: Message-ID: Excellent! 
I will add you to my growing list of mentors :) -Kendall (diablo_rojo) On Thu, Jul 21, 2022 at 7:05 PM Tony Breeds wrote: > On Wed, 20 Jul 2022 at 09:29, Kendall Nelson > wrote: > > > > Hello Everyone! > > > > We are again signed up to participate in Open Source Day at the Grace > Hopper Conference. It's a virtual event, being held on Friday, September > 16, 2022, from 8am to 3pm Pacific Time. > > > > If you are interested in mentoring for this one day event, please let me > know ASAP. I am supposed to give them a list of mentors by the end of this > week. > > I'm keen to help. It'll be good to get back into the community. > > Yours Tony. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From johnsomor at gmail.com Fri Jul 22 17:34:26 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 22 Jul 2022 10:34:26 -0700 Subject: [dev][designate][dns] Adding private DNS feature In-Reply-To: References: <485F0C96-63D7-49F1-9860-655EAF837974@gmail.com> Message-ID: Hi Danny, Ok, I think I have a bit better understanding of what you are interested in accomplishing. I see two different "features" in there, both of which have been talked about in the designate community. 1. Shared zones - Setup a zone that can be shared across projects. 2. DNS Views/Split horizon - Zones that return different answers based on ACLs such that an "internal" query may get a private address, but an "external" query may get a public address answer. Shared zones have some proposed patches and are close to ready. It just needs to be updated to account for the new "secure RBAC" community goal[1] and some review/test work. At the PTG we agreed that this patch set should be a priority to finish up, but many of us have had downstream work that has postponed starting work on this. 
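[Editorial aside: for readers unfamiliar with the shared-zones feature described above, the toy model below illustrates the access semantics it aims for — a zone owned by one project that grants record-management rights to explicitly listed target projects. This is purely illustrative and is not designate's actual data model or API.]

```python
from dataclasses import dataclass, field

@dataclass
class Zone:
    """Toy model of a DNS zone with cross-project sharing."""
    name: str
    owner_project: str
    # Projects the owner has explicitly shared the zone with.
    shared_with: set = field(default_factory=set)

    def share(self, target_project: str) -> None:
        self.shared_with.add(target_project)

    def unshare(self, target_project: str) -> None:
        self.shared_with.discard(target_project)

    def can_manage_recordsets(self, project: str) -> bool:
        # The owner always may; shared targets may; everyone else may not.
        return project == self.owner_project or project in self.shared_with

zone = Zone(name="example.org.", owner_project="team-a")
zone.share("team-b")
print(zone.can_manage_recordsets("team-b"))  # True
print(zone.can_manage_recordsets("team-c"))  # False
```

The real feature additionally has to honor keystone scoping and the secure-RBAC policies mentioned in the thread; this sketch captures only the sharing relationship itself.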
DNS Views has a specification and some patches, but based on community feedback this approach is not going to work (major performance impact and will not work for many deployment scenarios). The patches have been abandoned by the developer. I think we need to restart the specification process on this feature before moving forward with it. [1] https://governance.openstack.org/tc/goals/selected/consistent-and-secure-rbac.html So, yes, there is community interest. I look forward to seeing what you are proposing and to see if we can align those needs with the above two features. Michael On Wed, Jul 20, 2022 at 3:46 PM Danny Webb wrote: > > We're thinking more of a private view available to individual or shared amongst a defined set of tenants. Loosely something akin to having amphora that serve up internal DNS that can be shared among one or more tenants with a deep integration into nova/neutron. Use case would be for example a enterprise that utilises many projects for various teams but wants to offer a single DNS domain across projects that isn't externally facing. We'll flush out a better use case and proposed architecture in the coming weeks, we're just putting some feelers out to see if this kind of thing was of any interest or use to others. > ________________________________ > From: Michael Johnson > Sent: 20 July 2022 22:59 > To: Sergey Drozdov > Cc: openstack-discuss > Subject: Re: [dev][designate][dns] Adding private DNS feature > > CAUTION: This email originates from outside THG > > Hi Sergey, > > Can you tell me a little bit more about what you want to accomplish? > Private DNS can mean different things, such as DNS-over-TLS, > DNS-over-HTTPS, split views, etc. > > Michael > > On Wed, Jul 20, 2022 at 12:51 PM Sergey Drozdov > wrote: > > > > Dear Sir/Madam, > > > > We are running OpenStack at scale and now have a requirement to have private DNS and were wondering if the designate team have any appetite for this? 
If yes, then further discussion is warranted as we would be happy to get the ball rolling on this. > > > > Best Regards, > > Sergey Drozdov > > Software Engineer > > The Hut Group > > Danny Webb > Principal OpenStack Engineer > The Hut Group > > Tel: > Email: Danny.Webb at thehutgroup.com > > > For the purposes of this email, the "company" means The Hut Group Limited, a company registered in England and Wales (company number 6539496) whose registered office is at Fifth Floor, Voyager House, Chicago Avenue, Manchester Airport, M90 3DQ and/or any of its respective subsidiaries. > > Confidentiality Notice > This e-mail is confidential and intended for the use of the named recipient only. If you are not the intended recipient please notify us by telephone immediately on +44(0)1606 811888 or return it to us by e-mail. Please then delete it from your system and note that any use, dissemination, forwarding, printing or copying is strictly prohibited. Any views or opinions are solely those of the author and do not necessarily represent those of the company. > > Encryptions and Viruses > Please note that this e-mail and any attachments have not been encrypted. They may therefore be liable to be compromised. Please also note that it is your responsibility to scan this e-mail and any attachments for viruses. We do not, to the extent permitted by law, accept any liability (whether in contract, negligence or otherwise) for any virus infection and/or external compromise of security and/or confidentiality in relation to transmissions sent by e-mail. > > Monitoring > Activity and use of the company's systems is monitored to secure its effective use and operation and for other lawful business purposes. Communications using these systems will also be monitored and may be recorded to secure effective use and operation and for other lawful business purposes. 
> > hgvyjuv From adivya1.singh at gmail.com Fri Jul 22 19:04:28 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Sat, 23 Jul 2022 00:34:28 +0530 Subject: Regarding Designate as a DNS in Open Stack Xena Message-ID: hi Team, I have a use case where Customer want to map the Full qualified domain name to the FLoating IP to connect to the Server. Besides creating a designation lxc Container using ansible and Create Zones, Do we need to do anything else from our side. I mean how the records are manager using designate Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Fri Jul 22 19:08:04 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Sat, 23 Jul 2022 00:38:04 +0530 Subject: Regarding Application Credential in Open Stack XENA Message-ID: hi Team, There is a use case, where a user wants to Create CI/CD pipeline using Application Credentials of Open Stack, Henceforth we tried Create an application with a role with a secret Key. but it is failing with the below output, The user wants to push the qcow2 image from his system to Open Stack. Using auth plugin: v3 application credential /usr/lib/python3.7/socket.py:660: ResourceWarning: unclosed self._sock = None ResourceWarning: Enable tracemalloc to get the object allocation traceback Not Found (HTTP 404) (Request-ID: req-217c9f28-d6a6-4649-adf9-2d9a4b965b3f) END return value: 1 it failed with a below result Regards Adivya Singh -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From johnsomor at gmail.com Fri Jul 22 22:33:07 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Fri, 22 Jul 2022 15:33:07 -0700 Subject: Regarding Designate as a DNS in Open Stack Xena In-Reply-To: References: Message-ID: Hi Adivya, I think the information you are looking for is available in the Neutron networking guide: https://docs.openstack.org/neutron/yoga/admin/config-dns-int.html https://docs.openstack.org/neutron/yoga/admin/config-dns-int-ext-serv.html Michael On Fri, Jul 22, 2022 at 12:35 PM Adivya Singh wrote: > > hi Team, > > I have a use case where Customer want to map the Full qualified domain name to the FLoating IP to connect to the Server. > > Besides creating a designation lxc Container using ansible and Create Zones, Do we need to do anything else from our side. > > I mean how the records are manager using designate > > Regards > Adivya Singh From satish.txt at gmail.com Sat Jul 23 04:27:46 2022 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 23 Jul 2022 00:27:46 -0400 Subject: ovn-bgp-agent installation issue In-Reply-To: <0EC25301-AA10-44E7-A86C-231F8FFA04AC@gmail.com> References: <0EC25301-AA10-44E7-A86C-231F8FFA04AC@gmail.com> Message-ID: Hi Luis, As you suggested, checkout 5 commits back to avoid the Load_Balancer issue that made good progress. But now i am seeing very odd behavior where floating IP can get exposed to BGP but when i am trying to expose VM Tenant IP then I get a strange error. Here is the full logs output; https://paste.opendev.org/show/buRbY415guvHFUtSapFK/ On Fri, Jul 22, 2022 at 8:55 AM Satish Patel wrote: > Hi Luis, > > Thank you for reply, > > Let me tell you that your blog is wonderful. I have used your method to > install frr without any docker container. > > What is the workaround here? Can I tell ovn-bgp-agent to not look for > container of frr? > > I have notice one more thing that it?s trying to run ?copy /tmp/blah > running-config? but that command isn?t supported. 
> > I have tried to run copy command manually on vtysh shell to see what > options are available for copy command but there is only one command > available with copy which is ?copy running-config startup-config? do you > think I have wrong version of frr running? I?m running 7.2. > > Please let me know if any workaround here. Thank you > > Sent from my iPhone > > On Jul 22, 2022, at 2:41 AM, Luis Tomas Bolivar > wrote: > > ? > Hi Satish, > > The one to use should be https://opendev.org/x/ovn-bgp-agent. The one on > my personal github repo was the initial PoC for it. But the opendev one is > the upstream effort to develop it, and is the one being maintained/updated. > > Looking at your second logs, it seems you are missing FRR (and its shell, > vtysh) in the node. > > Actually, thinking about this: > "Unexpected error while running command. > Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs > running-config" > > The ovn-bgp-agent has been developed with "deploying on containers" in > mind, meaning it is assuming there is a frr container running, and the > container running the agent is trying to connect to the same socket so that > it can run the vtysh commands. Perhaps in your case the frr socket is in a > different location than /run/frr/ > > On Fri, Jul 22, 2022 at 6:27 AM Satish Patel wrote: > >> Folks, >> >> I am trying to create lab of of ovn-bgp-agent using this blog >> https://ltomasbo.wordpress.com/2021/02/04/ovn-bgp-agent-testing-setup/ >> >> So far everything went well but I'm stuck at the bgp-agent >> installation and I encounter following error when running bgp-agent. >> Any suggestions? >> >> root at rack-1-host-2:/home/vagrant/bgp-agent# bgp-agent >> 2022-07-22 04:02:39.123 111551 INFO bgp_agent.config [-] Logging enabled! 
>> 2022-07-22 04:02:39.475 111551 CRITICAL bgp-agent [-] Unhandled error: >> AssertionError >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent Traceback (most recent >> call last): >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/bin/bgp-agent", line 10, in >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent sys.exit(start()) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 76, in >> start >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent bgp_agent_launcher = >> service.launch(config.CONF, BGPAgent()) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/agent.py", line 44, in >> __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self.agent_driver = >> driver_api.AgentDriverBase.get_instance( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/driver_api.py", >> line 25, in get_instance >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent agent_driver = >> stevedore_driver.DriverManager( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/stevedore/driver.py", line 54, in >> __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(DriverManager, >> self).__init__( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 78, in >> __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent extensions = >> self._load_plugins(invoke_on_load, >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 221, >> in _load_plugins >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent ext = >> self._load_one_plugin(ep, >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/stevedore/named.py", line 156, in >> _load_one_plugin >> 2022-07-22 
04:02:39.475 111551 ERROR bgp-agent return >> super(NamedExtensionManager, self)._load_one_plugin( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/stevedore/extension.py", line 257, >> in _load_one_plugin >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent obj = >> plugin(*invoke_args, **invoke_kwds) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/ovn_bgp_driver.py", >> line 64, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent self._sb_idl = >> ovn.OvnSbIdl( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", >> line 62, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnSbIdl, >> self).__init__( >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/bgp_agent/platform/osp/utils/ovn.py", >> line 31, in __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent super(OvnIdl, >> self).__init__(remote, schema) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 283, in >> __init__ >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent schema = >> schema_helper.get_idl_schema() >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2323, in >> get_idl_schema >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent >> self._keep_table_columns(schema, table, columns)) >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent File >> "/usr/local/lib/python3.8/dist-packages/ovs/db/idl.py", line 2330, in >> _keep_table_columns >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent assert table_name in >> schema.tables >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent AssertionError >> 2022-07-22 04:02:39.475 111551 ERROR bgp-agent >> >> >> >> >> After googling I found one more agent at >> 
https://opendev.org/x/ovn-bgp-agent and its also throwing an error. >> Which agent should I be using? >> >> root at rack-1-host-2:~# ovn-bgp-agent >> 2022-07-22 04:04:36.780 111761 INFO ovn_bgp_agent.config [-] Logging >> enabled! >> 2022-07-22 04:04:37.247 111761 INFO ovn_bgp_agent.agent [-] Service >> 'BGPAgent' stopped >> 2022-07-22 04:04:37.248 111761 INFO ovn_bgp_agent.agent [-] Service >> 'BGPAgent' starting >> 2022-07-22 04:04:37.248 111761 INFO >> ovn_bgp_agent.drivers.openstack.utils.frr [-] Add VRF leak for VRF >> ovn-bgp-vrf on router bgp 64999 >> 2022-07-22 04:04:37.248 111761 INFO oslo.privsep.daemon [-] Running >> privsep helper: ['sudo', 'privsep-helper', '--privsep_context', >> 'ovn_bgp_agent.privileged.vtysh_cmd', '--privsep_sock_path', >> '/tmp/tmp4cie9eiz/privsep.sock'] >> 2022-07-22 04:04:37.687 111761 INFO oslo.privsep.daemon [-] Spawned new >> privsep daemon via rootwrap >> 2022-07-22 04:04:37.598 111769 INFO oslo.privsep.daemon [-] privsep >> daemon starting >> 2022-07-22 04:04:37.613 111769 INFO oslo.privsep.daemon [-] privsep >> process running with uid/gid: 0/0 >> 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep >> process running with capabilities (eff/prm/inh): >> CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none >> 2022-07-22 04:04:37.617 111769 INFO oslo.privsep.daemon [-] privsep >> daemon running as pid 111769 >> 2022-07-22 04:04:37.987 111769 ERROR ovn_bgp_agent.privileged.vtysh [-] >> Unable to execute vtysh with ['/usr/bin/vtysh', '--vty_socket', >> '/run/frr/', '-c', 'copy /tmp/tmpiz5s_wvs running-config']. Exception: >> Unexpected error while running command. 
>> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs >> running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> Traceback (most recent call last): >> File >> "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/privileged/vtysh.py", >> line 30, in run_vtysh_config >> return processutils.execute(*full_args) >> File >> "/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py", >> line 438, in execute >> raise ProcessExecutionError(exit_code=_returncode, >> oslo_concurrency.processutils.ProcessExecutionError: Unexpected error >> while running command. >> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs >> running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service [-] Error >> starting thread.: oslo_concurrency.processutils.ProcessExecutionError: >> Unexpected error while running command. 
>> Command: /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs >> running-config >> Exit code: 1 >> Stdout: '% Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Traceback (most >> recent call last): >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/oslo_service/service.py", line 806, >> in run_service >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> service.start() >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/agent.py", line 50, >> in start >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> self.agent_driver.start() >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/ovn_bgp_driver.py", >> line 73, in start >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> frr.vrf_leak(constants.OVN_BGP_VRF, CONF.bgp_AS, CONF.bgp_router_id) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", >> line 110, in vrf_leak >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> _run_vtysh_config_with_tempfile(vrf_config) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/ovn_bgp_agent/drivers/openstack/utils/frr.py", >> line 93, in _run_vtysh_config_with_tempfile >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> ovn_bgp_agent.privileged.vtysh.run_vtysh_config(f.name) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/oslo_privsep/priv_context.py", line >> 271, in _wrap >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service return >> self.channel.remote_call(name, args, kwargs, >> 2022-07-22 
04:04:37.990 111761 ERROR oslo_service.service File >> "/usr/local/lib/python3.8/dist-packages/oslo_privsep/daemon.py", line 215, >> in remote_call >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service raise >> exc_type(*result[2]) >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while >> running command. >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Command: >> /usr/bin/vtysh --vty_socket /run/frr/ -c copy /tmp/tmpiz5s_wvs >> running-config >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Exit code: 1 >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stdout: '% >> Unknown command: copy /tmp/tmpiz5s_wvs running-config\n' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service Stderr: '' >> 2022-07-22 04:04:37.990 111761 ERROR oslo_service.service >> 2022-07-22 04:04:37.993 111761 INFO ovn_bgp_agent.agent [-] Service >> 'BGPAgent' stopping >> 2022-07-22 04:04:37.994 111761 INFO ovn_bgp_agent.agent [-] Service >> 'BGPAgent' stopped >> > > > -- > LUIS TOM?S BOL?VAR > Principal Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Jul 23 06:01:59 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 23 Jul 2022 08:01:59 +0200 Subject: [Openstack] sdn multisite??? Message-ID: Hello All, is there any sdn solution for openstack multi site sdn ? I read something about network to network api extension for neutron presented in an openstack summit but it seems no more adopted. Any help, please? We have 3 separated openstack installations and I wonder if there is some solution for controlling network (vxlan to vxlan connection, sec groups airchestration and so on) Openstack cascading seems only a dream. Ignazio -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From alsotoes at gmail.com Sat Jul 23 07:07:24 2022 From: alsotoes at gmail.com (Alvaro Soto) Date: Sat, 23 Jul 2022 02:07:24 -0500 Subject: [Openstack] sdn multisite??? In-Reply-To: References: Message-ID: tungsten fabric maybe? https://tungsten.io/ https://www.openstack.org/videos/summits/shanghai-2019/integration-of-tungsten-fabric-with-openstack Take a look at this also. https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html Cheers! On Sat, Jul 23, 2022 at 1:14 AM Ignazio Cassano wrote: > Hello All, > is there any sdn solution for openstack multi site sdn ? > I read something about network to network api extension for neutron > presented in an openstack summit but it seems no more adopted. > Any help, please? > We have 3 separated openstack installations and I wonder if there is some > solution for controlling network (vxlan to vxlan connection, sec groups > airchestration and so on) > Openstack cascading seems only a dream. > Ignazio > -- Alvaro Soto *Note: My work hours may not be your work hours. Please do not feel the need to respond during a time that is not convenient for you.* ---------------------------------------------------------- Great people talk about ideas, ordinary people talk about things, small people talk... about other people. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Sat Jul 23 07:32:47 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 23 Jul 2022 09:32:47 +0200 Subject: [Openstack] sdn multisite??? In-Reply-To: References: Message-ID: Thanks, Alvaro Il Sab 23 Lug 2022, 09:07 Alvaro Soto ha scritto: > tungsten fabric maybe? > > https://tungsten.io/ > > https://www.openstack.org/videos/summits/shanghai-2019/integration-of-tungsten-fabric-with-openstack > > Take a look at this also. > > https://tungstenfabric.github.io/website/Tungsten-Fabric-Architecture.html > > Cheers! 
> > On Sat, Jul 23, 2022 at 1:14 AM Ignazio Cassano > wrote: > >> Hello All, >> is there any sdn solution for openstack multi site sdn ? >> I read something about network to network api extension for neutron >> presented in an openstack summit but it seems no more adopted. >> Any help, please? >> We have 3 separated openstack installations and I wonder if there is some >> solution for controlling network (vxlan to vxlan connection, sec groups >> airchestration and so on) >> Openstack cascading seems only a dream. >> Ignazio >> > > > -- > > Alvaro Soto > > *Note: My work hours may not be your work hours. Please do not feel the > need to respond during a time that is not convenient for you.* > ---------------------------------------------------------- > Great people talk about ideas, > ordinary people talk about things, > small people talk... about other people. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From gk.coltech at gmail.com Sat Jul 23 17:18:55 2022 From: gk.coltech at gmail.com (Tan Tran Trong) Date: Sun, 24 Jul 2022 00:18:55 +0700 Subject: [kolla] RabbitMQ High Availability In-Reply-To: <534360793.921334.1658505218178@mail.yahoo.com> References: <534360793.921334.1658505218178@mail.yahoo.com> Message-ID: Hello, Thank you guys for your links. I actually moved from no durable queues + no HA policy to durable queues + the ha-all policy. The result is still the same. I tried tuning it using https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit but I am still missing something, I guess. @Albert: Have you tested the case where you shut down 1 controller -> things work -> power it on -> shut down another controller? In my case the cluster is not stable after that. And by "work fine" do you mean you don't have to do anything (restart rabbitmq, restart openstack services) when 1 controller is down? 
I know it sounds silly, but we ended up using only the internal keepalived VIP for all transport settings, which removes load balancing but keeps my cluster stable when 1 node is down; I really don't know if it will cause trouble later when the cluster grows. Regards, Tan On Fri, Jul 22, 2022 at 10:53 PM Albert Braden wrote: > The default RMQ config is broken. You're on the right track with setting > durable_queues, but there's more to do. I'm running kolla Train with > mirrored/durable queues and my clusters work fine with a controller down. > One issue that we faced after setting durable was that we weren't running > redis, and then when we tried to run it the network was blocking the port, > but eventually we got it working. > > Some have recommended not mirroring queues; I haven't tried that. If > anyone has successfully setup HA without mirrored queues, I'd be interested > to hear about how you did it. > > Here are some helpful links: > > https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit > > https://lists.openstack.org/pipermail/openstack-discuss/2021-November/026074.html > > https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016362.html > > https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016524.html > https://review.opendev.org/c/openstack/kolla-ansible/+/822191 > https://review.opendev.org/c/openstack/kolla-ansible/+/824994 > On Thursday, July 21, 2022, 02:42:42 PM EDT, Tan Tran Trong < > gk.coltech at gmail.com> wrote: > > > Hello, > I'm trying to figure out how to configure RabbitMQ to make it high > available. I have 3 controller nodes and 2 compute nodes, deployed with > kolla with mostly default configuration. The RabbitMQ set to ha-all for all > queues on all nodes, amqp_durable_queues = True > My problem is when I shutdown 1 controller node (or 1 RabbitMQ container) > (master or slave) the whole cluster becomes unstable. 
Some instances can > not be created, it is stuck on Scheduling, Block Device Mapping, the > volumes not shown or are stuck on creating, the compute node reported dead > randomly,... > I'm looking for documentation to know how Openstack using RabbitMQ, > Openstack behavior when RabbitMQ node down and way to make RabbitMQ HA in a > stable way. Do you have any recommendation? > > TIA, > Tan -------------- next part -------------- An HTML attachment was scrubbed... URL: From satish.txt at gmail.com Sat Jul 23 18:20:17 2022 From: satish.txt at gmail.com (Satish Patel) Date: Sat, 23 Jul 2022 14:20:17 -0400 Subject: [kolla] RabbitMQ High Availability In-Reply-To: References: Message-ID: Something is wrong with your OpenStack version or RabbitMQ version. Make sure you are not dealing with a bug. I have a 3-node cluster and it always survives if I shut down one of the controller nodes. It works perfectly fine without issue, even with HA or non-HA config. What versions of OpenStack and RabbitMQ are you running? Sent from my iPhone > On Jul 23, 2022, at 1:29 PM, Tan Tran Trong wrote: > > ? > Hello, > Thank you guys for your links. Actually I moved from no durable queues + no HA policy to durable queues + ha-all policy. The result is still the same. Tried to turning using https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit but still missing something I guess. > @Albert: Have you tested the case when you shutdown 1 controller -> thing works -> power it on -> shutdown another controller? In my case the cluster is not stable after that. > And by "work fine" you mean you don't have to do anything (restart rabbitmq, restart openstack services) when 1 controller is down, do you? I know it sounds silly, but we end up using internal keepalived VIP only for all transport settings which remove loadbalancing but keep my cluster stable when 1 node down, really don't know if it will cause trouble later when the cluster grows. 
> > Regards, > Tan > > >> On Fri, Jul 22, 2022 at 10:53 PM Albert Braden wrote: >> The default RMQ config is broken. You're on the right track with setting durable_queues, but there's more to do. I'm running kolla Train with mirrored/durable queues and my clusters work fine with a controller down. One issue that we faced after setting durable was that we weren't running redis, and then when we tried to run it the network was blocking the port, but eventually we got it working. >> >> Some have recommended not mirroring queues; I haven't tried that. If anyone has successfully setup HA without mirrored queues, I'd be interested to hear about how you did it. >> >> Here are some helpful links: >> >> https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit >> https://lists.openstack.org/pipermail/openstack-discuss/2021-November/026074.html >> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016362.html >> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016524.html >> https://review.opendev.org/c/openstack/kolla-ansible/+/822191 >> https://review.opendev.org/c/openstack/kolla-ansible/+/824994 >> On Thursday, July 21, 2022, 02:42:42 PM EDT, Tan Tran Trong wrote: >> >> >> Hello, >> I'm trying to figure out how to configure RabbitMQ to make it high available. I have 3 controller nodes and 2 compute nodes, deployed with kolla with mostly default configuration. The RabbitMQ set to ha-all for all queues on all nodes, amqp_durable_queues = True >> My problem is when I shutdown 1 controller node (or 1 RabbitMQ container) (master or slave) the whole cluster becomes unstable. Some instances can not be created, it is stuck on Scheduling, Block Device Mapping, the volumes not shown or are stuck on creating, the compute node reported dead randomly,... >> I'm looking for documentation to know how Openstack using RabbitMQ, Openstack behavior when RabbitMQ node down and way to make RabbitMQ HA in a stable way. Do you have any recommendation? 
>> >> TIA, >> Tan -------------- next part -------------- An HTML attachment was scrubbed... URL: From gk.coltech at gmail.com Mon Jul 25 03:43:05 2022 From: gk.coltech at gmail.com (Tan Tran Trong) Date: Mon, 25 Jul 2022 10:43:05 +0700 Subject: [kolla] RabbitMQ High Availability In-Reply-To: References: Message-ID: Hi, My RMQ version is: 3.8.32 I deployed the xena version using kolla-ansible on Ubuntu 20.04. Right now my cluster running no ha + amqp_durable_queues = False, when I shut 1 controller and create instance I got the error on nova-scheduler: 2022-07-25 10:36:41.496 688 ERROR root oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect to AMQP server on x.x.x.x:5672 after inf tries: Queue.declare: (404) NOT_FOUND - home node 'rabbit at control02' of durable queue 'scheduler' in vhost '/' is down or inaccessible Regards, Tan On Sun, Jul 24, 2022 at 1:20 AM Satish Patel wrote: > Something is wrong with your version or rabbitMQ version. Make sure you > are not dealing with bug. I have 3 node cluster and it always survive if I > shutdown one of controller node. It works prefect fine without issue. Even > with HA or nonHA config. > > What version of openstack and rabbitMQ are you running ? > > Sent from my iPhone > > On Jul 23, 2022, at 1:29 PM, Tan Tran Trong wrote: > > ? > Hello, > Thank you guys for your links. Actually I moved from no durable queues + > no HA policy to durable queues + ha-all policy. The result is still the > same. Tried to turning using > https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit but > still missing something I guess. > @Albert: Have you tested the case when you shutdown 1 controller -> thing > works -> power it on -> shutdown another controller? In my case the cluster > is not stable after that. > And by "work fine" you mean you don't have to do anything (restart > rabbitmq, restart openstack services) when 1 controller is down, do you? 
I > know it sounds silly, but we end up using internal keepalived VIP only for > all transport settings which remove loadbalancing but keep my cluster > stable when 1 node down, really don't know if it will cause trouble later > when the cluster grows. > > Regards, > Tan > > > On Fri, Jul 22, 2022 at 10:53 PM Albert Braden wrote: > >> The default RMQ config is broken. You're on the right track with setting >> durable_queues, but there's more to do. I'm running kolla Train with >> mirrored/durable queues and my clusters work fine with a controller down. >> One issue that we faced after setting durable was that we weren't running >> redis, and then when we tried to run it the network was blocking the port, >> but eventually we got it working. >> >> Some have recommended not mirroring queues; I haven't tried that. If >> anyone has successfully setup HA without mirrored queues, I'd be interested >> to hear about how you did it. >> >> Here are some helpful links: >> >> https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit >> >> https://lists.openstack.org/pipermail/openstack-discuss/2021-November/026074.html >> >> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016362.html >> >> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016524.html >> https://review.opendev.org/c/openstack/kolla-ansible/+/822191 >> https://review.opendev.org/c/openstack/kolla-ansible/+/824994 >> On Thursday, July 21, 2022, 02:42:42 PM EDT, Tan Tran Trong < >> gk.coltech at gmail.com> wrote: >> >> >> Hello, >> I'm trying to figure out how to configure RabbitMQ to make it high >> available. I have 3 controller nodes and 2 compute nodes, deployed with >> kolla with mostly default configuration. The RabbitMQ set to ha-all for all >> queues on all nodes, amqp_durable_queues = True >> My problem is when I shutdown 1 controller node (or 1 RabbitMQ container) >> (master or slave) the whole cluster becomes unstable. 
Some instances can >> not be created, it is stuck on Scheduling, Block Device Mapping, the >> volumes not shown or are stuck on creating, the compute node reported dead >> randomly,... >> I'm looking for documentation to know how Openstack using RabbitMQ, >> Openstack behavior when RabbitMQ node down and way to make RabbitMQ HA in a >> stable way. Do you have any recommendation? >> >> TIA, >> Tan >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From adivya1.singh at gmail.com Mon Jul 25 05:00:31 2022 From: adivya1.singh at gmail.com (Adivya Singh) Date: Mon, 25 Jul 2022 10:30:31 +0530 Subject: Regarding Application Credential in Open Stack XENA In-Reply-To: References: Message-ID: hi Team, Any feedback on this, I am also exploring from my side Regards Adivya Singh On Sat, Jul 23, 2022 at 12:38 AM Adivya Singh wrote: > hi Team, > > There is a use case, where a user wants to Create CI/CD pipeline using > Application Credentials of Open Stack, Henceforth we tried Create an > application with a role with a secret Key. > > but it is failing with the below output, The user wants to push the qcow2 > image from his system to Open Stack. > > Using auth plugin: v3 application credential > /usr/lib/python3.7/socket.py:660: ResourceWarning: unclosed > self._sock = None > ResourceWarning: Enable tracemalloc to get the object allocation traceback > Not Found (HTTP 404) (Request-ID: req-217c9f28-d6a6-4649-adf9-2d9a4b965b3f) > END return value: 1 > > it failed with a below result > > Regards > > Adivya Singh > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From obondarev at mirantis.com Mon Jul 25 11:47:31 2022 From: obondarev at mirantis.com (Oleg Bondarev) Date: Mon, 25 Jul 2022 15:47:31 +0400 Subject: [neutron] Bug Deputy Report Jul 18 - Jul 24 Message-ID: Hello Neutron Team, Bug report for the week of Jul 18 is below: Critical: ---------- - https://bugs.launchpad.net/neutron/+bug/1981963 - Some jobs broken post pyroute2 update to 0.7.1 - In progress - https://review.opendev.org/c/openstack/requirements/+/850295 is merged - https://bugs.launchpad.net/neutron/+bug/1982206 - stable: Neutron unit tests timeout on stable/ussuri and stable/victoria (perhaps wallaby also) - Fix released - https://review.opendev.org/c/openstack/neutron/+/850670 High: ------- - https://bugs.launchpad.net/neutron/+bug/1982367 - Race condition between DHCP and L2 agent during evacuate vms may cause VM ends up in ERROR state - In Progress - Assigned to Slawek Medium: ------------ - https://bugs.launchpad.net/neutron/+bug/1982569 - [OVN] metadata does not work when using neutron-dhcp-agent for baremetal ports - Confirmed - Unasigned - https://bugs.launchpad.net/neutron/+bug/1982110 - [taap-as-a-service] Project requires "webtest" library for testing - Fix committed - https://review.opendev.org/c/openstack/neutron-vpnaas/+/850335 RFEs: --------- - https://bugs.launchpad.net/neutron/+bug/1982287 - [rfe][ovn] Support address group for ovn driver - Approved on drivers meeting - Assigned to liushy - https://bugs.launchpad.net/neutron/+bug/1982541 - [RFE] OVN E/W routing for external (baremetal) VLAN ports - New - Assigned to Michal Nasiadka Undecided: --------------- - https://bugs.launchpad.net/neutron/+bug/1982373 - nova/neutron ignore and overwrite custom device owner fields - Opinion - most probably it's in Nova scope Thanks, Oleg -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lokendrarathour at gmail.com Mon Jul 25 09:42:11 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Mon, 25 Jul 2022 15:12:11 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Brendan, Apologies for this delay, i had to redo the setup to reach this point, and also this time just to eliminate my Doubt i removed SSL for overcloud. Now I am only using DNS Server. In this case also I am getting the same error. | 0:13:20.198877 | 1.86s 2022-07-25 14:37:29.657118 | 525400a7-0932-2ed1-d313-000000007193 | TASK | Create identity internal endpoint 2022-07-25 14:37:31.995131 | 525400a7-0932-2ed1-d313-000000007193 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: http://[fd00:fd00:fd00:9900::a0]:5000/v3/services, The request you have made requires authentication."} To answer your question please note: "OS_CLOUD=overcloud openstack endpoint list" [root at GGNLABPM4 ~]# ssh stack at 10.0.1.29 stack at 10.0.1.29's password: Activate the web console with: systemctl enable --now cockpit.socket Last login: Mon Jul 25 14:38:44 2022 from 10.0.1.4 [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ | ID | Region | Service Name | Service Type | Enabled | Interface | URL | +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ | 1ecd328b5ea1426bb411d157b8339dd2 | regionOne | keystone | identity | True | public | 
http://[fd00:fd00:fd00:9900::a0]:5000 | | 518cfa0f2ece43b684710006c9fa5b25 | regionOne | keystone | identity | True | admin | http://30.30.30.181:35357 | | 8cda413052c24718b073578bb497f483 | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::a0]:5000 | +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ [stack at undercloud ~]$ it is giving us only keystone endpoints. Also note that I am trying to deploy the end to end setup with FQDN only. and in this case as well I am facing the same issue as old. thanks once again for your inputs. -Lokendra On Wed, Jul 20, 2022 at 3:07 PM Brendan Shephard wrote: > Hey, > > I think it's weird that you got a response at all when you run the > openstack endpoint list, since you said haproxy isn't running. So there > should be nothing serving that endpoint. > > I noticed you have the stackrc file sourced. Try it again without that > file sourced, so: > $ su - stack > $ OS_CLOUD=overcloud openstack endpoint list > > I would suspect that nothing should be responding. It could be the stackrc > file causing issues with some of the environment variables. If the above > command doesn't return anything, then my suggestion would be to re-run the > deployment like this: > > $ su - stack > $ export OS_CLOUD=undercloud > # Then run your deployment script again > $ bash overcloud_deploy.sh > > The OS_CLOUD variable tells the openstackclient to lookup the details > about that cloud from your clouds.yaml file. Which will be located in > /home/stack/.config/openstack/clouds.yaml. > > This method is preferable to the sourcing of RC files. > > Reference: > > https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config.html > > Regarding the HAProxy warnings. I don't think they should be fatal. afaik, > HAProxy should still be starting. 
If it's not, there might be another error > that you will need to look for in the log files under > /var/log/containers/haproxy/ > > I wasn't able to reproduce that warning by following the documentation for > enabling TLS though. So it seems like an odd error to be getting. > > Brendan Shephard > > Software Engineer > > Red Hat APAC > > 193 N Quay > > Brisbane City QLD 4000 > @RedHat Red Hat > Red Hat > > > > > > On Wed, Jul 20, 2022 at 7:02 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Brendan / Team, >> Any lead for the issue raised? >> >> -Lokendra >> >> >> >> On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Brendan,, >>> Thanks for the inputs. >>> when i run the command as you suggested I get this: >>> >>> (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack >>> endpoint list >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>> | ID | Region | Service Name | Service >>> Type | Enabled | Interface | URL | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>> | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | identity >>> | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | >>> | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | identity >>> | True | admin | http://30.30.30.173:35357 | >>> | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | identity >>> | True | public | https://overcloud-hsc.com:13000 | >>> >>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>> (undercloud) [stack at undercloud ~]$ >>> >>> >>> On the other note that i notices was as below: >>> >>> - HAproxy container is not running. 
>>> - [root at overcloud-controller-2 stdouts]# podman ps -a | grep >>> haproxy >>> e91dbde042db >>> undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo >>> 24 hours ago Exited (1) Less than a >>> second ago container-puppet-haproxy\ >>> - Checking logs: >>> - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= >>> 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] >>> 2022-07-19T08:47:00.496323705+05:30 stderr F + . >>> kolla_extend_start >>> 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running >>> command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper >>> ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; >>> else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' >>> 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: >>> 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec >>> /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec >>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' >>> 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' >>> -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec >>> /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec >>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi >>> 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind >>> fd00:fd00:fd00:9900::81:13776' : >>> 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. 
>>> automatically2022-07-19T08:47:00.513967576+05:30 stderr F >>> [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind >>> fd00:fd00:fd00:9900::81:13292' : >>> 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind >>> fd00:fd00:fd00:9900::81:13004' : >>> 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind >>> fd00:fd00:fd00:9900::81:13005' : >>> 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind >>> fd00:fd00:fd00:2000::326:443' : >>> - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library >>> will use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind >>> fd00:fd00:fd00:9900::81:13000' : >>> 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. 
>>> 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind >>> fd00:fd00:fd00:9900::81:13696' : >>> 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind >>> fd00:fd00:fd00:9900::81:13080' : >>> 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind >>> fd00:fd00:fd00:9900::81:13774' : >>> 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind >>> fd00:fd00:fd00:9900::81:13778' : >>> 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. 
>>> 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] 199/084700 >>> (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind >>> fd00:fd00:fd00:9900::81:13808' : >>> 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load >>> default 1024 bits DH parameter for certificate >>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>> 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library will >>> use an automatically generated DH parameter. >>> 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] 199/084700 >>> (7) : Setting tune.ssl.default-dh-param to 1024 by default, if your >>> workload permits it you should set it to at least 2048. Please set a value >>> >= 1024 to make this warning disappear. >>> - pcs status also show that proxy is down for the controller with >>> VIP: >>> - Failed Resource Actions: >>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 >>> 'error' (1): call=139, status='complete', exitreason='podman failed to >>> launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', >>> queued=0ms, exec=1222ms >>> * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 >>> 'error' (1): call=191, status='complete', exitreason='podman failed to >>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', >>> queued=0ms, exec=1171ms >>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 >>> 'error' (1): call=193, status='complete', exitreason='podman failed to >>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', >>> queued=0ms, exec=1256ms >>> >>> do let me know in case we need anything more around it. >>> thanks once again for the support. >>> -Lokendra >>> >>> On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard >>> wrote: >>> >>>> Hey, >>>> >>>> Doesn't look like there is anything wrong with the certificate there. >>>> You would be getting a TLS error if that was the problem. >>>> >>>> What does your clouds.yaml file look like now? 
What happens if you run >>>> this command from the Undercloud node: >>>> $ OS_CLOUD=overcloud openstack endpoint list >>>> >>>> Do you get the same error? >>>> >>>> Brendan Shephard >>>> >>>> Software Engineer >>>> >>>> Red Hat APAC >>>> >>>> 193 N Quay >>>> >>>> Brisbane City QLD 4000 >>>> @RedHat Red Hat >>>> Red Hat >>>> >>>> >>>> >>>> >>>> >>>> On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Swogat and Vikarna, >>>>> We have tried adding the DNS entry for the overcloud domain. we are >>>>> getting the same error: >>>>> >>>>> 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | >>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>> undercloud | 0:11:18.785769 | 2.16s >>>>> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | >>>>> TASK | Create identity internal endpoint >>>>> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | >>>>> FATAL | Create identity internal endpoint | undercloud | >>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>> request you have made requires authentication.", "response": >>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>> services: Client Error for url: >>>>> https://overcloud-hsc.com:13000/v3/services, The request you have >>>>> made requires authentication."} >>>>> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | >>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>>>> undercloud | 0:11:21.074605 | 2. 
>>>>> >>>>> >>>>> Certificate configs: >>>>> >>>>> [stack at undercloud oc-domain-name]$ cat server.csr.cnf >>>>> [req] >>>>> default_bits = 2048 >>>>> prompt = no >>>>> default_md = sha256 >>>>> distinguished_name = dn >>>>> [dn] >>>>> C=IN >>>>> ST=UTTAR PRADESH >>>>> L=NOIDA >>>>> O=HSC >>>>> OU=HSC >>>>> emailAddress=demo at demo.com >>>>> CN=overcloud-hsc.com >>>>> [stack at undercloud oc-domain-name]$ cat v3.ext >>>>> authorityKeyIdentifier=keyid,issuer >>>>> basicConstraints=CA:FALSE >>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>> dataEncipherment >>>>> subjectAltName = @alt_names >>>>> [alt_names] >>>>> DNS.1=overcloud-hsc.com >>>>> [stack at undercloud oc-domain-name]$ >>>>> >>>>> the difference we see from others is that we are using self-signed >>>>> certificates. >>>>> >>>>> please let me know in case we need to check something else. Somehow >>>>> this issue remains stuck. >>>>> >>>>> >>>>> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan < >>>>> swogatpradhan22 at gmail.com> wrote: >>>>> >>>>>> I was facing a similar kind of issue. >>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 >>>>>> Here is the solution that helped me fix it. >>>>>> Also make sure the cn that you will use is reachable from undercloud >>>>>> (maybe) script should take care of it. >>>>>> >>>>>> Also please follow Mr. Tathe's mail to add the cn first. >>>>>> >>>>>> With regards >>>>>> Swogat Pradhan >>>>>> >>>>>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe >>>>>> wrote: >>>>>> >>>>>>> Hi Lokendra, >>>>>>> >>>>>>> The CN field is missing. Can you add that and generate the >>>>>>> certificate again. >>>>>>> >>>>>>> CN=ipaddress >>>>>>> >>>>>>> Also add dns.1=ipaddress under alt_names for precaution. >>>>>>> >>>>>>> Vikarna >>>>>>> >>>>>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, < >>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>> >>>>>>>> HI Vikarna, >>>>>>>> Thanks for the inputs. >>>>>>>> I am note able to access any tabs in GUI. 
>>>>>>>> [image: image.png] >>>>>>>> >>>>>>>> to re-state, we are failing at the time of deployment at step4 : >>>>>>>> >>>>>>>> >>>>>>>> PLAY [External deployment step 4] >>>>>>>> ********************************************** >>>>>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>>>> TASK | External deployment step 4 >>>>>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | >>>>>>>> OK | External deployment step 4 | undercloud -> localhost | result={ >>>>>>>> "changed": false, >>>>>>>> "msg": "Use --start-at-task 'External deployment step 4' to >>>>>>>> resume from this task" >>>>>>>> } >>>>>>>> [WARNING]: ('undercloud -> localhost', >>>>>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>>>>> missing from stats >>>>>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | >>>>>>>> TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | >>>>>>>> INCLUDED | >>>>>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>>>>> | undercloud >>>>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>>> TASK | Clean up legacy Cinder keystone catalog entries >>>>>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:24.204562 | 2.48s >>>>>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>>> OK | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 
| >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:26.122584 | 4.40s >>>>>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | >>>>>>>> TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | >>>>>>>> 0:11:26.124296 | 4.40s >>>>>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | >>>>>>>> TASK | Manage Keystone resources for OpenStack services >>>>>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | >>>>>>>> TIMING | Manage Keystone resources for OpenStack services | undercloud >>>>>>>> | 0:11:26.169842 | 0.03s >>>>>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | >>>>>>>> TASK | Gather variables for each operating system >>>>>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | >>>>>>>> TIMING | tripleo_keystone_resources : Gather variables for each >>>>>>>> operating system | undercloud | 0:11:26.253383 | 0.04s >>>>>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | >>>>>>>> TASK | Create Keystone Admin resources >>>>>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | >>>>>>>> TIMING | tripleo_keystone_resources : Create Keystone Admin resources | >>>>>>>> undercloud | 0:11:26.299608 | 0.03s >>>>>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | >>>>>>>> INCLUDED | >>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>>>>> undercloud >>>>>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>>> TASK | Create default domain >>>>>>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>>> OK | Create default domain | undercloud >>>>>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | >>>>>>>> TIMING | tripleo_keystone_resources : Create default domain | >>>>>>>> undercloud | 0:11:28.437360 | 2.09s >>>>>>>> 2022-07-13 21:35:29.361643 | 
525400ae-089b-870a-fab6-0000000072ae | >>>>>>>> TASK | Create admin and service projects >>>>>>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | >>>>>>>> TIMING | tripleo_keystone_resources : Create admin and service projects >>>>>>>> | undercloud | 0:11:28.483468 | 0.03s >>>>>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | >>>>>>>> INCLUDED | >>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>>>>> undercloud >>>>>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> TASK | Async creation of Keystone project >>>>>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>> project | undercloud | 0:11:29.238078 | 0.72s >>>>>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> CHANGED | Async creation of Keystone project | undercloud | item=service >>>>>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>> project | undercloud | 0:11:29.586587 | 1.06s >>>>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | >>>>>>>> TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>> project | undercloud | 0:11:29.587916 | 1.07s >>>>>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> TASK | Check Keystone project status >>>>>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> WAITING | Check Keystone project status | undercloud | 30 retries left >>>>>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> OK | Check Keystone project status | undercloud | item=admin >>>>>>>> 
2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>> undercloud | 0:11:35.260666 | 5.66s >>>>>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> OK | Check Keystone project status | undercloud | item=service >>>>>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>> undercloud | 0:11:35.494729 | 5.89s >>>>>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | >>>>>>>> TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>> undercloud | 0:11:35.498771 | 5.89s >>>>>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>>> TASK | Create admin role >>>>>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>>> OK | Create admin role | undercloud >>>>>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | >>>>>>>> TIMING | tripleo_keystone_resources : Create admin role | undercloud | >>>>>>>> 0:11:37.725949 | 2.20s >>>>>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>>> TASK | Create _member_ role >>>>>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>>> SKIPPED | Create _member_ role | undercloud >>>>>>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | >>>>>>>> TIMING | tripleo_keystone_resources : Create _member_ role | undercloud >>>>>>>> | 0:11:37.783369 | 0.04s >>>>>>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>>> TASK | Create admin user >>>>>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>>> CHANGED | Create admin user | undercloud >>>>>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | >>>>>>>> TIMING | tripleo_keystone_resources : Create admin user | undercloud | 
>>>>>>>> 0:11:41.145472 | 3.34s >>>>>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>>> TASK | Assign admin role to admin project for admin user >>>>>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>>> OK | Assign admin role to admin project for admin user | undercloud >>>>>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | >>>>>>>> TIMING | tripleo_keystone_resources : Assign admin role to admin >>>>>>>> project for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>>> TASK | Assign _member_ role to admin project for admin user >>>>>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>>> SKIPPED | Assign _member_ role to admin project for admin user | >>>>>>>> undercloud >>>>>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | >>>>>>>> TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>>> TASK | Create identity service >>>>>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>>> OK | Create identity service | undercloud >>>>>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | >>>>>>>> TIMING | tripleo_keystone_resources : Create identity service | >>>>>>>> undercloud | 0:11:46.022362 | 1.66s >>>>>>>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>>> TASK | Create identity public endpoint >>>>>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>>> OK | Create identity public endpoint | undercloud >>>>>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | >>>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>>> undercloud | 0:11:48.233349 | 2.19s 
>>>>>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>>>> TASK | Create identity internal endpoint >>>>>>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>> request you have made requires authentication.", "response": >>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>> The request you have made requires authentication."} >>>>>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | >>>>>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint >>>>>>>> | undercloud | 0:11:50.660654 | 2.41s >>>>>>>> >>>>>>>> PLAY RECAP >>>>>>>> ********************************************************************* >>>>>>>> localhost : ok=1 changed=0 unreachable=0 >>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>> undercloud : ok=39 changed=7 unreachable=0 >>>>>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>>>>> >>>>>>>> Also : >>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>>>>> [req] >>>>>>>> default_bits = 2048 >>>>>>>> prompt = no >>>>>>>> default_md = sha256 >>>>>>>> distinguished_name = dn >>>>>>>> [dn] >>>>>>>> C=IN >>>>>>>> ST=UTTAR PRADESH 
>>>>>>>> L=NOIDA >>>>>>>> O=HSC >>>>>>>> OU=HSC >>>>>>>> emailAddress=demo at demo.com >>>>>>>> >>>>>>>> v3.ext: >>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>>>>> authorityKeyIdentifier=keyid,issuer >>>>>>>> basicConstraints=CA:FALSE >>>>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>>>> dataEncipherment >>>>>>>> subjectAltName = @alt_names >>>>>>>> [alt_names] >>>>>>>> IP.1=fd00:fd00:fd00:9900::81 >>>>>>>> >>>>>>>> Using these files we create other certificates. >>>>>>>> Please check and let me know in case we need anything else. >>>>>>>> >>>>>>>> >>>>>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe < >>>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Lokendra, >>>>>>>>> >>>>>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>>>>> share the openssl.cnf or server.cnf. >>>>>>>>> >>>>>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Team, >>>>>>>>>> Any input on this case raised. 
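Since the public endpoint in this deployment is reached by a literal IPv6 address, the certificate has to carry that address as an IP-type SAN entry, which is what the `IP.1=` line in the v3.ext above produces. The following is a hedged offline check (the filenames vip.key/vip.crt are made up for illustration, and `-addext` requires OpenSSL 1.1.1+) that a certificate would satisfy a client connecting by IP:

```shell
# Issue a throwaway self-signed cert whose SAN carries the public VIP
# as an IP entry, mirroring the IP.1= line in v3.ext above.
openssl req -x509 -newkey rsa:2048 -nodes -keyout vip.key -out vip.crt \
    -days 365 -subj "/CN=overcloud-public-vip" \
    -addext "subjectAltName = IP:fd00:fd00:fd00:9900::81"

# Offline equivalent of the check a client performs when it connects to
# https://[fd00:fd00:fd00:9900::81]: verify against the IP, not a name.
openssl verify -CAfile vip.crt -verify_ip "fd00:fd00:fd00:9900::81" vip.crt
```

Running the same `-verify_ip` check against the certificate actually deployed on the overcloud VIP would tell you, before any redeploy, whether clients will accept it.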
>>>>>>>>>> >>>>>>>>>> Thanks, >>>>>>>>>> Lokendra >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Shephard/Swogat, >>>>>>>>>>> I tried changing the setting as suggested and it looks like it >>>>>>>>>>> has failed at step 4 with error: >>>>>>>>>>> >>>>>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | >>>>>>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>>>>>> undercloud | 0:24:47.736198 | 2.21s >>>>>>>>>>> 2022-07-12 21:31:32.185594 | >>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity >>>>>>>>>>> internal endpoint >>>>>>>>>>> 2022-07-12 21:31:34.468996 | >>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity >>>>>>>>>>> internal endpoint | undercloud | error={"changed": false, "extra_data": >>>>>>>>>>> {"data": null, "details": "The request you have made requires >>>>>>>>>>> authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The >>>>>>>>>>> request you have made requires >>>>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>>>>> The request you have made requires authentication."} >>>>>>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> Checking further the endpoint list: >>>>>>>>>>> I see only one endpoint for keystone is gettin created. 
>>>>>>>>>>> >>>>>>>>>>> DeprecationWarning >>>>>>>>>>> >>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>> | ID | Region | Service Name | >>>>>>>>>>> Service Type | Enabled | Interface | URL >>>>>>>>>>> | >>>>>>>>>>> >>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>>>>>>>> identity | True | admin | http://30.30.30.173:35357 >>>>>>>>>>> | >>>>>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>>>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>>>>>>>> | >>>>>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>>>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>>>>>>>> | >>>>>>>>>>> >>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> it looks like something related to the SSL, we have also >>>>>>>>>>> verified that the GUI login screen shows that Certificates are applied. >>>>>>>>>>> exploring more in logs, meanwhile any suggestions or know >>>>>>>>>>> observation would be of great help. >>>>>>>>>>> thanks again for the support. >>>>>>>>>>> >>>>>>>>>>> Best Regards, >>>>>>>>>>> Lokendra >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> I had faced a similar kind of issue, for ip based setup you >>>>>>>>>>>> need to specify the domain name as the ip that you are going to use, this >>>>>>>>>>>> error is showing up because the ssl is ip based but the fqdns seems to be >>>>>>>>>>>> undercloud.com or overcloud.example.com. 
>>>>>>>>>>>> I think for undercloud you can change the undercloud.conf. >>>>>>>>>>>> >>>>>>>>>>>> And will it work if we specify clouddomain parameter to the IP >>>>>>>>>>>> address for overcloud? because it seems he has not specified the >>>>>>>>>>>> clouddomain parameter and overcloud.example.com is the default >>>>>>>>>>>> domain for overcloud.example.com. >>>>>>>>>>>> >>>>>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> What is the domain name you have specified in the >>>>>>>>>>>>> undercloud.conf file? >>>>>>>>>>>>> And what is the fqdn name used for the generation of the SSL >>>>>>>>>>>>> cert? >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>> We were trying to install overcloud with SSL enabled for >>>>>>>>>>>>>> which the UC is installed, but OC install is getting failed at step 4: >>>>>>>>>>>>>> >>>>>>>>>>>>>> ERROR >>>>>>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): >>>>>>>>>>>>>> Max retries exceeded with url: / (Caused by >>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", >>>>>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>> 2022-07-08 17:03:23.606739 | >>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder >>>>>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3', >>>>>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var": >>>>>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false, >>>>>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3", >>>>>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available >>>>>>>>>>>>>> identity versions 
when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. >>>>>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>>>>> server_hostname)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>> retries exceeded with url: / (Caused by 
>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, 
>>>>>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>>>>> in _send_request\n raise >>>>>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>>>>> run_globals)\n File >>>>>>>>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>> line 185, in \n File >>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>> line 181, in main\n File >>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>> line 141, in run\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>>>>> File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>>>>>> get_endpoint\n 
return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. >>>>>>>>>>>>>> %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not >>>>>>>>>>>>>> find versioned identity endpoints when attempting to authenticate. Please >>>>>>>>>>>>>> check that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": >>>>>>>>>>>>>> "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>>>>> >>>>>>>>>>>>>> PLAY RECAP >>>>>>>>>>>>>> ********************************************************************* >>>>>>>>>>>>>> localhost : ok=0 changed=0 >>>>>>>>>>>>>> unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 >>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 >>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 >>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 >>>>>>>>>>>>>> unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>>>>> undercloud : ok=28 changed=7 >>>>>>>>>>>>>> unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>>>>> 2022-07-08 17:03:23.647270 | >>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information >>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>> 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>> 
Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>> >>>>>>>>>>>>>> >>>>>>>>>>>>>> in the deploy.sh: >>>>>>>>>>>>>> >>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>> --networks-file >>>>>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>>>>> --baremetal-deployment >>>>>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>>>>> --network-config \ >>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>>>>> \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>> -e >>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml 
>>>>>>>>>>>>>> >>>>>>>>>>>>>> Additional lines as highlighted in yellow were passed with >>>>>>>>>>>>>> modifications: >>>>>>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>>>>> >>>>>>>>>>>>>> # >>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>> # This file was created automatically by the sample >>>>>>>>>>>>>> environment >>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>>> update it. >>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the file >>>>>>>>>>>>>> instead >>>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>>> # >>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>>>>> # description: | >>>>>>>>>>>>>> #   Use this environment to pass in certificates for SSL >>>>>>>>>>>>>> deployments. >>>>>>>>>>>>>> #   For these values to take effect, one of the >>>>>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>>>>> #   environments must also be used. >>>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>>>>>> # Type: boolean >>>>>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>>>>> >>>>>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>>>>> services in the public network. >>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>>>>> >>>>>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>>>>> format. 
>>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>>   SSLRootCertificate: |
>>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>>     ----*** CERTIFICATE LINES TRIMMED **
>>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   SSLCertificate: |
>>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>>     ----*** CERTIFICATE LINES TRIMMED **
>>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   # The content of an SSL intermediate CA certificate in PEM format.
>>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>>   SSLIntermediateCertificate: ''
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   # The content of the SSL Key in PEM format.
>>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>>   SSLKey: |
>>>>>>>>>>>>>>     -----BEGIN PRIVATE KEY-----
>>>>>>>>>>>>>>     ----*** CERTIFICATE LINES TRIMMED **
>>>>>>>>>>>>>>     -----END PRIVATE KEY-----
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   # ******************************************************
>>>>>>>>>>>>>>   # Static parameters - these are values that must be
>>>>>>>>>>>>>>   # included in the environment but should not be changed.
>>>>>>>>>>>>>>   # ******************************************************
>>>>>>>>>>>>>>   # The filepath of the certificate as it will be stored in the controller.
>>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>>   DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>   # *********************
>>>>>>>>>>>>>>   # End static parameters
>>>>>>>>>>>>>>   # *********************
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> inject-trust-anchor.yaml:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> # *******************************************************************
>>>>>>>>>>>>>> # This file was created automatically by the sample environment
>>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to update it.
>>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the file instead
>>>>>>>>>>>>>> # of the original, if any customizations are needed.
>>>>>>>>>>>>>> # *******************************************************************
>>>>>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes
>>>>>>>>>>>>>> # description: |
>>>>>>>>>>>>>> #   When using an SSL certificate signed by a CA that is not in the default
>>>>>>>>>>>>>> #   list of CAs, this environment allows adding a custom CA certificate to
>>>>>>>>>>>>>> #   the overcloud nodes.
>>>>>>>>>>>>>> parameter_defaults:
>>>>>>>>>>>>>>   # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side.
>>>>>>>>>>>>>>   # Mandatory. This parameter must be set by the user.
>>>>>>>>>>>>>>   # Type: string
>>>>>>>>>>>>>>   SSLRootCertificate: |
>>>>>>>>>>>>>>     -----BEGIN CERTIFICATE-----
>>>>>>>>>>>>>>     ----*** CERTIFICATE LINES TRIMMED **
>>>>>>>>>>>>>>     -----END CERTIFICATE-----
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> resource_registry:
>>>>>>>>>>>>>>   OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> These files were created by following the procedure in:
>>>>>>>>>>>>>> Deploying with SSL - TripleO 3.0.0 documentation (openstack.org)
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a *self-signed,
>>>>>>>>>>>>>> IP-based certificate, without DNS*.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Any idea around this error would be of great help.
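[Editorial aside: the self-signed, IP-based certificate the poster describes can be generated with a single openssl invocation. This is only a sketch, not part of the original thread; 192.0.2.10 is a placeholder for the overcloud public VIP, and the `-addext` flag assumes OpenSSL 1.1.1 or newer.]

```shell
# Generate a self-signed certificate bound to an IP address (no DNS name).
# Replace 192.0.2.10 with the overcloud public VIP.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout overcloud-cacert.key -out overcloud-cacert.pem \
  -subj "/CN=192.0.2.10" \
  -addext "subjectAltName = IP:192.0.2.10"
```

The resulting PEM contents can then be pasted into the SSLCertificate / SSLKey / SSLRootCertificate parameters of the environment files shown above.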
--
~ Lokendra
skype: lokendrarathour

From gael.therond at bitswalk.com  Mon Jul 25 13:14:13 2022
From: gael.therond at bitswalk.com (Gaël THEROND)
Date: Mon, 25 Jul 2022 15:14:13 +0200
Subject: [IRONIC] - Various questions around network features.
In-Reply-To: 
References: 
Message-ID: 

Sorry for the late answer, I was out for summer vacation :-)

Thanks a lot for this complementary information; I'll for sure submit a few
documentation fixes.

On Thu, Jul 14, 2022 at 18:45, Julia Kreger wrote:

> On Wed, Jul 13, 2022 at 1:07 PM Gaël THEROND wrote:
> >
> > Hi Julia!
> >
> > Thanks a lot for those explanations :-) Most of it confirms my
> > understanding; I now have a clearer point of view that will let me
> > select our test users for the service.
> >
> > Regarding Aruba switches, those are pretty cool, even if, as you pointed
> > out, this feature can actually lead you to some weird if not dangerous
> > situations x)
> >
> > Ok, noted about the Horizon issue. It can be a little bit tricky for our
> > end users to understand, tbh, as they will for sure expect the IP
> > selected by Neutron and displayed on the dashboard to be the one used by
> > the node, even on a full flat network such as the provisioning network,
> > but for now we will deal with it by explaining it to them.
>
> A challenging point here is there is no true way to hint that this is
> the case upfront. Nova acts as an abstraction layer in between and it
> really needs that networking information piece of the puzzle to
> generate metadata for an instance.
>
> I think embracing it, and also supporting an ML2 integrated
> configuration where individual switch ports are changed, is ultimately
> the most powerful configuration, but the challenge we hear from
> operators upstream is that network operations groups generally don't
> want software toggling switchport VLAN assignments. I get why, as I've
> worked in NetOps in the past; it is largely a trust issue. I've just
> not figured out concrete ways to build the trust needed there. :(
>
> >
> > Regarding my point 2, yeah, I knew the purpose of direct deploy, I just
> > spelled it out, I don't know why. My point was rather:
> >
> > At first, when I configured our Ironic deployment, I had a weird issue
> > where, if I set the pxe_filter option to dnsmasq rather than noop,
> > deploying anything fails, as the conductor doesn't correctly erase the
> > "ignore" part of the string in the dhcp_host_filter file of dnsmasq. If
> > I set this filter to noop, then obviously I don't need Neutron to
> > provide the ironic-provision-network anymore, as anyone plugged into
> > ports with my VLAN 101 set as native VLAN will be able to get an IP
> > from the PXE dnsmasq.
>
> I was wondering how you were making it work!
> This explains a lot, and is really not the intended pattern of use.
> But it is a pattern upstream generally sees in more "standalone" cases,
> or cases of direct interaction with Ironic's API.
>
> > I'm still having a hard time mapping out how Ironic needs a dedicated
> > PXE dnsmasq for introspection but can then use the Neutron dnsmasq DHCP
> > once you want to provision a host. Is that because Neutron (kinda)
> > lacks DHCP options support on its managed subnets?
>
> At this point, dnsmasq for introspection is *largely* for the purposes
> of discovering hardware you don't know about and supporting the oldest
> introspection workflow where inspection is directly triggered with the
> introspection service. Depending on the version of Ironic, and if you
> have a mac address already known to Ironic, you can trigger the
> inspection workflow directly with ironic's state machine, and it will
> populate network configuration in neutron to perform introspection on
> the node.
>
> Neutron doesn't really lack DHCP options support on its subnets,
> although it is very dnsmasq focused. The challenge we tend to see here
> is that getting things properly aligned, host configuration and
> networking wise, for PXE boot operations doesn't always work perfectly,
> so it becomes just easier to get things to initially work as you did.
>
> > All in all, it's much clearer to me now regarding the multi-tenancy
> > networking requirements, thanks to you!
>
> Excellent to hear!
>
> If you feel like anything is missing in our documentation, we do
> welcome patches! I do suspect the whole bit about the introspection
> dnsmasq might need to be further highlighted or delineated in the
> documentation.
>
> -Julia
>
> > On Tue, Jul 12, 2022 at 00:13, Julia Kreger wrote:
> >>
> >> Greetings! Hopefully these answers help!
> >>
> >> On Sun, Jul 10, 2022 at 4:35 PM Gaël THEROND wrote:
> >> >
> >> > Hi everyone, I'm currently working with Ironic again and it's amazing!
> >> >
> >> > However, during our demo session with our users, a few questions arose.
> >> >
> >> > We're currently deploying nodes using a private VLAN that can't be
> >> > reached from outside of the OpenStack network fabric (VLAN 101 -
> >> > 192.168.101.0/24), and everything is fine with this provisioning
> >> > network, as our ToR switches all know about it and the other control
> >> > plane VLANs, such as the internal APIs VLAN, which allows the IPA
> >> > ramdisk to correctly and seamlessly contact the internal Ironic APIs.
> >>
> >> Nice, I've had my lab configured like this in the past.
> >>
> >> > (When you declare a port as a trunk allowing all VLANs on an Aruba
> >> > switch, it seems it automatically analyses the CIDR your host tries
> >> > to reach from your VLAN and routes everything to the corresponding
> >> > VLAN that matches the destination IP.)
> >>
> >> Ugh, that... could be fun :\
> >>
> >> > So now, I still have a few tiny issues:
> >> >
> >> > 1/ When I spawn a nova instance on an Ironic host that is set to use
> >> > a flat network (from Horizon, as a user), why does the nova wizard
> >> > still ask for a Neutron network if it's not set on the provisioned
> >> > host by the IPA ramdisk right after the whole disk image copy? Is
> >> > that some missing development on Horizon, or did I miss something?
> >>
> >> Horizon just is not aware... and you can actually have entirely
> >> different DHCP pools on the same flat network, so that neutron network
> >> is intended for the instance's addressing to utilize.
> >>
> >> Ironic just asks for an allocation from a provisioning network, which
> >> can and *should* be a different network than the tenant network.
> >>
> >> > 2/ In a flat network layout deployment using the direct deploy
> >> > scenario for images, am I still supposed to create an Ironic
> >> > provisioning network in Neutron?
> >> >
> >> > From my understanding (and actually my tests) we don't, as any host
> >> > booting on the provisioning VLAN will pick up an IP and initiate the
> >> > bootp sequence, since the dnsmasq is just set to do that and provide
> >> > the IPA ramdisk, but it's a bit confusing, as much of the
> >> > documentation explicitly requires this network to exist in Neutron.
> >>
> >> Yes. Direct is shorthand for "copy it over the network and write it
> >> directly to disk". It still needs an IP address on the provisioning
> >> network (think subnet instead of distinct L2 broadcast domain).
> >>
> >> When you ask nova for an instance, it sends over what the machine
> >> should use as a "VIF" (neutron port), however that is never actually
> >> bound configuration-wise into neutron until after the deployment
> >> completes.
> >>
> >> It *could* be that your neutron config is such that it just works
> >> anyway, but I suspect upstream contributors would be a bit confused if
> >> you reported an issue and had no provisioning network defined.
> >>
> >> > 3/ My whole OpenStack network setup is using Open vSwitch and VXLAN
> >> > tunnels on top of a spine/leaf architecture using Aruba CX8360
> >> > switches (for both spines and leaves). Am I required to use either
> >> > the networking-generic-switch driver or a vendor neutron driver? If
> >> > so, how will this driver be able to instruct the switch to assign
> >> > the host port the correct Open vSwitch VLAN ID and register the
> >> > correct VXLAN to Open vSwitch from this port? I mean, ok, neutron
> >> > knows the VXLAN and Open vSwitch the tunnel VLAN ID/interface, but
> >> > what is the glue for all of that?
> >>
> >> If you're happy with flat networks, no.
> >>
> >> If you want tenant isolation, networking wise, yes.
> >>
> >> NGS and baremetal-port-aware/enabled Neutron ML2 drivers take the
> >> port-level local link configuration (well, Ironic includes the port
> >> information (local link connection, physical network, and some other
> >> details) to Neutron with the port binding request).
> >>
> >> Those ML2 drivers then either request that the switch configuration be
> >> updated, or take locally configured credentials to modify the port
> >> configuration in Neutron, and log into the switch to toggle the
> >> configuration of the access port which the baremetal node is attached
> >> to.
> >>
> >> Generally, they are not vxlan network aware, and at least with
> >> networking-generic-switch, vlan ID numbers are expected and allocated
> >> via neutron.
> >>
> >> Sort of like the software is logging into the switch and running
> >> something along the lines of "conf t;int gi0/21;switchport mode
> >> access;switchport access vlan 391 ; wri mem"
> >>
> >> > 4/ I've successfully used OpenStack cloud-oriented CentOS and Debian
> >> > images, or snapshots of VMs, to provision my hosts. This is an
> >> > awesome feature, but I'm wondering if there is a way to let those
> >> > hosts' cloud-init instances request the Neutron metadata endpoint?
> >>
> >> Generally yes, you *can* use network attached metadata with neutron
> >> *as long as* your switches know to direct the traffic for the metadata
> >> IP to the Neutron metadata service(s).
> >>
> >> We know of operators who have done it without issues, but that
> >> additional switch-configured route is often not the best thing.
> >> Generally we recommend enabling and using configuration drives, so the
> >> metadata is able to be picked up by cloud-init.
> >> >
> >> > I was a bit surprised about the Ironic networking part, as I was
> >> > expecting the IPA ramdisk to at least be able to set up the host OS
> >> > with the appropriate network configuration file, for whole disk
> >> > images that do not use encryption, by injecting that information
> >> > from the Neutron API into the host disk while mounted (right after
> >> > the image dd).
> >>
> >> IPA has no knowledge of how to modify the host OS in this regard.
> >> Modifying the host OS has generally been something the ironic
> >> community has avoided, since it is not exactly cloudy to have to do
> >> so. Generally most clouds are running with DHCP, so as long as that is
> >> enabled and configured, things should generally "just work".
> >>
> >> Hopefully that provides a little more context. Nothing prevents you
> >> from writing your own hardware manager that does exactly this, for
> >> what it is worth.
> >>
> >> > All in all, I really like the Ironic approach to the baremetal
> >> > provisioning process, and I'm pretty sure that I'm just missing a
> >> > bit of understanding of the networking part, but it's really the
> >> > most confusing part of it to me, as I feel like there is a missing
> >> > link between Neutron and the host hardware or the switches.
> >>
> >> Thanks! It is definitely one of the more complex parts, given there
> >> are many moving parts, and everyone wants (or needs) to have their
> >> networking configured just a little differently.
> >>
> >> Hopefully I've kind of put some of the details out there. If you need
> >> more information, please feel free to reach out, and also please feel
> >> free to ask questions in #openstack-ironic on irc.oftc.net.
> >>
> >> > Thanks a lot to anyone who takes the time to explain this to me :-)
> >>
> >> :)
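[Editorial aside on the configuration-drive recommendation in the thread above: forcing a config drive is a one-line nova setting. A minimal sketch, assuming the standard nova.conf path on the compute hosts; cloud-init then reads the metadata from the attached drive instead of over the network:]

```ini
# /etc/nova/nova.conf on the compute hosts
[DEFAULT]
# Attach a config drive to every instance so cloud-init does not depend
# on reaching the network metadata service.
force_config_drive = true
```

A config drive can also be requested per instance instead, e.g. with `openstack server create --config-drive True ...`.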
From sergey.drozdov.dev at gmail.com  Mon Jul 25 17:20:01 2022
From: sergey.drozdov.dev at gmail.com (Sergey Drozdov)
Date: Mon, 25 Jul 2022 18:20:01 +0100
Subject: [dev][horizon][keystone]
Message-ID: <201996A2-79AD-4F0C-8AAA-052AB8381C38@gmail.com>

To whom it may concern,

I have previously sent out an email about this topic, but just wanted to run
it by the community again in case it was missed. I have the following issue:
the firm I am currently working at is running OpenStack at scale, with circa
6000 different projects. Whenever we try to access the projects tab via
Horizon, we experience a timeout (accessing through the API works fine).

For the Horizon team: we were hoping to propose something akin to
filtering/dynamic listing in order to rectify the issue. For the Keystone
team: we were wondering whether there is anything we can do with the API to
simplify the aforementioned Horizon proposal; we are hoping to get both
teams on board. We are ready to open up a bug report and start subsequent
development should this be supported by both teams!

I am available on both the openstack-horizon and openstack-keystone IRC
channels under the nickname sdrozdov; please feel free to reach out.

Best Regards,
Sergey Drozdov
Software Engineer
The Hut Group

From haiwu.us at gmail.com  Mon Jul 25 18:11:29 2022
From: haiwu.us at gmail.com (hai wu)
Date: Mon, 25 Jul 2022 13:11:29 -0500
Subject: [nova] nova hypervisor oom killed some openstack guest
In-Reply-To: <43b7e69240f80666813945ef9aab408b85feefdb.camel@redhat.com>
References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com>
 <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com>
 <43b7e69240f80666813945ef9aab408b85feefdb.camel@redhat.com>
Message-ID: 

Understood. The same concern is also raised in the following Red Hat KB:
https://access.redhat.com/solutions/4670201. But we could also protect some
critical OpenStack services, like neutron and libvirtd, in the same way, by
setting OOMScoreAdjust for them to -1000.
If we do that, we should probably be ok. We would protect both the critical
OpenStack services and all OpenStack VMs in this way.

On Thu, Jul 21, 2022 at 6:42 AM Sean Mooney wrote:
>
> On Wed, 2022-07-20 at 20:25 -0500, hai wu wrote:
> > You are correct, there's no way to set OOMScoreAdjust for
> > machine.slice. It errored out when trying to do that, with "Unknown
> > assignment" error..
>
> If you mess with the cgroups behind nova's back, then any hope of support
> from your vendor or upstream is gone.
>
> You should really find out why you are running out of memory.
>
> It usually means you have not configured nova and the host correctly.
> Most often this happens because people use CPU pinning without enabling
> per-NUMA-node memory tracking by setting a page size. It could also be
> because you have not allocated enough swap.
>
> So before you try to adjust things with cgroups yourself, or explore
> other options, you should determine why the host is running out of
> memory.
>
> If you prevent it from killing the guests, I have seen it kill OVS or
> nova itself instead, in cases where the guests were unkillable or
> unlikely to be killed because they used hugepages.
>
> So you will likely just shift the problem elsewhere, which will be more
> impactful.
>
> > On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote:
> > >
> > > In this case there's no memory oversubscription. This oom killer event
> > > happened when we did "swapoff -a; swapon -a" to push processes in swap
> > > back to memory, which is very strange.
> > >
> > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan wrote:
> > > >
> > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote:
> > > > > After installing some systemd package, and starting up machine.slice,
> > > > > systemd-machined, and hard rebooting the vm from openstack side, I
> > > > > could now see the VM showing up under machine.slice. all vms were
> > > > > showing up under libvirtd.service, which is under system.slice.
> > > > >
> > > > > What are the benefits of running libvirt managed guest instances
> > > > > under machine.slice?
> > > >
> > > > You can use machine.slice to set system resource options that each
> > > > sub slice inherits. Those options are documented at
> > > > https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html#
> > > > (per my earlier link
> > > > https://www.freedesktop.org/software/systemd/man/systemd.slice.html).
> > > > I don't see OOMScoreAdjust listed there so I am unsure if you can
> > > > actually set it via this method.
> > > >
> > > > That all said, if you are oversubscribing memory this is likely to
> > > > always be an issue. If you adjust the oom score for your VMs then
> > > > the oomkiller is just going to find other victims to kill. Losing
> > > > your nova compute agent or NetworkManager or iscsid may be just as
> > > > problematic. Instead, I suspect that you may need to stop
> > > > oversubscribing memory.
> > > > >
> > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan wrote:
> > > > > >
> > > > > > On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote:
> > > > > > > Is there any configuration file that is needed to ensure guest
> > > > > > > domains are under systemd machine.slice? not seeing anything
> > > > > > > under machine.slice ..
> > > > > >
> > > > > > I think that
> > > > > > https://www.freedesktop.org/software/systemd/man/systemd.slice.html
> > > > > > and https://libvirt.org/cgroups.html cover this for libvirt
> > > > > > managed VMs.
> > > > > > >
> > > > > > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov wrote:
> > > > > > > >
> > > > > > > > I believe you can decrease OOMScoreAdjust for the systemd
> > > > > > > > machine.slice, under which guest domains are, to reduce the
> > > > > > > > chances of oom killing them.
> > > > > > > >
> > > > > > > > On Wed, Jul 20, 2022 at 21:52, hai wu wrote:
> > > > > > > > >
> > > > > > > > > nova hypervisor sometimes oom would kill some openstack guests.
> > > > > > > > >
> > > > > > > > > Is it possible to not allow the kernel to oom kill any openstack guests?
> > > > > > > > > ram is not oversubscribed much ..

From jdratlif at globalnoc.iu.edu  Mon Jul 25 18:48:56 2022
From: jdratlif at globalnoc.iu.edu (John Ratliff)
Date: Mon, 25 Jul 2022 14:48:56 -0400
Subject: [nova] hw_qemu_guest_agent image metadata property
Message-ID: <226F59E8-216C-43EC-8EBA-D529AC0D99F2@globalnoc.iu.edu>

We forgot to set this property on our image before creating several servers
in our OpenStack cluster. I tried adding this property to the volume of one
of the hosts and shutting it down / turning it back on, but it didn't help.

I found this blog post showing one way that worked, but it requires manual
database editing. I was hoping for something a bit less typo-prone, like
something in the openstack CLI.

https://www.thenoccave.com/2021/07/openstack-qemu-guest-tools-after-creation/

I can automate the above with Ansible if that's the only way, but if someone
has a better suggestion that doesn't involve recreating our instances, I
would appreciate it.

Thanks.

--John Ratliff

From gmann at ghanshyammann.com  Mon Jul 25 19:09:36 2022
From: gmann at ghanshyammann.com (Ghanshyam Mann)
Date: Tue, 26 Jul 2022 00:39:36 +0530
Subject: [all][tc] Technical Committee next weekly meeting on 28 July 2022 at 1500 UTC
Message-ID: <18236c3c8f5.ba44b1c8695919.8798555011586714253@ghanshyammann.com>

Hello Everyone,

The Technical Committee's next weekly meeting is scheduled for 28 July 2022,
at 1500 UTC. If you would like to add topics for discussion, please add them
to the below wiki page by Wednesday, 27 July at 2100 UTC.
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting

-gmann

From pierre at stackhpc.com  Mon Jul 25 21:25:25 2022
From: pierre at stackhpc.com (Pierre Riteau)
Date: Mon, 25 Jul 2022 23:25:25 +0200
Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely
In-Reply-To: 
References: 
Message-ID: 

Are you sure you are running a Yoga fluentd image? I am seeing higher
version numbers in the logs of my Wallaby image:

2022-07-25 14:03:29 +0000 [info]: parsing config file is succeeded path="/etc/td-agent/td-agent.conf"
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-calyptia-monitoring' version '0.1.3'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.3'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '5.2.2'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-flowcounter-simple' version '0.1.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-grep' version '0.3.4'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-grok-parser' version '2.6.2'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-kafka' version '0.17.5'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-metrics-cmetrics' version '0.1.2'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-opensearch' version '1.0.4'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-parser' version '0.6.1'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.3'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-prometheus' version '2.0.2'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-prometheus_pushgateway' version '0.1.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-record-modifier' version '2.1.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' version '2.4.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-s3' version '1.6.1'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-sd-dns' version '0.1.0'
2022-07-25 14:03:29 +0000 [info]: gem
'fluent-plugin-systemd' version '1.0.5'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-utmpx' version '0.5.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-webhdfs' version '1.5.0'
2022-07-25 14:03:29 +0000 [info]: gem 'fluentd' version '1.14.6'
2022-07-25 14:03:29 +0000 [info]: gem 'fluentd' version '0.12.43'

On Mon, 25 Jul 2022 at 19:50, Vishwanath wrote:

> Hi Pierre,
>
> As far as I know, we are not using any custom configs. Attached is the
> td-agent.conf file from one of the controller nodes; please see the
> attachment. Thanks.
>
> Thanks
> Vish
> (408)-471-8579
>
> On Thu, Jul 7, 2022 at 1:49 AM Pierre Riteau wrote:
>
>> Hello Vish,
>>
>> Are you using a custom configuration for fluentd? Could you please share
>> your generated td-agent.conf?
>>
>> Best wishes,
>> Pierre Riteau (priteau)
>>
>> On Wed, 6 Jul 2022 at 23:34, Vishwanath wrote:
>>
>>> Hello all,
>>>
>>> I have upgraded OpenStack; we are currently running 14.1.0. I noticed
>>> the fluentd container restarting indefinitely. The message I see under
>>> /var/log/kolla/fluentd/fluentd.log is as follows. Any thoughts on how
>>> to fix this?
I noticed a similar issue back in 2017 from this post - >>> https://bugs.launchpad.net/kolla-ansible/+bug/1663126 >>> >>> *error log:* >>> *2022-07-06 21:20:42 +0000 [error]: config error >>> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >>> error="'format' parameter is required"* >>> >>> >>> *full logs:* >>> 2022-07-06 21:20:42 +0000 [info]: parsing config file is succeeded >>> path="/etc/td-agent/td-agent.conf" >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >>> version '5.2.3' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >>> version '4.1.1' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grep' version >>> '0.3.4' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grok-parser' >>> version '2.6.2' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-kafka' version >>> '0.14.1' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-parser' version >>> '0.6.1' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version >>> '2.0.3' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version >>> '1.8.2' >>> 2022-07-06 21:20:42 +0000 [info]: gem >>> 'fluent-plugin-prometheus_pushgateway' version '0.0.2' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-record-modifier' >>> version '2.1.0' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' >>> version '2.4.0' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' >>> version '2.3.0' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-s3' version '1.4.0' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-systemd' version >>> '1.0.2' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version >>> '1.2.5' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '1.11.2' >>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '0.12.43' 
>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(cinder-api-access|cloudkitty-api-access|gnocchi-api-access|horizon-access|keystone-apache-admin-access|keystone-apache-public-access|monasca-api-access|octavia-api-access|placement-api-access)$/, >>> "", "apache_access", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(aodh_wsgi_access|barbican_api_uwsgi_access|zun_api_wsgi_access|vitrage_wsgi_access)$/, >>> "", "wsgi_access", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(sahara-api|sahara-engine)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(neutron-server|neutron-openvswitch-agent|neutron-ns-metadata-proxy|neutron-metadata-agent|neutron-l3-agent|neutron-dhcp-agent)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(magnum-conductor|magnum-api)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(keystone)$/, "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(heat-engine|heat-api|heat-api-cfn)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> 
@keys="programname">, /^(glance-api)$/, "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(cloudkitty-storage-init|cloudkitty-processor|cloudkitty-dbsync|cloudkitty-api)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(ceilometer-polling|ceilometer-agent-notification)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(barbican-api|barbican-worker|barbican-keystone-listener|barbican-db-manage|app)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(aodh-notifier|aodh-listener|aodh-evaluator|aodh-dbsync)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(cyborg-api|cyborg-conductor|cyborg-agent)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(cinder-api|cinder-scheduler|cinder-manage|cinder-volume|cinder-backup|privsep-helper)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(mistral-server|mistral-engine|mistral-executor)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(designate-api|designate-central|designate-manage|designate-mdns|designate-sink|designate-worker)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> 
/^(manila-api|manila-data|manila-manage|manila-share|manila-scheduler)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(trove-api|trove-conductor|trove-manage|trove-taskmanager)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(murano-api|murano-engine)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(senlin-api|senlin-conductor|senlin-engine|senlin-health-manager)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(watcher-api|watcher-applier|watcher-db-manage|watcher-decision-engine)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(freezer-api|freezer-api_access|freezer-manage)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(octavia-api|octavia-health-manager|octavia-housekeeping|octavia-worker)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(zun-api|zun-compute|zun-cni-daemon)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(kuryr-server)$/, "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(gnocchi-api|gnocchi-statsd|gnocchi-metricd|gnocchi-upgrade)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 
+0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(ironic-api|ironic-conductor|ironic-inspector)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(tacker-server|tacker-conductor)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(vitrage-ml|vitrage-notifier|vitrage-graph|vitrage-persistor)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(blazar-api|blazar-manager)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, >>> /^(monasca-api|monasca-notification|monasca-persister|agent-collector|agent-forwarder|agent-statsd)$/, >>> "", "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /^(masakari-engine|masakari-api)$/, "", >>> "openstack_python", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> programname >>> [#>> @keys="programname">, /.+/, "", "unmatched", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> Payload >>> [#>> @keys="Payload">, /^\d{6}/, "", "infra.mariadb.mysqld_safe", nil] >>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>> Payload >>> [#>> @keys="Payload">, /^\d{4}-\d{2}-\d{2}/, "", "infra.mariadb.mysqld", nil] >>> *2022-07-06 21:20:42 +0000 [error]: config error >>> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >>> error="'format' parameter is required"* >>> >>> >>> Thanks >>> Vish >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From laurentfdumont at gmail.com Mon Jul 25 22:06:02 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 25 Jul 2022 18:06:02 -0400 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> <43b7e69240f80666813945ef9aab408b85feefdb.camel@redhat.com> Message-ID: How much are you reserving for Openstack vs the VMs? On Mon, Jul 25, 2022 at 2:19 PM hai wu wrote: > Understood. The same concern is also raised in the following redhat > KB: https://access.redhat.com/solutions/4670201. > > But we could also protect some critical openstack services, like > neutron and libvirtd, in the same way by setting OOMScoreAdjust for > those to -1000. If we do that, we should probably be ok. We protect > both critical openstack services and all openstack VMs in this way. > > On Thu, Jul 21, 2022 at 6:42 AM Sean Mooney wrote: > > > > On Wed, 2022-07-20 at 20:25 -0500, hai wu wrote: > > > You are correct, there's no way to set OOMScoreAdjust for > > > machine.slice. It errored out when trying to do that, with an "Unknown > > > assignment" error. > > > > if you mess with the cgroups behind nova's back then any hope of support > you have with > > your vendor or upstream is gone. > > > > you should really find out why you're running out of memory. > > > > it usually means you have not configured nova and the host correctly. > > > > most often this happens because people use cpu pinning without enabling per > > numa node memory tracking by setting a page size. > > > > it also could be because you have not allocated enough swap. > > > > so before you try to adjust things with cgroups yourself or explore > other options you should determine why > > the host is running out of memory.
> > > > if you prevent it from killing the guests i have seen it kill ovs or nova > itself before, where the guests were > > unkillable or unlikely to be killed because they used hugepages. > > > > so you will likely just shift the problem elsewhere, where it will be more > impactful. > > > > > > > > On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote: > > > > > > > > In this case there's no memory oversubscription. This oom killer > event > > > > happened when we did "swapoff -a; swapon -a" to push processes in > swap > > > > back to memory, which is very strange. > > > > > > > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan > wrote: > > > > > > > > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > > > > > After installing some systemd package, and starting up > machine.slice, > > > > > > systemd-machined, and hard rebooting the vm from openstack side, > I > > > > > > could now see the VM showing up under machine.slice. all vms were > > > > > > showing up under libvirtd.service, which is under system.slice. > > > > > > > > > > > > What are the benefits of running libvirt managed guest instances > under > > > > > > machine.slice? > > > > > > > > > > You can use machine.slice to set system resource options that each > sub slice inherits. Those options are documented at > https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# > (per my earlier link > https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I > don't see OOMScoreAdjust listed there so I am unsure if you can actually > set it via this method. > > > > > > > > > > That all said, if you are oversubscribing memory this is likely to > always be an issue. If you adjust the oom score for your VMs then the > oomkiller is just going to find other victims to kill. Losing your nova > compute agent or NetworkManager or iscsid may be just as problematic. > Instead, I suspect that you may need to stop oversubscribing memory.
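For reference, the OOMScoreAdjust idea discussed in this thread can be expressed as a systemd drop-in rather than by touching cgroups directly. A minimal sketch follows; the service name and file layout are illustrative, and the snippet writes into a local scratch directory instead of /etc so it has no side effects as written:

```shell
# Sketch only: generate a drop-in that exempts libvirtd from the OOM killer.
# The directory name mirrors the real systemd layout
# (/etc/systemd/system/libvirtd.service.d/) but stays local here.
dropin_dir="./libvirtd.service.d"
mkdir -p "$dropin_dir"
cat > "$dropin_dir/10-oomscore.conf" <<'EOF'
[Service]
# -1000 exempts the process from the kernel OOM killer entirely
OOMScoreAdjust=-1000
EOF
cat "$dropin_dir/10-oomscore.conf"
```

After installing the real file under /etc/systemd/system/, a `systemctl daemon-reload` and a service restart would be needed for it to take effect, and as noted above this only moves the OOM pressure onto other processes.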
> > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan < > cboylan at sapwetik.org> wrote: > > > > > > > > > > > > > > On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > > > > > > > Is there any configuration file that is needed to ensure > guest domains > > > > > > > > are under systemd machine.slice? not seeing anything under > > > > > > > > machine.slice .. > > > > > > > > > > > > > > I think that > https://www.freedesktop.org/software/systemd/man/systemd.slice.html and > https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > > > > > > > > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > I believe you can decrease OOMScoreAdjust for systemd > machines.slice, under which guest domains are to reduce chances of oom > killing them. > > > > > > > > > > > > > > > > > > ??, 20 ???. 2022 ?., 21:52 hai wu : > > > > > > > > > > > > > > > > > > > > nova hypervisor sometimes oom would kill some openstack > guests. > > > > > > > > > > > > > > > > > > > > Is it possible to not allow kernel to oom kill any > openstack guests? > > > > > > > > > > ram is not oversubscribed much .. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From laurentfdumont at gmail.com Mon Jul 25 22:07:01 2022 From: laurentfdumont at gmail.com (Laurent Dumont) Date: Mon, 25 Jul 2022 18:07:01 -0400 Subject: [nova] hw_qemu_guest_agent image metadata property In-Reply-To: <226F59E8-216C-43EC-8EBA-D529AC0D99F2@globalnoc.iu.edu> References: <226F59E8-216C-43EC-8EBA-D529AC0D99F2@globalnoc.iu.edu> Message-ID: I don't think this is possible. It might be possible to add it to the image and then do a rebuild? But that's pretty close to recreating your instances. 
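The image-property-plus-rebuild route above can be sketched as two CLI calls. The image and server names below are placeholders, and the `run` wrapper only echoes each command so the sketch is a dry run; replace the echo with `"$@"` to execute for real:

```shell
# Dry-run sketch of the supported route: set the property on the image,
# then rebuild each affected server. All names are placeholders.
run() { echo "+ $*"; }   # swap the echo for "$@" to actually execute

IMAGE="my-image"              # hypothetical image name/ID
SERVERS="server-1 server-2"   # hypothetical affected servers

run openstack image set --property hw_qemu_guest_agent=yes "$IMAGE"
for s in $SERVERS; do
    run openstack server rebuild --image "$IMAGE" "$s"
done
```

Keep in mind that rebuild reimages the server's root disk, so anything stored on it is lost, which is why it is "pretty close to recreating your instances".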
On Mon, Jul 25, 2022 at 2:59 PM John Ratliff wrote: > We forgot to set this property on our image before creating several > servers in our openstack cluster. I tried adding this property to the > volume of one of the hosts and shutting it down / turning it back on, but > it didn't help. > > > > I found this blog post showing one way that worked, but it requires manual > database editing. I was hoping for something a bit less typo-prone, like > something in the openstack cli. > > > https://www.thenoccave.com/2021/07/openstack-qemu-guest-tools-after-creation/ > > > > I can automate the above with ansible if that's the only way, but if > someone has a better suggestion that doesn't involve recreating our > instances, I would appreciate it. > > > > Thanks. > > > > --John Ratliff > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vishwanath.ne at gmail.com Mon Jul 25 17:49:53 2022 From: vishwanath.ne at gmail.com (Vishwanath) Date: Mon, 25 Jul 2022 10:49:53 -0700 Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely In-Reply-To: References: Message-ID: Hi Pierre, As far as I know we are not using any custom configs. Attached is the td-agent.conf file from one of the controller nodes. Please check attached. Thanks Vish (408)-471-8579 On Thu, Jul 7, 2022 at 1:49 AM Pierre Riteau wrote: > Hello Vish, > > Are you using a custom configuration for fluentd? Could you please share > your generated td-agent.conf? > > Best wishes, > Pierre Riteau (priteau) > > On Wed, 6 Jul 2022 at 23:34, Vishwanath wrote: > >> Hello all, >> >> I have upgraded openstack, we are currently running on 14.1.0. I noticed >> fluentd container restarting indefinitely. The message I see under >> /var/log/kolla/fluentd/fluentd.log is as follows, Any thoughts on how to >> fix this ?
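For context on the error quoted in this thread: in td-agent's 0.12-era config syntax, tail-style inputs require an explicit `format` parameter, and a stanza without one aborts startup with exactly this "'format' parameter is required" message. A minimal sketch of a compliant `<source>` stanza, written to a scratch file here (the path and tag are made up, not taken from the attached config):

```shell
# Write a minimal td-agent 0.12-style source stanza to a scratch file.
# 'format none' satisfies the required-parameter check when the payload
# should be forwarded without parsing.
cat > ./example-source.conf <<'EOF'
<source>
  @type tail
  # path and tag below are hypothetical examples
  path /var/log/kolla/example/example.log
  tag custom.example
  format none
</source>
EOF
cat ./example-source.conf
```

With fluentd 1.x syntax the same requirement is usually spelled as a `<parse>` subsection (e.g. `@type none`) instead of a bare `format` line, so the offending stanza in the generated td-agent.conf is the place to look.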
I noticed a similar issue back in 2017 from this post - >> https://bugs.launchpad.net/kolla-ansible/+bug/1663126 >> >> *error log:* >> *2022-07-06 21:20:42 +0000 [error]: config error >> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >> error="'format' parameter is required"* >> >> >> *full logs:* >> 2022-07-06 21:20:42 +0000 [info]: parsing config file is succeeded >> path="/etc/td-agent/td-agent.conf" >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >> version '5.2.3' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >> version '4.1.1' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grep' version '0.3.4' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grok-parser' version >> '2.6.2' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-kafka' version >> '0.14.1' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-parser' version >> '0.6.1' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version >> '2.0.3' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' version >> '1.8.2' >> 2022-07-06 21:20:42 +0000 [info]: gem >> 'fluent-plugin-prometheus_pushgateway' version '0.0.2' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-record-modifier' >> version '2.1.0' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' >> version '2.4.0' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' >> version '2.3.0' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-s3' version '1.4.0' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-systemd' version >> '1.0.2' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version >> '1.2.5' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '1.11.2' >> 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '0.12.43' >> 2022-07-06 21:20:42 +0000 [info]: adding 
rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(cinder-api-access|cloudkitty-api-access|gnocchi-api-access|horizon-access|keystone-apache-admin-access|keystone-apache-public-access|monasca-api-access|octavia-api-access|placement-api-access)$/, >> "", "apache_access", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(aodh_wsgi_access|barbican_api_uwsgi_access|zun_api_wsgi_access|vitrage_wsgi_access)$/, >> "", "wsgi_access", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(sahara-api|sahara-engine)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(neutron-server|neutron-openvswitch-agent|neutron-ns-metadata-proxy|neutron-metadata-agent|neutron-l3-agent|neutron-dhcp-agent)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(magnum-conductor|magnum-api)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(keystone)$/, "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(heat-engine|heat-api|heat-api-cfn)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(glance-api)$/, "", "openstack_python", nil] >> 2022-07-06 
21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(cloudkitty-storage-init|cloudkitty-processor|cloudkitty-dbsync|cloudkitty-api)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(ceilometer-polling|ceilometer-agent-notification)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(barbican-api|barbican-worker|barbican-keystone-listener|barbican-db-manage|app)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(aodh-notifier|aodh-listener|aodh-evaluator|aodh-dbsync)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(cyborg-api|cyborg-conductor|cyborg-agent)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(cinder-api|cinder-scheduler|cinder-manage|cinder-volume|cinder-backup|privsep-helper)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(mistral-server|mistral-engine|mistral-executor)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(designate-api|designate-central|designate-manage|designate-mdns|designate-sink|designate-worker)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(manila-api|manila-data|manila-manage|manila-share|manila-scheduler)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 
+0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(trove-api|trove-conductor|trove-manage|trove-taskmanager)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(murano-api|murano-engine)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(senlin-api|senlin-conductor|senlin-engine|senlin-health-manager)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(watcher-api|watcher-applier|watcher-db-manage|watcher-decision-engine)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(freezer-api|freezer-api_access|freezer-manage)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(octavia-api|octavia-health-manager|octavia-housekeeping|octavia-worker)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(zun-api|zun-compute|zun-cni-daemon)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(kuryr-server)$/, "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(gnocchi-api|gnocchi-statsd|gnocchi-metricd|gnocchi-upgrade)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(ironic-api|ironic-conductor|ironic-inspector)$/, >> "", "openstack_python", nil] >> 
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(tacker-server|tacker-conductor)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(vitrage-ml|vitrage-notifier|vitrage-graph|vitrage-persistor)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(blazar-api|blazar-manager)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, >> /^(monasca-api|monasca-notification|monasca-persister|agent-collector|agent-forwarder|agent-statsd)$/, >> "", "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /^(masakari-engine|masakari-api)$/, "", >> "openstack_python", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >> programname >> [#> @keys="programname">, /.+/, "", "unmatched", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload >> [#> @keys="Payload">, /^\d{6}/, "", "infra.mariadb.mysqld_safe", nil] >> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: Payload >> [#> @keys="Payload">, /^\d{4}-\d{2}-\d{2}/, "", "infra.mariadb.mysqld", nil] >> *2022-07-06 21:20:42 +0000 [error]: config error >> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >> error="'format' parameter is required"* >> >> >> Thanks >> Vish >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: td-agent.conf Type: application/octet-stream Size: 15152 bytes Desc: not available URL: From smooney at redhat.com Mon Jul 25 23:00:35 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jul 2022 00:00:35 +0100 Subject: [nova] nova hypervisor oom killed some openstack guest In-Reply-To: References: <54bd7ad1-1bf4-4a5c-a23b-f017511db9f0@www.fastmail.com> <7681c62f-6010-4b0f-98ba-045639c889c3@www.fastmail.com> <43b7e69240f80666813945ef9aab408b85feefdb.camel@redhat.com> Message-ID: <06e459559296da16f57e9e7d7cc7a610da278a14.camel@redhat.com> On Mon, 2022-07-25 at 18:06 -0400, Laurent Dumont wrote: > How much are you reserving for Openstack vs the VM? that is a very good question. many people fail to account for the qemu overhead and fail to allocate swap. even if you are not using memory oversubscription you should have 8-16GB of swap on any nova compute host. in addition to how much is being reserved, it's also important to ensure that if you are doing memory oversubscription there is enough swap to cover it, and to understand that the kernel oom reaper runs per numa node, so even if there is plenty of free memory on numa 1, if the kernel needs memory on numa 0 then it will trigger an OOM reaping cycle. so if you are using hugepages it's important to ensure that you still have enough memory on all numa nodes where kernel processes can run. > > On Mon, Jul 25, 2022 at 2:19 PM hai wu wrote: > > > Understand. The same concern is also raised in the following redhat > > KB: https://access.redhat.com/solutions/4670201. just be aware that ^ is not something that is supported in the redhat openstack product, and implementing it would void your support for the vms. knowledge base articles are generally written by support engineers when debugging a problem, with possible solutions they tried. they are not part of our product docs and are not reviewed for correctness by the engineering teams that maintain openstack upstream or downstream.
so take anything you find there with a grain of salt. libvirt hooks are not and never have been supported upstream or downstream. but if you are maintaining and operating the cloud yourself then that might work for you. > > > > But we could also protect some critical openstack services, like > > neutron, libvirtd, in the same way by setting OOMScoreAdjust for > > those to -1000. If we do that, we should probably be ok. We protect > > both critical openstack services, and all openstack VMs in this way. > > > > On Thu, Jul 21, 2022 at 6:42 AM Sean Mooney wrote: > > > > > > On Wed, 2022-07-20 at 20:25 -0500, hai wu wrote: > > > > You are correct, there's no way to set OOMScoreAdjust for > > > > machine.slice. It errored out when trying to do that, with an "Unknown > > > > assignment" error. > > > > > > if you mess with the cgroups behind nova's back then any hope of support > > you have with > > > your vendor or upstream is gone. > > > > > > you should really find out why you're running out of memory. > > > > > > it usually means you have not configured nova and the host correctly. > > > > > > most often this happens because people use cpu pinning without enabling per > > > numa node memory tracking by setting a page size. > > > > > > it also could be because you have not allocated enough swap. > > > > > > so before you try to adjust things with cgroups yourself or explore > > other options you should determine why > > > the host is running out of memory. > > > > > > if you prevent it from killing the guests i have seen it kill ovs or nova > > itself before, where the guests were > > > unkillable or unlikely to be killed because they used hugepages. > > > > > > so you will likely just shift the problem elsewhere, where it will be more > > impactful. > > > > > > > > > > > On Wed, Jul 20, 2022 at 6:48 PM hai wu wrote: > > > > > > > > > > In this case there's no memory oversubscription.
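Sean's point that the OOM reaper acts per NUMA node can be checked directly from sysfs. A small sketch that prints the free memory per node, falling back to the host-wide figure on hosts (or containers) that do not expose the per-node meminfo files:

```shell
# Show free memory per NUMA node; one exhausted node can trigger OOM kills
# even when other nodes have plenty free. Output is also saved to a file.
{
  cat /sys/devices/system/node/node*/meminfo 2>/dev/null | grep MemFree \
    || grep MemFree /proc/meminfo
} > ./numa-memfree.txt
cat ./numa-memfree.txt
```

On a pinned-CPU/hugepages host this is a quick way to see whether the node the kernel allocates from is the one that is actually starved.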
This oom killer > > event > > > > > happened when we did "swapoff -a; swapon -a" to push processes in > > swap > > > > > back to memory, which is very strange. > > > > > > > > > > On Wed, Jul 20, 2022 at 6:39 PM Clark Boylan > > wrote: > > > > > > > > > > > > On Wed, Jul 20, 2022, at 4:04 PM, hai wu wrote: > > > > > > > After installing some systemd package, and starting up > > machine.slice, > > > > > > > systemd-machined, and hard rebooting the vm from openstack side, > > I > > > > > > > could now see the VM showing up under machine.slice. all vms were > > > > > > > showing up under libvirtd.service, which is under system.slice. > > > > > > > > > > > > > > What are the benefits of running libvirt managed guest instances > > under > > > > > > > machine.slice? > > > > > > > > > > > > You can use machine.slice to set system resource options that each > > sub slice inherits. Those options are documented at > > https://www.freedesktop.org/software/systemd/man/systemd.resource-control.html# > > (per my earlier link > > https://www.freedesktop.org/software/systemd/man/systemd.slice.html). I > > don't see OOMScoreAdjust listed there so I am unsure if you can actually > > set it via this method. > > > > > > > > > > > > That all said, if you are oversubscribing memory this is likely to > > always be an issue. If you adjust the oom score for your VMs then the > > oomkiller is just going to find other victims to kill. Losing your nova > > compute agent or NetworkManager or iscsid may be just as problematic. > > Instead, I suspect that you may need to stop oversubscribing memory. > > > > > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022 at 5:53 PM Clark Boylan < > > cboylan at sapwetik.org> wrote: > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022, at 3:17 PM, hai wu wrote: > > > > > > > > > Is there any configuration file that is needed to ensure > > guest domains > > > > > > > > > are under systemd machine.slice? 
not seeing anything under > > > > > > > > > machine.slice .. > > > > > > > > > > > > > > > > I think that > > https://www.freedesktop.org/software/systemd/man/systemd.slice.html and > > https://libvirt.org/cgroups.html covers this for libvirt managed VMs. > > > > > > > > > > > > > > > > > > > > > > > > > On Wed, Jul 20, 2022 at 3:33 PM Dmitriy Rabotyagov > > > > > > > > > wrote: > > > > > > > > > > > > > > > > > > > > I believe you can decrease OOMScoreAdjust for systemd > > machines.slice, under which guest domains are, to reduce chances of oom > > killing them. > > > > > > > > > > > > > > > > > > > > Wed, 20 Jul 2022, 21:52 hai wu : > > > > > > > > > > > > > > > > > > > > > > nova hypervisor sometimes oom would kill some openstack > > guests. > > > > > > > > > > > > > > > > > > > > > > Is it possible to not allow kernel to oom kill any > > openstack guests? > > > > > > > > > > > ram is not oversubscribed much .. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > From smooney at redhat.com Mon Jul 25 23:03:21 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jul 2022 00:03:21 +0100 Subject: [nova] hw_qemu_guest_agent image metadata property In-Reply-To: References: <226F59E8-216C-43EC-8EBA-D529AC0D99F2@globalnoc.iu.edu> Message-ID: <638555685d8506074e3d586e0d3f716c89059567.camel@redhat.com> On Mon, 2022-07-25 at 18:07 -0400, Laurent Dumont wrote: > I don't think this is possible. It might be possible to add it to the image > and then do a rebuild? But that's pretty close to recreating your instances. that's the only officially supported way to do this, yes. the hack to avoid that is to insert the missing image properties into the db directly, then hard reboot the guest if it does not change scheduling. the qemu guest agent does not. for other image properties you might need to cold/live migrate or shelve and unshelve to place it on a valid host after modifying the db.
rebuild is the preferred approach in this case if the guest agent is required. > > On Mon, Jul 25, 2022 at 2:59 PM John Ratliff > > wrote: > > > We forgot to set this property on our image before creating several > > servers in our openstack cluster. I tried adding this property to the > > volume of one of the hosts and shutting it down / turning it back on, but > > it didn't help. > > > > > > > > I found this blog post showing one way that worked, but it requires manual > > database editing. I was hoping for something a bit less typo-prone, like > > something in the openstack cli. > > > > > > https://www.thenoccave.com/2021/07/openstack-qemu-guest-tools-after-creation/ > > > > > > > > I can automate the above with ansible if that's the only way, but if > > someone has a better suggestion that doesn't involve recreating our > > instances, I would appreciate it. > > > > > > > > Thanks. > > > > > > > > --John Ratliff > > > > > > From sbauza at redhat.com Tue Jul 26 07:54:23 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 26 Jul 2022 09:54:23 +0200 Subject: [nova] Hold your rechecks Message-ID: We're currently investigating a quite impacting issue in both nova-next and ovs-hybrid-plug jobs that prevents Zuul from being happy. https://bugs.launchpad.net/nova/+bug/1940425 In the meantime, as the failure rate is pretty high, doing rechecks (against this bug number of course, you already know to NOT do blind rechecks [1]) doesn't help and just consumes constrained CI resources for a very little and unpredictable gain. Thanks, -Sylvain -------------- next part -------------- An HTML attachment was scrubbed... URL: From sbauza at redhat.com Tue Jul 26 07:57:04 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 26 Jul 2022 09:57:04 +0200 Subject: [nova] Hold your rechecks In-Reply-To: References: Message-ID: Le mar. 26 juil. 2022 à
09:54, Sylvain Bauza a écrit : > We're currently investigating a quite impacting issue in both nova-next > and ovs-hybrid-plug jobs that prevents Zuul from being happy. > https://bugs.launchpad.net/nova/+bug/1940425 > > In the meantime, as the failure rate is pretty high, doing rechecks > (against this bug number of course, you already know to NOT do blind > rechecks [1]) doesn't help and just consumes constrained CI resources for a > very little and unpredictable gain. > > Aaaaaand I forgot to add the footnote : [1] https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures Thanks, > -Sylvain > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From stephenfin at redhat.com Tue Jul 26 11:33:09 2022 From: stephenfin at redhat.com (Stephen Finucane) Date: Tue, 26 Jul 2022 12:33:09 +0100 Subject: [dev][horizon][keystone] In-Reply-To: <201996A2-79AD-4F0C-8AAA-052AB8381C38@gmail.com> References: <201996A2-79AD-4F0C-8AAA-052AB8381C38@gmail.com> Message-ID: <30f9387dfcc53d0ea797c06026ae235fd3a8f16b.camel@redhat.com> On Mon, 2022-07-25 at 18:20 +0100, Sergey Drozdov wrote: > To whom it may concern, > > I have previously sent out an email about this topic but just wanted to run this by the community again in case it was missed. > > I have the following issue. The firm I am currently working at is running OpenStack at scale with circa 6000 different projects. Whenever we try to access a projects tab via horizon we experience a timeout (accessing through the API works fine). > > For the horizon team, we were hoping to propose something akin to filtering/dynamic listing in order to rectify the issue. I suspect you'll have a difficult time getting this into Horizon since I'm unsure how many people are actively working on the project. Something AJAX'y whereby you only load the first N results and insist on more filtering (or use of the API) for anything more would be a good start?
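As a stopgap while full pagination is absent, Keystone's identity v3 list APIs do accept attribute filters (e.g. `name`, `domain_id`, `enabled` on `/v3/projects`), so a client can avoid fetching all ~6000 projects at once. A small sketch that just builds such filtered list URLs; the endpoint is a placeholder:

```shell
# Build filtered identity-v3 project-list URLs; the Keystone endpoint is a
# placeholder, and the query attributes are standard v3 list filters.
KEYSTONE="https://keystone.example.org:5000"
{
  for q in "domain_id=default&enabled=true" "name=myproject"; do
    echo "GET $KEYSTONE/v3/projects?$q"
  done
} > ./project-queries.txt
cat ./project-queries.txt
```

These requests would still need an `X-Auth-Token` header in practice; the point is only that server-side filtering narrows the result set, which is the kind of behaviour a filtering/dynamic-listing Horizon panel could lean on.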
> For the keystone team, we were wondering whether there is anything we can do with the API in order to simplify the aforementioned horizon proposal; we are hoping to get both teams on board. Based on Julia's reply, it seems pagination for the "users" API might still be a non-starter. You could probably look to implement it for other resources like projects and domains though. Once available in Keystone, it should be pretty easy to add to Horizon (pagination is available elsewhere). I know Keystone is undergoing a bit of a revival right now so there's a chance this will get more traction...though of course you'll still need to add stuff to Horizon at a later date. > We are ready to open up a bug report and start subsequent development should this be supported by both teams! Horizon uses blueprints, as discussed in the Horizon contributors guide [1]. Creating an initial blueprint and some PoC patches might be a good start? Keystone uses blueprints and specs [2][3] so if you want to extend this then you'll probably need to draft a spec and then the PoC patches to prove out the idea. Stephen [1] https://docs.openstack.org/horizon/latest/contributor/contributing.html [2] https://docs.openstack.org/keystone/latest/contributor/contributing.html [3] https://opendev.org/openstack/keystone-specs > > I am available on both the openstack-horizon and openstack-keystone IRC channels under the nickname sdrozdov, please feel free to reach out. > Best Regards, > Sergey Drozdov > Software Engineer > The Hut Group > From katonalala at gmail.com Tue Jul 26 12:18:41 2022 From: katonalala at gmail.com (Lajos Katona) Date: Tue, 26 Jul 2022 14:18:41 +0200 Subject: [Neutron][Octavia][ovn-octavia-provider] proposing Fernando Royo for ovn-octavia-provider core reviewer Message-ID: Hi I would like to propose Fernando Royo (froyo) as a core reviewer to the ovn-octavia-provider project. Fernando is very active in the project (see [1] and [2]).
As ovn-octavia-provider is a link between Neutron and Octavia I ask both Neutron and Octavia cores to vote by answering to this thread, to have a final decision. Thanks for your consideration. [1]: https://review.opendev.org/q/owner:froyo%2540redhat.com [2]: https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=60 Cheers Lajos -------------- next part -------------- An HTML attachment was scrubbed... URL: From dev.faz at gmail.com Tue Jul 26 12:23:32 2022 From: dev.faz at gmail.com (Fabian Zimmermann) Date: Tue, 26 Jul 2022 14:23:32 +0200 Subject: [kolla] RabbitMQ High Availability In-Reply-To: References: Message-ID: Hi, try to upgrade your RabbitMQ; as experience shows, RabbitMQ gets better with every version. In the above posts on the list, we did some tests and the results were written into https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit To summarize, the most stable config seems to be: durable queues + HA replication *only* for long-living queues, and non-replicated for the short-living ones. It's also the config used by the openstack-ansible project. This is mostly achieved with RabbitMQ policies, as documented in the wiki. If you also have issues while all 3 nodes are running, it may be useful to clear your rabbitmq/vhost/mnesia and start from a clean data-dir. Fabian Am Mo., 25. Juli 2022 um 06:01 Uhr schrieb Tan Tran Trong : > > Hi, > My RMQ version is: 3.8.32 > I deployed the xena version using kolla-ansible on Ubuntu 20.04.
> Right now my cluster is running with no HA + amqp_durable_queues = False; when I shut down 1 controller and create an instance I get this error on nova-scheduler: > 2022-07-25 10:36:41.496 688 ERROR root oslo_messaging.exceptions.MessageDeliveryFailure: Unable to connect to AMQP server on x.x.x.x:5672 after inf tries: Queue.declare: (404) NOT_FOUND - home node 'rabbit at control02' of durable queue > 'scheduler' in vhost '/' is down or inaccessible > > Regards, > Tan > > On Sun, Jul 24, 2022 at 1:20 AM Satish Patel wrote: >> >> Something is wrong with your OpenStack or RabbitMQ version. Make sure you are not dealing with a bug. I have a 3-node cluster and it always survives if I shut down one of the controller nodes. It works perfectly fine without issue, even with an HA or non-HA config. >> >> What version of openstack and rabbitMQ are you running ? >> >> Sent from my iPhone >> >> On Jul 23, 2022, at 1:29 PM, Tan Tran Trong wrote: >> >> Hello, >> Thank you guys for your links. Actually I moved from no durable queues + no HA policy to durable queues + ha-all policy. The result is still the same. I tried tuning it using https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit but I am still missing something, I guess. >> @Albert: Have you tested the case where you shut down 1 controller -> things work -> power it on -> shut down another controller? In my case the cluster is not stable after that. >> And by "work fine" you mean you don't have to do anything (restart rabbitmq, restart openstack services) when 1 controller is down, do you? I know it sounds silly, but we ended up using the internal keepalived VIP only for all transport settings, which removes load balancing but keeps my cluster stable when 1 node is down; I really don't know if it will cause trouble later when the cluster grows. >> >> Regards, >> Tan >> >> >> On Fri, Jul 22, 2022 at 10:53 PM Albert Braden wrote: >>> >>> The default RMQ config is broken. You're on the right track with setting durable_queues, but there's more to do.
I'm running kolla Train with mirrored/durable queues and my clusters work fine with a controller down. One issue that we faced after setting durable was that we weren't running redis, and then when we tried to run it the network was blocking the port, but eventually we got it working. >>> >>> Some have recommended not mirroring queues; I haven't tried that. If anyone has successfully set up HA without mirrored queues, I'd be interested to hear about how you did it. >>> >>> Here are some helpful links: >>> >>> https://wiki.openstack.org/wiki/Large_Scale_Configuration_Rabbit >>> https://lists.openstack.org/pipermail/openstack-discuss/2021-November/026074.html >>> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016362.html >>> https://lists.openstack.org/pipermail/openstack-discuss/2020-August/016524.html >>> https://review.opendev.org/c/openstack/kolla-ansible/+/822191 >>> https://review.opendev.org/c/openstack/kolla-ansible/+/824994 >>> On Thursday, July 21, 2022, 02:42:42 PM EDT, Tan Tran Trong wrote: >>> >>> >>> Hello, >>> I'm trying to figure out how to configure RabbitMQ to make it highly available. I have 3 controller nodes and 2 compute nodes, deployed with kolla with a mostly default configuration. RabbitMQ is set to ha-all for all queues on all nodes, with amqp_durable_queues = True. >>> My problem is that when I shut down 1 controller node (or 1 RabbitMQ container) (master or slave) the whole cluster becomes unstable. Some instances can not be created and are stuck on Scheduling or Block Device Mapping, the volumes are not shown or are stuck on creating, the compute nodes are reported dead randomly,... >>> I'm looking for documentation on how OpenStack uses RabbitMQ, OpenStack's behavior when a RabbitMQ node is down, and a way to make RabbitMQ HA in a stable way. Do you have any recommendation?
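[Editor's note] The policy-based split Fabian recommends (mirror only the long-lived queues, leave the short-lived RPC reply and fanout queues unreplicated) is typically expressed as a RabbitMQ policy whose queue-name regex excludes reply_* and *_fanout_* queues. The sketch below shows only the matching logic; the exact pattern is modelled on the one documented in the Large Scale SIG wiki, so verify it against your own deployment before setting it with `rabbitmqctl set_policy`.

```python
import re

# Negative lookahead: skip amq.* system queues, anything containing
# "_fanout_" and anything starting with "reply_". Everything else
# (the long-lived service queues) would get the HA policy applied.
HA_PATTERN = re.compile(r"^(?!(amq\.)|(.*_fanout_)|(reply_)).*")


def is_mirrored(queue_name):
    """Return True if the HA policy pattern would match this queue."""
    return HA_PATTERN.match(queue_name) is not None


if __name__ == "__main__":
    for q in ["scheduler", "reply_8e3f", "neutron_fanout_12ab", "amq.gen-XYZ"]:
        print(q, "->", "mirrored" if is_mirrored(q) else "transient")
```

The design rationale is that reply and fanout queues are created per connection and discarded quickly, so mirroring them only adds replication traffic and failover churn without improving availability.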
>>> >>> TIA, >>> Tan From sbauza at redhat.com Tue Jul 26 12:24:01 2022 From: sbauza at redhat.com (Sylvain Bauza) Date: Tue, 26 Jul 2022 14:24:01 +0200 Subject: [nova] Hold your rechecks In-Reply-To: References: Message-ID: Le mar. 26 juil. 2022 à 09:57, Sylvain Bauza a écrit : > > Le mar. 26 juil. 2022 à 09:54, Sylvain Bauza a écrit : > >> We're currently investigating a quite impactful issue in both nova-next >> and ovs-hybrid-plug jobs that prevents Zuul from being happy. >> https://bugs.launchpad.net/nova/+bug/1940425 >> >> So, we eventually found the issue; it was because of a new os-vif release. We don't know yet which patch from os-vif 3.0.0 is creating this problem, but until we fix it, we will not accept this release : https://review.opendev.org/c/openstack/requirements/+/851002 https://review.opendev.org/c/openstack/nova/+/850998 Until both changes are merged, please continue to not recheck, or rebase your change. -Sylvain > In the meantime, as the failure rate is pretty high, doing rechecks >> (against this bug number of course, you already know to NOT do blind >> rechecks [1]) doesn't help and just consumes constrained CI resources for a >> very little and unpredictable gain. >> >> > > Aaaaaand I forgot to add the footnote: > > [1] > https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures > > Thanks, >> -Sylvain >> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ralonsoh at redhat.com Tue Jul 26 12:52:05 2022 From: ralonsoh at redhat.com (Rodolfo Alonso Hernandez) Date: Tue, 26 Jul 2022 14:52:05 +0200 Subject: [Neutron][Octavia][ovn-octavia-provider] proposing Fernando Royo for ovn-octavia-provider core reviewer In-Reply-To: References: Message-ID: +1 from me, he is a great collaborator to Octavia and Neutron. On Tue, Jul 26, 2022 at 2:48 PM Lajos Katona wrote: > Hi > > I would like to propose Fernando Royo (froyo) as a core reviewer to > the ovn-octavia-provider project.
> Fernando is very active in the project (see [1] and [2]). > > As ovn-octavia-provider is a link between > Neutron and Octavia I ask both > Neutron and Octavia cores to vote by answering to this thread, to have a > final decision. > Thanks for your consideration. > > [1]: > https://review.opendev.org/q/owner:froyo%2540redhat.com > [2]: > > https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=60 > > Cheers > Lajos > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michal.arbet at ultimum.io Tue Jul 26 12:56:10 2022 From: michal.arbet at ultimum.io (Michal Arbet) Date: Tue, 26 Jul 2022 14:56:10 +0200 Subject: Need help on rabbitmq In-Reply-To: References: Message-ID: Hi, We also had issues with rabbitmq and heartbeats. Did you investigate whether this is a bug? Or was it a regular issue in your case? Thanks Michal Arbet Openstack Engineer Ultimum Technologies a.s. Na Poříčí
URL: From skaplons at redhat.com Tue Jul 26 13:08:26 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 26 Jul 2022 15:08:26 +0200 Subject: [Neutron][Octavia][ovn-octavia-provider] proposing Fernando Royo for ovn-octavia-provider core reviewer In-Reply-To: References: Message-ID: <2642162.mvXUDI8C0e@p1> Hi, Dnia wtorek, 26 lipca 2022 14:18:41 CEST Lajos Katona pisze: > Hi > > I would like to propose Fernando Royo (froyo) as a core reviewer to > the ovn-octavia-provider project. > Fernando is very active in the project (see [1] and [2]). > > As ovn-octavia-provider is a link between Neutron and Octavia I ask both > Neutron and Octavia cores to vote by answering to this thread, to have a > final decision. > Thanks for your consideration. > > [1]: > https://review.opendev.org/q/owner:froyo%2540redhat.com > [2]: > https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=60 > > Cheers > Lajos > Definitely +1 for Fernando :) -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From satish.txt at gmail.com Tue Jul 26 13:39:47 2022 From: satish.txt at gmail.com (Satish Patel) Date: Tue, 26 Jul 2022 09:39:47 -0400 Subject: Need help on rabbitmq In-Reply-To: References: Message-ID: It?s hard to guess without release and version info. I had issue like that which fixed by upgrade of ampq library in wallaby release. Sent from my iPhone > On Jul 26, 2022, at 9:05 AM, Michal Arbet wrote: > > ? > Hi, > > We also had issues with rabbitmq and heartbeats, did you investigate if this is bug ? Or was it regular issue in your case ? > > Thanks > Michal Arbet > Openstack Engineer > > Ultimum Technologies a.s. > Na Po???? 
1047/26, 11000 Praha 1 > Czech Republic > > +420 604 228 897 > michal.arbet at ultimum.io > https://ultimum.io > > LinkedIn | Twitter | Facebook > > > ?t 21. 6. 2022 v 9:57 odes?latel AJ_ sunny napsal: >> Hi team >> >> I am using kolla-ansible based openstack infra and getting below error in logs seems frequent rabbitmq disconnections >> >> <0.5023.16> closing AMQP connection <0.5023.16> (10.80.0.13:40356 -> 10.80.0.13:5672 - mod_wsgi:19:d3196668-57e5-46dc-8b69-78d73b5873a0): >> missed heartbeats from client, timeout: 60s >> >> AMQP server on 10.80.0.13:5672 is unreachable: [Errno 104] Connection reset by peer. Trying again in 1 seconds.: ConnectionResetError: [Errno 104] Connection reset by peer >> >> >> Is this bug or any resolution for fixing this issue? >> >> >> Thanks >> Arihant Jain -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaplons at redhat.com Tue Jul 26 13:47:46 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Tue, 26 Jul 2022 15:47:46 +0200 Subject: [nova][neutron] Hold your rechecks In-Reply-To: References: Message-ID: <5845893.lOV4Wx5bFT@p1> Hi, Dnia wtorek, 26 lipca 2022 09:54:23 CEST Sylvain Bauza pisze: > We're currently investigating a quite impacting issue in both nova-next and > ovs-hybrid-plug jobs that prevent Zuul to be happy. > https://bugs.launchpad.net/nova/+bug/1940425 > > In the meantime, as the failure rate is pretty high, doing rechecks > (against this bug number of course, you already know to NOT do blind > rechecks [1]) doen't help and just consumes constrained CI resources for a > very little and unpredictable gain. > > Thanks, > -Sylvain > Adding neutron to the topic as this same issue impacts neutron-ovs multinode jobs which are failing pretty often recently. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From ltomasbo at redhat.com Tue Jul 26 15:37:35 2022 From: ltomasbo at redhat.com (Luis Tomas Bolivar) Date: Tue, 26 Jul 2022 17:37:35 +0200 Subject: [Neutron][Octavia][ovn-octavia-provider] proposing Fernando Royo for ovn-octavia-provider core reviewer In-Reply-To: <2642162.mvXUDI8C0e@p1> References: <2642162.mvXUDI8C0e@p1> Message-ID: +1 from me too! He is doing a great job on the ovn-octavia side! On Tue, Jul 26, 2022 at 3:28 PM Slawek Kaplonski wrote: > Hi, > > Dnia wtorek, 26 lipca 2022 14:18:41 CEST Lajos Katona pisze: > > Hi > > > > I would like to propose Fernando Royo (froyo) as a core reviewer to > > the ovn-octavia-provider project. > > Fernando is very active in the project (see [1] and [2]). > > > > As ovn-octavia-provider is a link between Neutron and Octavia I ask both > > Neutron and Octavia cores to vote by answering to this thread, to have a > > final decision. > > Thanks for your consideration. > > > > [1]: > > https://review.opendev.org/q/owner:froyo%2540redhat.com > > [2]: > > > https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=60 > > > > Cheers > > Lajos > > > > Definitely +1 for Fernando :) > > -- > Slawek Kaplonski > Principal Software Engineer > Red Hat -- LUIS TOM?S BOL?VAR Principal Software Engineer Red Hat Madrid, Spain ltomasbo at redhat.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Tue Jul 26 15:38:26 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 26 Jul 2022 15:38:26 +0000 Subject: [dev][security-sig][tc] Please follow up on privately reported defects Message-ID: <20220726153826.hdi7ycshtdo57xwr@yuggoth.org> First, a huge thank you to everyone who is staying on top of reports of suspected security vulnerabilities! Unfortunately, not everyone has been, which is the reason for this E-mail. 
It's common practice that, if someone finds a problem in software which they think might be an exploitable security vulnerability, they report it initially in private in order to give the project's maintainers an opportunity to correct things and have patches ready before it becomes common knowledge. This works okay as long as people actually look at these privately reported bugs (or at the project's bugs at all). For OpenStack deliverables whose maintainers opt them into VMT oversight[*], these private reports are initially handled by a vulnerability coordinator in order to make sure that they're probably reported against the correct project, that the project maintainers who have volunteered to handle those sorts of reports are correctly subscribed, and that everyone is reminded of the ground rules and timetable for resolving reports under such an embargo. For other OpenStack deliverables, VMT members may still weigh in on those private reports and offer assistance or guidance on handling and reporting procedures. Our VMT members do not, however, have sufficient time in their day to keep individually reaching out to project maintainers in order to remind them to do their part. OpenStack is a community which has optimized around transparency and public collaboration, so it's not surprising that confirming bugs and reviewing changes in private is clunky and unpleasant. This is, if anything, a reason to prioritize triaging private bug reports in order to make sure they're really a bug (not just a misunderstanding or misconfiguration), and represent a severe enough risk to warrant continued handling in secret. Many of the private bug reports currently pending could probably be switched to public and even perhaps closed today, if maintainers for their projects would just find a moment to take a look at them. 
For the ones which can't be handled right away, at least leave a quick comment letting the reporter and the VMT members know you're taking a look, or any first impressions or questions you might have. If you're interested in helping a project resolve reported vulnerabilities and aren't yet a member of their security review team in the appropriate bug tracker (usually *-coresec in LP or openstack-security-* in SB), then please reach out to the appropriate PTL and let them know. If you're a PTL and you were never made a member of the security review team for your project or are having trouble adding willing volunteers, please follow up here on the ML or feel free to reach out to me directly for assistance. For those who read this far, thank you for your time, and please remember to follow up on those bugs! [*] https://security.openstack.org/repos-overseen.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From levonmelikbekjan at yahoo.de Tue Jul 26 15:59:45 2022 From: levonmelikbekjan at yahoo.de (Levon Melikbekjan) Date: Tue, 26 Jul 2022 15:59:45 +0000 Subject: Customization of scheduler manager References: Message-ID: Hi all, as part of my thesis, I modified the Openstack version Train with the intention of sharing resources, which are not in use with other users to ensure maximum utilization. So far everything is working fine except for the last step. Let me first explain my work. The system works according to the following rules: 1. Users who own compute hosts within the private cloud have the highest priority on their hosts. 2. Users who do not own hosts within the private cloud are low priority users who can instantiate their virtual machine on unused resources (on the hosts that have an owner). 3. 
If the owner wants to use his resources that are currently occupied, the foreign VM must be suspended to free up resources for the owners VM. 4. An owner is a low priority user on foreign hosts. Everything works automatically and generically, but in step 3 I do not suspend those VMs, I delete them. I want the VMs to be suspended to be able to restart them with the intention of being able to continue processes that are paused, and I know there is maybe a REST API functions that provides this functionality. A user should be able to continue his work after resources become free again. It would be annoying if long-running processes were killed. My question is this: Is the suspend function the right choice? Are the resources released when I use the suspend function? Thank you & Best regards, Levon Melikbekjan -------------- next part -------------- An HTML attachment was scrubbed... URL: From smooney at redhat.com Tue Jul 26 16:42:40 2022 From: smooney at redhat.com (Sean Mooney) Date: Tue, 26 Jul 2022 17:42:40 +0100 Subject: Customization of scheduler manager In-Reply-To: References: Message-ID: <7a64ec3a3c847563f34556dc7eb16745489484f0.camel@redhat.com> On Tue, 2022-07-26 at 15:59 +0000, Levon Melikbekjan wrote: > Hi all, > > as part of my thesis, I modified the Openstack version Train with the intention of sharing resources, which are not in use with other users to ensure maximum utilization. So far everything is working fine except for the last step. Let me first explain my work. > > The system works according to the following rules: > > > 1. Users who own compute hosts within the private cloud have the highest priority on their hosts. > 2. Users who do not own hosts within the private cloud are low priority users who can instantiate their virtual machine on unused resources (on the hosts that have an owner). > 3. If the owner wants to use his resources that are currently occupied, the foreign VM must be suspended to free up resources for the owners VM. > 4. 
An owner is a low priority user on foreign hosts. > > Everything works automatically and generically, but in step 3 I do not suspend those VMs, I delete them. I want the VMs to be suspended to be able to restart them with the intention of being able to continue processes that are paused, and I know there is maybe a REST API functions that provides this functionality. A user should be able to continue his work after resources become free again. It would be annoying if long-running processes were killed. > > My question is this: > Is the suspend function the right choice? Are the resources released when I use the suspend function? no the resouces are not releassed when you suspend. if i was to do this i woudl shelve the instance so that the user can unshelve it to a differnt host if needed. what you are discirbing is somthing we have previously considerd call premetible instances or spot instnaces to use aws terminology. shelve will preserve the vms ports, volumes and root disk creating a snapthot storign it to glance. when the user wants to resume there low priority instance they can unshleve it and it will go to a differnt host. note that due to how nova and placment works you cant share resouce in nova the way you are trying to do becasue placment will still prevent the oversubsctiion and in traint placment is not optional. so you will never exceed the overallocation ratio unless you have altered that by say setting it very high or not creating allcoations for the low priority instances. > > Thank you & Best regards, > > Levon Melikbekjan From ignaziocassano at gmail.com Tue Jul 26 18:46:43 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Tue, 26 Jul 2022 20:46:43 +0200 Subject: Openstack routed provider network Message-ID: Hello All, I am reading documentation about routed provider network. It reports: " Routed provider networks imply that compute nodes reside on different segments. " What does mean ? What is a segment it this case ? 
Thanks for helping me" Ignazio -------------- next part -------------- An HTML attachment was scrubbed... URL: From jdratlif at globalnoc.iu.edu Tue Jul 26 19:13:32 2022 From: jdratlif at globalnoc.iu.edu (John Ratliff) Date: Tue, 26 Jul 2022 15:13:32 -0400 Subject: [nova] hw_qemu_guest_agent image metadata property In-Reply-To: <638555685d8506074e3d586e0d3f716c89059567.camel@redhat.com> References: <226F59E8-216C-43EC-8EBA-D529AC0D99F2@globalnoc.iu.edu> <638555685d8506074e3d586e0d3f716c89059567.camel@redhat.com> Message-ID: Thanks, that's what we thought as well. We went with adding the properties to the DB and will restart the images as we move forward. --John ?On 7/25/22, 7:04 PM, "Sean Mooney" wrote: On Mon, 2022-07-25 at 18:07 -0400, Laurent Dumont wrote: > I don't think this is possible. It might be possible to add it to the image > and then do a rebuild? But that's pretty close to recreating your instances. thats the only offically supported way to do this yes. the hack to avoid that is to insert the mising image properties into the db directly then hard reboot the guest if it does not change schduleing. the qemu guest agent does not. for other image properties you might need to cold/live migrate or shelve and unshleve to place it on a valid host after modifying the db. rebuild is the prefered approch in this case if the guest agent is requried. > > On Mon, Jul 25, 2022 at 2:59 PM John Ratliff > wrote: > > > We forgot to set this property on our image before creating several > > servers in our openstack cluster. I tried adding this property to the > > volume of one of the hosts and shutting it down / turning it back on, but > > it didn?t help. > > > > > > > > I found this blog post showing one way that worked, but it requires manual > > database editing. I was hoping for something a bit less typo-prone, like > > something in the openstack cli. 
> > > > > > https://www.thenoccave.com/2021/07/openstack-qemu-guest-tools-after-creation/ > > > > > > > > I can automate the above with ansible if that?s the only way, but if > > someone has a better suggestion that doesn?t involve recreating our > > instances, I would appreciate it. > > > > > > > > Thanks. > > > > > > > > --John Ratliff > > > > > > -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5210 bytes Desc: not available URL: From johnsomor at gmail.com Tue Jul 26 19:33:14 2022 From: johnsomor at gmail.com (Michael Johnson) Date: Tue, 26 Jul 2022 12:33:14 -0700 Subject: [Neutron][Octavia][ovn-octavia-provider] proposing Fernando Royo for ovn-octavia-provider core reviewer In-Reply-To: References: <2642162.mvXUDI8C0e@p1> Message-ID: +1 from me. He has done great work getting the status updates working in the OVN provider. Michael On Tue, Jul 26, 2022 at 8:58 AM Luis Tomas Bolivar wrote: > > +1 from me too! He is doing a great job on the ovn-octavia side! > > On Tue, Jul 26, 2022 at 3:28 PM Slawek Kaplonski wrote: >> >> Hi, >> >> Dnia wtorek, 26 lipca 2022 14:18:41 CEST Lajos Katona pisze: >> > Hi >> > >> > I would like to propose Fernando Royo (froyo) as a core reviewer to >> > the ovn-octavia-provider project. >> > Fernando is very active in the project (see [1] and [2]). >> > >> > As ovn-octavia-provider is a link between Neutron and Octavia I ask both >> > Neutron and Octavia cores to vote by answering to this thread, to have a >> > final decision. >> > Thanks for your consideration. 
>> > >> > [1]: >> > https://review.opendev.org/q/owner:froyo%2540redhat.com >> > [2]: >> > https://www.stackalytics.io/report/contribution?module=neutron-group&project_type=openstack&days=60 >> > >> > Cheers >> > Lajos >> > >> >> Definitely +1 for Fernando :) >> >> -- >> Slawek Kaplonski >> Principal Software Engineer >> Red Hat > > > > -- > LUIS TOM?S BOL?VAR > Principal Software Engineer > Red Hat > Madrid, Spain > ltomasbo at redhat.com > From vishwanath.ne at gmail.com Tue Jul 26 20:57:48 2022 From: vishwanath.ne at gmail.com (Vish) Date: Tue, 26 Jul 2022 13:57:48 -0700 Subject: [Kolla][14.1.0][Yoga][Fluentd] fluentd container restarting indefinitely In-Reply-To: References: Message-ID: Pierre, You are right, I was running an old fluentd image, after building and deploying a new fluentd image for yoga everything started working. Thank you for your suggestion. Thanks Vish On Mon, Jul 25, 2022 at 2:26 PM Pierre Riteau wrote: > Are you sure you are running a Yoga fluentd image? I am seeing higher > version numbers in logs of my Wallaby image: > > 2022-07-25 14:03:29 +0000 [info]: parsing config file is succeeded > path="/etc/td-agent/td-agent.conf" > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-calyptia-monitoring' > version '0.1.3' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-elasticsearch' > version '5.2.3' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-elasticsearch' > version '5.2.2' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-flowcounter-simple' > version '0.1.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-grep' version '0.3.4' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-grok-parser' version > '2.6.2' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-kafka' version > '0.17.5' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-metrics-cmetrics' > version '0.1.2' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-opensearch' version > '1.0.4' > 2022-07-25 14:03:29 +0000 [info]: gem 
'fluent-plugin-parser' version > '0.6.1' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-prometheus' version > '2.0.3' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-prometheus' version > '2.0.2' > 2022-07-25 14:03:29 +0000 [info]: gem > 'fluent-plugin-prometheus_pushgateway' version '0.1.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-record-modifier' > version '2.1.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-rewrite-tag-filter' > version '2.4.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-s3' version '1.6.1' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-sd-dns' version > '0.1.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-systemd' version > '1.0.5' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-utmpx' version '0.5.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluent-plugin-webhdfs' version > '1.5.0' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluentd' version '1.14.6' > 2022-07-25 14:03:29 +0000 [info]: gem 'fluentd' version '0.12.43' > > On Mon, 25 Jul 2022 at 19:50, Vishwanath wrote: > >> Hi Pierre, >> >> As far as i know we are not using any custom configs, Attached is >> td-agent.conf file from one of the controller nodes. Please check attached. >> thanks >> >> >> >> >> Thanks >> Vish >> (408)-471-8579 >> >> >> On Thu, Jul 7, 2022 at 1:49 AM Pierre Riteau wrote: >> >>> Hello Vish, >>> >>> Are you using a custom configuration for fluentd? Could you please share >>> your generated td-agent.conf? >>> >>> Best wishes, >>> Pierre Riteau (priteau) >>> >>> On Wed, 6 Jul 2022 at 23:34, Vishwanath wrote: >>> >>>> Hello all, >>>> >>>> I have upgraded openstack, we are currently running on 14.1.0. I >>>> noticed fluentd container restarting indefinitely. The message I see under >>>> /var/log/kolla/fluentd/fluentd.log is as follows, Any thoughts on how to >>>> fix this ? 
I noticed a similar issue back in 2017 from this post - >>>> https://bugs.launchpad.net/kolla-ansible/+bug/1663126 >>>> >>>> *error log:* >>>> *2022-07-06 21:20:42 +0000 [error]: config error >>>> file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >>>> error="'format' parameter is required"* >>>> >>>> >>>> *full logs:* >>>> 2022-07-06 21:20:42 +0000 [info]: parsing config file is succeeded >>>> path="/etc/td-agent/td-agent.conf" >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >>>> version '5.2.3' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-elasticsearch' >>>> version '4.1.1' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grep' version >>>> '0.3.4' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-grok-parser' >>>> version '2.6.2' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-kafka' version >>>> '0.14.1' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-parser' version >>>> '0.6.1' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' >>>> version '2.0.3' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-prometheus' >>>> version '1.8.2' >>>> 2022-07-06 21:20:42 +0000 [info]: gem >>>> 'fluent-plugin-prometheus_pushgateway' version '0.0.2' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-record-modifier' >>>> version '2.1.0' >>>> 2022-07-06 21:20:42 +0000 [info]: gem >>>> 'fluent-plugin-rewrite-tag-filter' version '2.4.0' >>>> 2022-07-06 21:20:42 +0000 [info]: gem >>>> 'fluent-plugin-rewrite-tag-filter' version '2.3.0' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-s3' version '1.4.0' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-systemd' version >>>> '1.0.2' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-td' version '1.1.0' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluent-plugin-webhdfs' version >>>> '1.2.5' >>>> 2022-07-06 21:20:42 +0000 [info]: gem 'fluentd' version '1.11.2' >>>> 2022-07-06 21:20:42 +0000 
[info]: gem 'fluentd' version '0.12.43' >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(cinder-api-access|cloudkitty-api-access|gnocchi-api-access|horizon-access|keystone-apache-admin-access|keystone-apache-public-access|monasca-api-access|octavia-api-access|placement-api-access)$/, >>>> "", "apache_access", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(aodh_wsgi_access|barbican_api_uwsgi_access|zun_api_wsgi_access|vitrage_wsgi_access)$/, >>>> "", "wsgi_access", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(nova-api|nova-compute|nova-compute-ironic|nova-conductor|nova-manage|nova-novncproxy|nova-scheduler|nova-placement-api|placement-api|privsep-helper)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(sahara-api|sahara-engine)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(neutron-server|neutron-openvswitch-agent|neutron-ns-metadata-proxy|neutron-metadata-agent|neutron-l3-agent|neutron-dhcp-agent)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(magnum-conductor|magnum-api)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(keystone)$/, "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(heat-engine|heat-api|heat-api-cfn)$/, "", >>>> "openstack_python", nil] >>>> 
2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(glance-api)$/, "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(cloudkitty-storage-init|cloudkitty-processor|cloudkitty-dbsync|cloudkitty-api)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(ceilometer-polling|ceilometer-agent-notification)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(barbican-api|barbican-worker|barbican-keystone-listener|barbican-db-manage|app)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(aodh-notifier|aodh-listener|aodh-evaluator|aodh-dbsync)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(cyborg-api|cyborg-conductor|cyborg-agent)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(cinder-api|cinder-scheduler|cinder-manage|cinder-volume|cinder-backup|privsep-helper)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(mistral-server|mistral-engine|mistral-executor)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(designate-api|designate-central|designate-manage|designate-mdns|designate-sink|designate-worker)$/, >>>> "", 
"openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(manila-api|manila-data|manila-manage|manila-share|manila-scheduler)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(trove-api|trove-conductor|trove-manage|trove-taskmanager)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(murano-api|murano-engine)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(senlin-api|senlin-conductor|senlin-engine|senlin-health-manager)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(watcher-api|watcher-applier|watcher-db-manage|watcher-decision-engine)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(freezer-api|freezer-api_access|freezer-manage)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(octavia-api|octavia-health-manager|octavia-housekeeping|octavia-worker)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(zun-api|zun-compute|zun-cni-daemon)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(kuryr-server)$/, "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding 
rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(gnocchi-api|gnocchi-statsd|gnocchi-metricd|gnocchi-upgrade)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(ironic-api|ironic-conductor|ironic-inspector)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(tacker-server|tacker-conductor)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(vitrage-ml|vitrage-notifier|vitrage-graph|vitrage-persistor)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(blazar-api|blazar-manager)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, >>>> /^(monasca-api|monasca-notification|monasca-persister|agent-collector|agent-forwarder|agent-statsd)$/, >>>> "", "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /^(masakari-engine|masakari-api)$/, "", >>>> "openstack_python", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> programname >>>> [#>>> @keys="programname">, /.+/, "", "unmatched", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> Payload >>>> [#>>> @keys="Payload">, /^\d{6}/, "", "infra.mariadb.mysqld_safe", nil] >>>> 2022-07-06 21:20:42 +0000 [info]: adding rewrite_tag_filter rule: >>>> Payload >>>> [#>>> @keys="Payload">, /^\d{4}-\d{2}-\d{2}/, "", "infra.mariadb.mysqld", nil] >>>> *2022-07-06 21:20:42 +0000 [error]: config error >>>> 
file="/etc/td-agent/td-agent.conf" error_class=Fluent::ConfigError >>>> error="'format' parameter is required"* >>>> >>>> >>>> Thanks >>>> Vish >>>> >>>> -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Wed Jul 27 07:27:08 2022 From: katonalala at gmail.com (Lajos Katona) Date: Wed, 27 Jul 2022 09:27:08 +0200 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Hi, I suppose you referenced this document: https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html In Neutron terminology, segments appear on different layers; on the API, a segment is a network type / seg. id / phys-net / net uuid tuple (see [1]). What is interesting here is that this segment has to have a representation on the compute node, where the l2-agent (ovs-agent) can know which segment is the one it can bind ports to. That cfg option is bridge_mappings in ml2_conf.ini, where the admin/deployer can state which bridge (like br-ex) is connected to which provider network (out of Openstack's control). So for example a sample config in ml2_conf.ini looks like this: bridge_mappings = public:br-ex,physnet1:br0 This means that on that compute node, VM ports can be bound which have a network segment like this: ( network_type: vlan, physical_network: *physnet1*, segmentation_id: 101, network_id: 1234-56..) More computes can have the same bridge-physnet mapping; the deployer's responsibility is to have these connected to the same switch, whatever. [1]: https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-segment-detail#segments Ignazio Cassano wrote (on 26 Jul 2022, Tue, 21:04): > Hello All, I am reading documentation about routed provider network. > It reports: " > Routed provider networks imply that compute nodes reside on different > segments. " > > What does mean ? > What is a segment it this case ? 
> Thanks for helping me" > Ignazio > -------------- next part -------------- An HTML attachment was scrubbed... URL: From p.aminian.server at gmail.com Wed Jul 27 07:55:12 2022 From: p.aminian.server at gmail.com (Parsa Aminian) Date: Wed, 27 Jul 2022 12:25:12 +0430 Subject: openstack router with flat Message-ID: Hello, is it possible to assign a valid IP to an openstack router directly, without NAT? As I know, openstack needs an invalid fixed IP when using a router, and the valid IP should be added as a floating IP. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Jul 27 07:57:33 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 27 Jul 2022 09:57:33 +0200 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Hello, thanks for your reply. Is the segment id the vlan id (in your example 101)? My understanding is that some compute nodes in a rack are connected to one vlan, and others to another vlan. Then I can create a network (segmentation1) and the scheduler puts the vm on a compute node where the vlan is present. So for users only the segmentation1 network exists, and they do not know it is split across more vlans. Is it correct? Ignazio On Wed 27 Jul 2022 at 09:27, Lajos Katona wrote: > Hi, > I suppose you referenced this document: > https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html > > In Neutron terminology segments appear on different layers, on the API a > segment is a network type / seg. id / phys-net / net uuid tuple (see [1]). > What is interesting here that this segment has to be a representation on > the compute where l2-agent (ovs-agent) can know which segment is the one it > can bind ports. > That cfg option is in ml2_conf.ini, and bridge_mappings, where the > admin/deployer can state which bridge (like br-ex) is connected to which > provider network (out of Openstack's control). 
> So for example a sample config in ml_conf.ini like this: > > bridge_mappings = public:br-ex,physnet1:br0 > > Means that on that compute VM ports can be bound which has a network > segment like this: ( network_type: vlan, physical_network: *physnet1*, segmentation_id: > 101, network_id: 1234-56..) > More computes can have the same bridge-physnet mapping, the deployer's > responsibility is to have these connected to the same switch, whatever. > > [1]: > https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-segment-detail#segments > > Ignazio Cassano ezt ?rta (id?pont: 2022. j?l. > 26., K, 21:04): > >> Hello All, I am reading documentation about routed provider network. >> It reports: " >> Routed provider networks imply that compute nodes reside on different >> segments. " >> >> What does mean ? >> What is a segment it this case ? >> Thanks for helping me" >> Ignazio >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rdhasman at redhat.com Wed Jul 27 08:56:41 2022 From: rdhasman at redhat.com (Rajat Dhasmana) Date: Wed, 27 Jul 2022 14:26:41 +0530 Subject: [cinder] This week's meeting will be in video+IRC Message-ID: Hello Argonauts, This week's meeting (today) will be held in video + IRC mode with details as follows: Date: 27th July, 2022 Time: 1400 UTC Meeting link: https://bluejeans.com/556681290 IRC Channel: #openstack-meeting-alt Make sure you're connected to both the bluejeans meeting and IRC since we do roll call and also (sometimes) summarize the discussion points on IRC. Thanks and regards Rajat Dhasmana -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From skaplons at redhat.com Wed Jul 27 09:28:36 2022 From: skaplons at redhat.com (Slawek Kaplonski) Date: Wed, 27 Jul 2022 11:28:36 +0200 Subject: openstack router with flat In-Reply-To: References: Message-ID: <2271172.GsbXASJkJ2@p1> Hi, On Wednesday, 27 July 2022 at 09:55:12 CEST, Parsa Aminian wrote: > hello > is it possible to assign valid ip to openstack router directly without nat > ? as I know openstack need invalid fix ip when using router and valid ip > should be added as float ip . > I don't understand what You mean by "invalid" and "valid" IP address. Can You explain it? If You want to have a public IP address (external) directly in Your instance, You can plug Your instance directly into the provider network, without using a neutron router at all. -- Slawek Kaplonski Principal Software Engineer Red Hat -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: This is a digitally signed message part. URL: From zigo at debian.org Wed Jul 27 09:56:38 2022 From: zigo at debian.org (Thomas Goirand) Date: Wed, 27 Jul 2022 11:56:38 +0200 Subject: RAM and Storage requirements for Openstack cloud In-Reply-To: References: Message-ID: <9fed5a49-0813-4ed2-0619-1640ff6e2164@debian.org> On 7/22/22 08:49, Mahendra Paipuri wrote: > Hello all, > > We are going to deploy an Openstack cloud with researchers as the primary > target users. We will have around 100 GPUs in 12-15 servers with an > Infiniband interconnect. We still do not know the exact spec of the servers > or GPUs, but mostly we will have A100s and Intel Xeon processors. What > sort of RAM and storage requirements do we need for a cluster of this size? > Of course, this depends a lot on use cases, and this cloud will be > primarily used for HPC and AI. For the storage, we are mainly interested > in the block storage requirements for provisioning VMs. 
We will most > probably have a shared parallel file system as scratch and project spaces. > > Is there any rule of thumb to get to the RAM and storage requirement > numbers based on the compute infrastructure we will have? If we can estimate > a sort of "lower bound" that would be really helpful for us. If anyone > has clusters of this size at your organizations and if you can share > the RAM and storage details of your clouds, that would be very useful > for us too. > > Thanks a lot and have a great day!! > > Regards > > Mahendra Hi, Usually, on compute nodes, we set the Nova reserved RAM to 16 or 32 GB of RAM (depending on the number of instances), and the rest is for your VMs. So it really depends on your workload. If you're about to set up 15 servers with 100 A100s, that's 6 GPU boards per server, so probably you will want to run one VM per GPU. In such a case, just estimate how much RAM you want to assign to each VM, and multiply by the number of GPUs (6 in your case?). Let's say you want 32 GB of RAM per VM, then you'll be good with 256 GB of RAM per server. Now, for storage, the recommendation is 8 GB of RAM per OSD (and usually, you'll set 2 OSDs per drive). Of course, more RAM is ok. You may want to read this: https://docs.ceph.com/en/quincy/start/hardware-recommendations/ Does this help? Cheers, Thomas Goirand (zigo) From senrique at redhat.com Wed Jul 27 11:00:00 2022 From: senrique at redhat.com (Sofia Enriquez) Date: Wed, 27 Jul 2022 08:00:00 -0300 Subject: [cinder] Bug deputy report for week of 07-27-2022 Message-ID: This is a bug report from 07-20-2022 to 07-27-2022. Agenda: https://etherpad.opendev.org/p/cinder-bug-squad-meeting ----------------------------------------------------------------------------------------- Medium - https://bugs.launchpad.net/cinder/+bug/1982436 "Reimage volume API image is invalid, the status should not be downloading." Fix proposed to master. 
- https://bugs.launchpad.net/cinder/+bug/1982848 "Creating from source volume tries to create it multiple times if rekeying fails." Unassigned. Low - https://bugs.launchpad.net/cinder/+bug/1982350 "Infinidat Cinder driver multi-attach feature is broken." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1982405 "Infinidat Cinder driver generic volume migration feature is broken." Fix proposed to master. - https://bugs.launchpad.net/cinder/+bug/1982568 "PowerMax doesn't work with the workload in extra specs." Unassigned. - https://bugs.launchpad.net/cinder/+bug/1982891 "NFS Attach Encrypted fails only on the first attempt." Unassigned. Cheers, -- Sofía Enriquez she/her Software Engineer Red Hat PnT IRC: @enriquetaso @RedHat -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Wed Jul 27 13:02:57 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Wed, 27 Jul 2022 13:02:57 +0000 Subject: RAM and Storage requirements for Openstack cloud In-Reply-To: <9fed5a49-0813-4ed2-0619-1640ff6e2164@debian.org> References: <9fed5a49-0813-4ed2-0619-1640ff6e2164@debian.org> Message-ID: <20220727130256.4d5zrwhg2e7wxae6@yuggoth.org> On 7/22/22 08:49, Mahendra Paipuri wrote: > We are going to deploy an Openstack cloud with researchers as the primary > target users. [...] Aside from the technical recommendations, you might consider participating in the Scientific SIG or (depending on the areas of research you and your users are involved in) the Cloud Research SIG: https://governance.openstack.org/sigs/ If you're interested, you'll probably want to reach out to the chairs at the addresses listed there since their meetings may not be held with as much regularity as the linked documents suggest. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From amotoki at gmail.com Wed Jul 27 14:26:36 2022 From: amotoki at gmail.com (Akihiro Motoki) Date: Wed, 27 Jul 2022 23:26:36 +0900 Subject: [dev][horizon][keystone] In-Reply-To: <30f9387dfcc53d0ea797c06026ae235fd3a8f16b.camel@redhat.com> References: <201996A2-79AD-4F0C-8AAA-052AB8381C38@gmail.com> <30f9387dfcc53d0ea797c06026ae235fd3a8f16b.camel@redhat.com> Message-ID: On Tue, Jul 26, 2022 at 8:56 PM Stephen Finucane wrote: > > On Mon, 2022-07-25 at 18:20 +0100, Sergey Drozdov wrote: > > To whom it may concern, > > > > I have previously sent out an email about this topic but just wanted to run this by the community again in case it was missed. > > > > I have the following issue. The firm I am currently working at is running OpenStack at scale with circa 6000 different projects. Whenever we try to access the projects tab via Horizon we experience a timeout (accessing through the API works fine). > > > > For the horizon team, we were hoping to propose something akin to filtering/dynamic listing in order to rectify the issue. > > I suspect you'll have a difficult time getting this into Horizon since I'm > unsure how many people are actively working on the project. Something AJAX'y > whereby you only load the first N results and insist on more filtering (or use > of the API) for anything more would be a good start? I am not as active in horizon as I was before, so I waited for others to reply to the thread :p I talked with Sergey on the #-horizon IRC channel. As long as keystone does not support pagination for projects (and other resources), perhaps a way to mitigate the issue discussed here would be for horizon to emulate the pagination for projects: Horizon fetches all projects from the keystone API, and the horizon server side supports pagination by narrowing down to a subset of projects. 
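The server-side emulation described in that message — fetch the full project list from keystone once, then slice out only the requested page before rendering — can be sketched roughly as follows. The function and variable names here are illustrative assumptions, not Horizon's actual code:

```python
# Rough sketch of emulated (server-side) pagination: keystone returns
# the full project list, and the server narrows it down to one page
# before rendering. All names here are hypothetical.

def paginate_projects(projects, page=1, per_page=20):
    """Return one page of an already-fetched project list."""
    if page < 1:
        raise ValueError("page numbers start at 1")
    start = (page - 1) * per_page
    end = start + per_page
    return {
        "items": projects[start:end],
        "has_prev": page > 1,
        "has_next": end < len(projects),
    }

# With ~6000 projects, as in the deployment described in this thread,
# only one small slice is ever handed to the table renderer.
all_projects = [f"project-{i:04d}" for i in range(6000)]
page_two = paginate_projects(all_projects, page=2, per_page=20)
print(len(page_two["items"]), page_two["has_prev"], page_two["has_next"])  # 20 True True
```

This does not reduce the cost of the initial keystone call, but it caps the number of rows the templates have to render, which is where this thread suspects the time is going.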
The default implementation of the "Projects" panel is based on Django, and the keystone API itself works well with many projects, like the 6000 projects in this thread, so it looks like the bottleneck is in handling projects inside horizon, perhaps while rendering the project tables. Anyway, it would be nice if keystone supported pagination for projects, of course. > > For the keystone team, we were wondering whether there is anything we can do with the API in order to simplify the aforementioned horizon proposal; we are hoping to get both teams on board. > > Based on Julia's reply, it seems pagination for the "users" API might still be a > non-starter. You could probably look to implement it for other resources like > projects and domains though. Once available in Keystone, it should be pretty > easy to add to Horizon (pagination is available elsewhere). I know Keystone is > undergoing a bit of a revival right now so there's a chance this will get more > traction...though of course you'll still need to add stuff to Horizon at a later > date. It is good to hear about a kind of revival of keystone. How likely is it that keystone pagination support will happen? AFAIK, keystone dropped the pagination support as it is not easy to support pagination for some identity backends, so I am wondering if there is a magic way to support pagination well soon. If it happens soon, it is better for horizon to wait for the keystone pagination support again. > > > We are ready to open up a bug report and start subsequent development should this be supported by both teams! > > Horizon uses blueprints, as discussed in the Horizon contributors guide [1]. > Creating an initial blueprint and some PoC patches might be a good start? > Keystone uses blueprints and specs [2][3], so if you want to extend this then > you'll probably need to draft a spec and then the PoC patches to prove out the idea. 
Mainly from the horizon perspective, the most challenging part is how to coordinate the effort between keystone and horizon, e.g. whether keystone has a plan to support pagination again. The direction in horizon totally depends on decisions in keystone. If it does not happen soon in keystone, horizon needs to explore a way to improve the performance without keystone pagination support. Thanks, Akihiro Motoki (amotoki) > > Stephen > > [1] https://docs.openstack.org/horizon/latest/contributor/contributing.html > [2] https://docs.openstack.org/keystone/latest/contributor/contributing.html > [3] https://opendev.org/openstack/keystone-specs > > > > > I am available on both the Openstack-horizon and Openstack-keystone IRC channels under the nickname sdrozdov, please feel free to reach out. > > Best Regards, > > Sergey Drozdov > > Software Engineer > > The Hut Group > > > > From radoslaw.piliszek at gmail.com Wed Jul 27 14:35:57 2022 From: radoslaw.piliszek at gmail.com (Radosław Piliszek) Date: Wed, 27 Jul 2022 16:35:57 +0200 Subject: [kolla][ops] Anyone using the collectd and/or telegraf? Message-ID: Hi Kolla-flavoured OpenStackers, Are any of you using collectd and/or telegraf with Kolla Ansible? The core team is looking to deprecate their support, as it is not tested and collectd is now gone from Ubuntu Jammy. Please reply to this mail. 
Cheers, Radek -yoctozepto From miguel at mlavalle.com Wed Jul 27 14:50:25 2022 From: miguel at mlavalle.com (Miguel Lavalle) Date: Wed, 27 Jul 2022 09:50:25 -0500 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Ignazio, You might find the following two presentations useful to understand what segments are and how they are used in routed networks: https://www.openstack.org/videos/summits/austin-2016/mapping-real-networks-to-physical-networks-segments-and-logical-networks-in-neutron https://www.openstack.org/videos/summits/barcelona-2016/scaling-up-openstack-networking-with-routed-networks And to summarize what you will find in those presentations: 1) A segment is a single L2 broadcast domain, be it a vlan or a vxlan or any other way to realize a L2 broadcast domain in the networking fabric. 2) A Neutron network can be created stitching together 1 or several segments. If after putting several segments together in a Neutron network they become a single L2 broadcast domain (i.e. they are stitched together via switching) then you have a multi-segment Neutron network. However .... 3) If the segments in a Neutron network are stitched together with L3 routers, then you have a routed provider network. In such networks, each segment is a separate L2 broadcast domain, which should provide higher levels of scalability 4) To better understand the terminology, you may also find it useful to understand the distinction between "provider networks" and "tenant networks". A provider network is one that was mapped explicitly at creation by a cloud admin to specific segments, most likely to achieve certain performance / scalability goals. A tenant network is one for which, at creation, Neutron assigned automatically a segment Best regards Miguel On Wed, Jul 27, 2022 at 3:01 AM Ignazio Cassano wrote: > Hello, thanks for your reply. > The segment id is the vlan id (in your example 101) ? 
> My understanding is that some compute nodes in a rack are connected to a > vlan, and other on another vlan. > Then I can create a network (segmentation1) and scheduler put the vm on > the compute node where vlan is present. > So for users exists only segmentaion1 network and they do not know it is > splitted in more vlans. > Is it correct ? > Ignazio > > Il giorno mer 27 lug 2022 alle ore 09:27 Lajos Katona < > katonalala at gmail.com> ha scritto: > >> Hi, >> I suppose you referenced this document: >> >> https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html >> >> In Neutron terminology segments appear on different layers, on the API a >> segment is a network type / seg. id / phys-net / net uuid tuple (see [1]). >> What is interesting here that this segment has to be a representation on >> the compute where l2-agent (ovs-agent) can know which segment is the one it >> can bind ports. >> That cfg option is in ml2_conf.ini, and bridge_mappings, where the >> admin/deployer can state which bridge (like br-ex) is connected to which >> provider network (out of Openstack's control). >> So for example a sample config in ml_conf.ini like this: >> >> bridge_mappings = public:br-ex,physnet1:br0 >> >> Means that on that compute VM ports can be bound which has a network >> segment like this: ( network_type: vlan, physical_network: *physnet1*, segmentation_id: >> 101, network_id: 1234-56..) >> More computes can have the same bridge-physnet mapping, the deployer's >> responsibility is to have these connected to the same switch, whatever. >> >> [1]: >> https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-segment-detail#segments >> >> Ignazio Cassano ezt ?rta (id?pont: 2022. j?l. >> 26., K, 21:04): >> >>> Hello All, I am reading documentation about routed provider network. >>> It reports: " >>> Routed provider networks imply that compute nodes reside on different >>> segments. " >>> >>> What does mean ? >>> What is a segment it this case ? 
>>> Thanks for helping me" >>> Ignazio >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ignaziocassano at gmail.com Wed Jul 27 15:22:39 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Wed, 27 Jul 2022 17:22:39 +0200 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Thanks, Miguel. Ignazio Il giorno mer 27 lug 2022 alle ore 16:50 Miguel Lavalle ha scritto: > Ignazio, > > You might find the following two presentations useful to understand what > segments are and how they are used in routed networks: > > > https://www.openstack.org/videos/summits/austin-2016/mapping-real-networks-to-physical-networks-segments-and-logical-networks-in-neutron > > https://www.openstack.org/videos/summits/barcelona-2016/scaling-up-openstack-networking-with-routed-networks > > And to summarize what you will find in those presentations: > > 1) A segment is a single L2 broadcast domain, be it a vlan or a vxlan or > any other way to realize a L2 broadcast domain in the networking fabric. > 2) A Neutron network can be created stitching together 1 or several > segments. If after putting several segments together in a Neutron network > they become a single L2 broadcast domain (i.e. they are stitched together > via switching) then you have a multi-segment Neutron network. However .... > 3) If the segments in a Neutron network are stitched together with L3 > routers, then you have a routed provider network. In such networks, each > segment is a separate L2 broadcast domain, which should provide higher > levels of scalability > 4) To better understand the terminology, you may also find it useful to > understand the distinction between "provider networks" and "tenant > networks". A provider network is one that was mapped explicitly at creation > by a cloud admin to specific segments, most likely to achieve certain > performance / scalability goals. 
A tenant network is one for which, at > creation, Neutron assigned automatically a segment > > Best regards > > Miguel > > On Wed, Jul 27, 2022 at 3:01 AM Ignazio Cassano > wrote: > >> Hello, thanks for your reply. >> The segment id is the vlan id (in your example 101) ? >> My understanding is that some compute nodes in a rack are connected to a >> vlan, and other on another vlan. >> Then I can create a network (segmentation1) and scheduler put the vm on >> the compute node where vlan is present. >> So for users exists only segmentaion1 network and they do not know it is >> splitted in more vlans. >> Is it correct ? >> Ignazio >> >> Il giorno mer 27 lug 2022 alle ore 09:27 Lajos Katona < >> katonalala at gmail.com> ha scritto: >> >>> Hi, >>> I suppose you referenced this document: >>> >>> https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html >>> >>> In Neutron terminology segments appear on different layers, on the API a >>> segment is a network type / seg. id / phys-net / net uuid tuple (see [1]). >>> What is interesting here that this segment has to be a representation on >>> the compute where l2-agent (ovs-agent) can know which segment is the one it >>> can bind ports. >>> That cfg option is in ml2_conf.ini, and bridge_mappings, where the >>> admin/deployer can state which bridge (like br-ex) is connected to which >>> provider network (out of Openstack's control). >>> So for example a sample config in ml_conf.ini like this: >>> >>> bridge_mappings = public:br-ex,physnet1:br0 >>> >>> Means that on that compute VM ports can be bound which has a network >>> segment like this: ( network_type: vlan, physical_network: *physnet1*, segmentation_id: >>> 101, network_id: 1234-56..) >>> More computes can have the same bridge-physnet mapping, the deployer's >>> responsibility is to have these connected to the same switch, whatever. 
>>> >>> [1]: >>> https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-segment-detail#segments >>> >>> Ignazio Cassano wrote (on Tue, Jul 26, 2022 at 9:04 PM): >>> >>>> Hello All, I am reading documentation about routed provider network. >>>> It reports: " >>>> Routed provider networks imply that compute nodes reside on different >>>> segments. " >>>> >>>> What does that mean? >>>> What is a segment in this case? >>>> Thanks for helping me" >>>> Ignazio >>>> >>> From mahendra.paipuri at cnrs.fr Wed Jul 27 15:32:00 2022 From: mahendra.paipuri at cnrs.fr (Mahendra Paipuri) Date: Wed, 27 Jul 2022 17:32:00 +0200 Subject: RAM and Storage requirements for Openstack cloud In-Reply-To: <20220727130256.4d5zrwhg2e7wxae6@yuggoth.org> References: <9fed5a49-0813-4ed2-0619-1640ff6e2164@debian.org> <20220727130256.4d5zrwhg2e7wxae6@yuggoth.org> Message-ID: <7f1e4c8a-a7c9-ef85-6a2b-fe0bb7a57dbe@cnrs.fr> Thanks a lot, Thomas and Jeremy. Given that we are new to the community, these are very useful inputs. I will look into the SIGs that are relevant to us. Cheers Mahendra On 27/07/2022 15:02, Jeremy Stanley wrote: > On 7/22/22 08:49, Mahendra Paipuri wrote: >> We are going to deploy an Openstack cloud with researchers as the primary >> target users. > [...] > > Aside from the technical recommendations, you might consider > participating in the Scientific SIG or (depending on the areas of > research you and your users are involved in) the Cloud Research SIG: > > https://governance.openstack.org/sigs/ > > If you're interested, you'll probably want to reach out to the > chairs at the addresses listed there, since their meetings may not be > held with as much regularity as the linked documents suggest. 
From levonmelikbekjan at yahoo.de Wed Jul 27 14:47:06 2022 From: levonmelikbekjan at yahoo.de (Levon Melikbekjan) Date: Wed, 27 Jul 2022 14:47:06 +0000 Subject: AW: Customization of scheduler manager In-Reply-To: <7a64ec3a3c847563f34556dc7eb16745489484f0.camel@redhat.com> References: <7a64ec3a3c847563f34556dc7eb16745489484f0.camel@redhat.com> Message-ID: Amazing! Thank you for the hint. The shelve function is exactly what I was looking for. I have already created a workflow architecture that describes the new functionality of your select_destination python function, which is located in manager.py. The manager.py can be found in the path "/usr/lib/python2.7/site-packages/nova/scheduler". My intention is to extend the manager.py script with a priority queue. The extension will automatically look for unused resources (the best match being the hosts with the most unused resources) to reallocate shelved VMs from the priority queue. The user is the only one who can delete his VMs completely. For me it is important not to lose the calculations that processes have performed on these VMs when the VMs are automatically shelved by my extension. The automated process knows whether a user is an owner and which hosts he owns, because the host aggregate id is always selected from the description attribute of the user object. If this field is empty, then the user is not an owner. This is how my process determines the priority status of a user. From: Sean Mooney Date: Tuesday, July 26, 2022 at 18:43 To: Levon Melikbekjan , openstack at lists.openstack.org Subject: Re: Customization of scheduler manager On Tue, 2022-07-26 at 15:59 +0000, Levon Melikbekjan wrote: > Hi all, > > as part of my thesis, I modified the OpenStack Train release with the intention of sharing resources that are not in use with other users, to ensure maximum utilization. So far everything is working fine except for the last step. Let me first explain my work. 
> > The system works according to the following rules: > > > 1. Users who own compute hosts within the private cloud have the highest priority on their hosts. > 2. Users who do not own hosts within the private cloud are low-priority users who can instantiate their virtual machines on unused resources (on the hosts that have an owner). > 3. If the owner wants to use his resources that are currently occupied, the foreign VM must be suspended to free up resources for the owner's VM. > 4. An owner is a low-priority user on foreign hosts. > > Everything works automatically and generically, but in step 3 I do not suspend those VMs, I delete them. I want the VMs to be suspended so that they can be restarted later, with the intention of being able to continue processes that are paused, and I know there is maybe a REST API function that provides this functionality. A user should be able to continue his work after resources become free again. It would be annoying if long-running processes were killed. > > My question is this: > Is the suspend function the right choice? Are the resources released when I use the suspend function? no, the resources are not released when you suspend. if i was to do this i would shelve the instance so that the user can unshelve it to a different host if needed. what you are describing is something we have previously considered, called preemptible instances, or spot instances to use aws terminology. shelve will preserve the vm's ports, volumes and root disk, creating a snapshot and storing it in glance. when the user wants to resume their low-priority instance they can unshelve it and it will go to a different host. note that due to how nova and placement work you can't share resources in nova the way you are trying to, because placement will still prevent the oversubscription, and in train placement is not optional. 
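The design discussed in this exchange — low-priority VMs shelved into a priority queue and later placed on the host with the most unused resources — could be prototyped around Python's heapq. This is a hypothetical, self-contained sketch with invented host and VM structures, not actual nova scheduler code:

```python
import heapq

# Invented capacity data; in a real scheduler this would come from
# placement / host state reports, not a hard-coded dict.
hosts = {"host1": {"free_vcpus": 4}, "host2": {"free_vcpus": 12}}

class ShelvedQueue:
    """Priority queue of shelved VMs waiting to be unshelved."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # insertion counter keeps FIFO order among equal priorities

    def push(self, vm, priority):
        # heapq is a min-heap: the lowest number pops first, so owners
        # could be enqueued with priority 0 and low-priority users with 1.
        heapq.heappush(self._heap, (priority, self._counter, vm))
        self._counter += 1

    def pop_and_place(self):
        """Pop the highest-priority VM and pick the emptiest host for it."""
        _, _, vm = heapq.heappop(self._heap)
        target = max(hosts, key=lambda h: hosts[h]["free_vcpus"])
        return vm, target

q = ShelvedQueue()
q.push("guest-vm", priority=1)
q.push("owner-vm", priority=0)
print(q.pop_and_place())  # ('owner-vm', 'host2')
```

The (priority, counter, vm) tuple ordering is a common heapq idiom: the counter breaks ties so two VMs with the same priority are dequeued in arrival order.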
so you will never exceed the overallocation ratio unless you have altered that by, say, setting it very high or by not creating allocations for the low-priority instances. > > Thank you & Best regards, > > Levon Melikbekjan From lokendrarathour at gmail.com Wed Jul 27 16:55:34 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Wed, 27 Jul 2022 22:25:34 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Team, I tried again with DNS enabled, but the error remains the same. tone_resources : Create identity public endpoint | undercloud | 0:24:59.456181 | 2.31s 2022-07-27 15:20:48.735838 | 5254006e-bbd1-cd20-647c-00000000736c | TASK | Create identity internal endpoint 2022-07-27 15:20:51.227000 | 5254006e-bbd1-cd20-647c-00000000736c | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://overcloud-public.myhsc.com:13000/v3/services, The request you have made requires authentication."} Checking further in the keystone logs in the container: 2022-07-27 19:35:37.447 33 WARNING keystone.server.flask.application [req-bb4621d8-73ad-4bad-831f-5c2370e92e71 - - - - -] Authorization failed. The request you have made requires authentication. from fd00:fd00:fd00:9900::29: keystone.exception.Unauthorized: The request you have made requires authentication. 
2022-07-27 19:35:37.998 26 WARNING py.warnings [req-54d44e3a-5e34-4e40-b2dc-e8213353ea05 ab5e9670632544f8a8c7e1b3ac175bcd e4185872cadb442aa9a59980b3227941 - default default] /usr/lib/python3.6/site-packages/oslo_policy/policy.py:1065: UserWarning: Policy identity:list_projects failed scope check. The token used to make the request was project scoped but the policy requires ['system', 'domain'] scope. This behavior may change in the future where using the intended scope is required. I am blocked now; any lead would help me understand the problem better and maybe solve the issue. Best Regards, Lokendra On Mon, Jul 25, 2022 at 3:12 PM Lokendra Rathour wrote: > Hi Brendan, > Apologies for the delay; I had to redo the setup to reach this point, > and this time, just to eliminate my doubt, I removed SSL for the overcloud. > Now I am only using the DNS server. In this case as well, I am getting the same > error. > > | 0:13:20.198877 | 1.86s > 2022-07-25 14:37:29.657118 | 525400a7-0932-2ed1-d313-000000007193 | > TASK | Create identity internal endpoint > 2022-07-25 14:37:31.995131 | 525400a7-0932-2ed1-d313-000000007193 | > FATAL | Create identity internal endpoint | undercloud | error={"changed": > false, "extra_data": {"data": null, "details": "The request you have made > requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list > services: Client Error for url: http://[fd00:fd00:fd00:9900::a0]:5000/v3/services, > The request you have made requires authentication."} > > > To answer your question, please note: > > "OS_CLOUD=overcloud openstack endpoint list" > > [root at GGNLABPM4 ~]# ssh stack at 10.0.1.29 > stack at 10.0.1.29's password: > Activate the web console with: systemctl enable --now cockpit.socket > > Last login: Mon Jul 25 14:38:44 2022 from 10.0.1.4 > [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list > > 
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > | ID | Region | Service Name | Service > Type | Enabled | Interface | URL | > > +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > | 1ecd328b5ea1426bb411d157b8339dd2 | regionOne | keystone | identity > | True | public | http://[fd00:fd00:fd00:9900::a0]:5000 | > | 518cfa0f2ece43b684710006c9fa5b25 | regionOne | keystone | identity > | True | admin | http://30.30.30.181:35357 | > | 8cda413052c24718b073578bb497f483 | regionOne | keystone | identity > | True | internal | http://[fd00:fd00:fd00:2000::a0]:5000 | > > +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > [stack at undercloud ~]$ > > > it is giving us only keystone endpoints. > > Also note that I am trying to deploy the end to end setup with FQDN only. > and in this case as well I am facing the same issue as old. > > thanks once again for your inputs. > > -Lokendra > > > > On Wed, Jul 20, 2022 at 3:07 PM Brendan Shephard > wrote: > >> Hey, >> >> I think it's weird that you got a response at all when you run the >> openstack endpoint list, since you said haproxy isn't running. So there >> should be nothing serving that endpoint. >> >> I noticed you have the stackrc file sourced. Try it again without that >> file sourced, so: >> $ su - stack >> $ OS_CLOUD=overcloud openstack endpoint list >> >> I would suspect that nothing should be responding. It could be the >> stackrc file causing issues with some of the environment variables. 
If the >> above command doesn't return anything, then my suggestion would be to >> re-run the deployment like this: >> >> $ su - stack >> $ export OS_CLOUD=undercloud >> # Then run your deployment script again >> $ bash overcloud_deploy.sh >> >> The OS_CLOUD variable tells the openstackclient to look up the details >> about that cloud from your clouds.yaml file, which will be located in >> /home/stack/.config/openstack/clouds.yaml. >> >> This method is preferable to sourcing RC files. >> >> Reference: >> >> https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config.html >> >> Regarding the HAProxy warnings: I don't think they should be fatal. >> afaik, HAProxy should still be starting. If it's not, there might be >> another error that you will need to look for in the log files under >> /var/log/containers/haproxy/ >> >> I wasn't able to reproduce that warning by following the documentation >> for enabling TLS, though. So it seems like an odd error to be getting. >> >> Brendan Shephard >> >> Software Engineer >> >> Red Hat APAC >> >> 193 N Quay >> >> Brisbane City QLD 4000 >> >> >> >> >> >> On Wed, Jul 20, 2022 at 7:02 PM Lokendra Rathour < >> lokendrarathour at gmail.com> wrote: >> >>> Hi Brendan / Team, >>> Any lead on the issue raised? >>> >>> -Lokendra >>> >>> >>> >>> On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Brendan, >>>> Thanks for the inputs. 
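The OS_CLOUD lookup Brendan describes can be illustrated with a small sketch: the client simply picks the entry named by OS_CLOUD out of clouds.yaml. The data and helper below are illustrative stand-ins (a plain dict in place of the parsed YAML), not the actual openstackclient code:

```python
import os

# Stand-in for the parsed /home/stack/.config/openstack/clouds.yaml.
clouds_yaml = {
    "clouds": {
        "undercloud": {"auth": {"auth_url": "https://undercloud.example:13000"}},
        "overcloud": {"auth": {"auth_url": "https://overcloud.example:13000"}},
    }
}

def select_cloud(config, env=None):
    """Pick the cloud entry named by OS_CLOUD, as openstackclient would."""
    env = os.environ if env is None else env
    name = env.get("OS_CLOUD")
    if name is None:
        raise RuntimeError("set OS_CLOUD (or pass --os-cloud) to choose a cloud")
    return config["clouds"][name]

cloud = select_cloud(clouds_yaml, env={"OS_CLOUD": "overcloud"})
print(cloud["auth"]["auth_url"])  # https://overcloud.example:13000
```

Unlike sourcing stackrc, nothing lingers in the shell environment beyond the single OS_CLOUD name, which is why this method is less error-prone.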
>>>> when i run the command as you suggested I get this: >>>> >>>> (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack >>>> endpoint list >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>> | ID | Region | Service Name | Service >>>> Type | Enabled | Interface | URL | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>> | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | >>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>> | >>>> | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | >>>> identity | True | admin | http://30.30.30.173:35357 >>>> | >>>> | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | >>>> identity | True | public | https://overcloud-hsc.com:13000 >>>> | >>>> >>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>> (undercloud) [stack at undercloud ~]$ >>>> >>>> >>>> On the other note that i notices was as below: >>>> >>>> - HAproxy container is not running. >>>> - [root at overcloud-controller-2 stdouts]# podman ps -a | grep >>>> haproxy >>>> e91dbde042db >>>> undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo >>>> 24 hours ago Exited (1) Less than a >>>> second ago container-puppet-haproxy\ >>>> - Checking logs: >>>> - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= >>>> 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] >>>> 2022-07-19T08:47:00.496323705+05:30 stderr F + . 
>>>> kolla_extend_start >>>> 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running >>>> command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper >>>> ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; >>>> else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' >>>> 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: >>>> 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec >>>> /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec >>>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' >>>> 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' >>>> -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec >>>> /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec >>>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi >>>> 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind >>>> fd00:fd00:fd00:9900::81:13776' : >>>> 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> automatically2022-07-19T08:47:00.513967576+05:30 stderr F >>>> [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind >>>> fd00:fd00:fd00:9900::81:13292' : >>>> 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. 
>>>> 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind >>>> fd00:fd00:fd00:9900::81:13004' : >>>> 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind >>>> fd00:fd00:fd00:9900::81:13005' : >>>> 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind >>>> fd00:fd00:fd00:2000::326:443' : >>>> - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind >>>> fd00:fd00:fd00:9900::81:13000' : >>>> 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind >>>> fd00:fd00:fd00:9900::81:13696' : >>>> 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. 
>>>> 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind >>>> fd00:fd00:fd00:9900::81:13080' : >>>> 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind >>>> fd00:fd00:fd00:9900::81:13774' : >>>> 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind >>>> fd00:fd00:fd00:9900::81:13778' : >>>> 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. >>>> 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] >>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind >>>> fd00:fd00:fd00:9900::81:13808' : >>>> 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load >>>> default 1024 bits DH parameter for certificate >>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>> 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library >>>> will use an automatically generated DH parameter. 
>>>> 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] >>>> 199/084700 (7) : Setting tune.ssl.default-dh-param to 1024 by default, if >>>> your workload permits it you should set it to at least 2048. Please set a >>>> value >= 1024 to make this warning disappear. >>>> - pcs status also show that proxy is down for the controller >>>> with VIP: >>>> - Failed Resource Actions: >>>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 >>>> 'error' (1): call=139, status='complete', exitreason='podman failed to >>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', >>>> queued=0ms, exec=1222ms >>>> * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 >>>> 'error' (1): call=191, status='complete', exitreason='podman failed to >>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', >>>> queued=0ms, exec=1171ms >>>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 >>>> 'error' (1): call=193, status='complete', exitreason='podman failed to >>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', >>>> queued=0ms, exec=1256ms >>>> >>>> do let me know in case we need anything more around it. >>>> thanks once again for the support. >>>> -Lokendra >>>> >>>> On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard >>>> wrote: >>>> >>>>> Hey, >>>>> >>>>> Doesn't look like there is anything wrong with the certificate there. >>>>> You would be getting a TLS error if that was the problem. >>>>> >>>>> What does your clouds.yaml file look like now? What happens if you run >>>>> this command from the Undercloud node: >>>>> $ OS_CLOUD=overcloud openstack endpoint list >>>>> >>>>> Do you get the same error? 
>>>>> >>>>> Brendan Shephard >>>>> >>>>> Software Engineer >>>>> >>>>> Red Hat APAC >>>>> >>>>> 193 N Quay >>>>> >>>>> Brisbane City QLD 4000 >>>>> @RedHat Red Hat >>>>> Red Hat >>>>> >>>>> >>>>> >>>>> >>>>> >>>>> On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour < >>>>> lokendrarathour at gmail.com> wrote: >>>>> >>>>>> Hi Swogat and Vikarna, >>>>>> We have tried adding the DNS entry for the overcloud domain. we are >>>>>> getting the same error: >>>>>> >>>>>> 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | >>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>> undercloud | 0:11:18.785769 | 2.16s >>>>>> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | >>>>>> TASK | Create identity internal endpoint >>>>>> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | >>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>> request you have made requires authentication.", "response": >>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>> services: Client Error for url: >>>>>> https://overcloud-hsc.com:13000/v3/services, The request you have >>>>>> made requires authentication."} >>>>>> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | >>>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint | >>>>>> undercloud | 0:11:21.074605 | 2. 
>>>>>> >>>>>> >>>>>> Certificate configs: >>>>>> >>>>>> [stack at undercloud oc-domain-name]$ cat server.csr.cnf >>>>>> [req] >>>>>> default_bits = 2048 >>>>>> prompt = no >>>>>> default_md = sha256 >>>>>> distinguished_name = dn >>>>>> [dn] >>>>>> C=IN >>>>>> ST=UTTAR PRADESH >>>>>> L=NOIDA >>>>>> O=HSC >>>>>> OU=HSC >>>>>> emailAddress=demo at demo.com >>>>>> CN=overcloud-hsc.com >>>>>> [stack at undercloud oc-domain-name]$ cat v3.ext >>>>>> authorityKeyIdentifier=keyid,issuer >>>>>> basicConstraints=CA:FALSE >>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>> dataEncipherment >>>>>> subjectAltName = @alt_names >>>>>> [alt_names] >>>>>> DNS.1=overcloud-hsc.com >>>>>> [stack at undercloud oc-domain-name]$ >>>>>> >>>>>> the difference we see from others is that we are using self-signed >>>>>> certificates. >>>>>> >>>>>> please let me know in case we need to check something else. Somehow >>>>>> this issue remains stuck. >>>>>> >>>>>> >>>>>> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan < >>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>> >>>>>>> I was facing a similar kind of issue. >>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 >>>>>>> Here is the solution that helped me fix it. >>>>>>> Also make sure the cn that you will use is reachable from undercloud >>>>>>> (maybe) script should take care of it. >>>>>>> >>>>>>> Also please follow Mr. Tathe's mail to add the cn first. >>>>>>> >>>>>>> With regards >>>>>>> Swogat Pradhan >>>>>>> >>>>>>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe < >>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>> >>>>>>>> Hi Lokendra, >>>>>>>> >>>>>>>> The CN field is missing. Can you add that and generate the >>>>>>>> certificate again. >>>>>>>> >>>>>>>> CN=ipaddress >>>>>>>> >>>>>>>> Also add dns.1=ipaddress under alt_names for precaution. 
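Vikarna's point — the CSR config needs a CN, and the same name should appear under alt_names — can be sanity-checked mechanically before regenerating the certificate. A hedged sketch (the config text and helper are illustrative, modeled on the server.csr.cnf shown above; real validation should be done with openssl against the generated certificate):

```python
import configparser

# Illustrative CSR config in the same shape as the server.csr.cnf above.
CSR_CNF = """
[req]
distinguished_name = dn
[dn]
CN = overcloud-hsc.com
[alt_names]
DNS.1 = overcloud-hsc.com
"""

def cn_listed_in_san(text):
    """True if the [dn] CN is present and also appears under [alt_names]."""
    cp = configparser.ConfigParser()
    cp.read_string(text)
    cn = cp["dn"].get("CN")  # option lookup is case-insensitive
    return bool(cn) and cn in set(cp["alt_names"].values())

print(cn_listed_in_san(CSR_CNF))  # True
```

A mismatch here (CN present but missing from subjectAltName, or vice versa) is exactly the kind of problem that makes clients reject the endpoint hostname.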
>>>>>>>> >>>>>>>> Vikarna >>>>>>>> >>>>>>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, < >>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Vikarna, >>>>>>>>> Thanks for the inputs. >>>>>>>>> I am not able to access any tabs in the GUI. >>>>>>>>> [image: image.png] >>>>>>>>> >>>>>>>>> To re-state, we are failing at the time of deployment at step 4: >>>>>>>>> >>>>>>>>> >>>>>>>>> PLAY [External deployment step 4] >>>>>>>>> ********************************************** >>>>>>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 >>>>>>>>> | TASK | External deployment step 4 >>>>>>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 >>>>>>>>> | OK | External deployment step 4 | undercloud -> localhost | >>>>>>>>> result={ >>>>>>>>> "changed": false, >>>>>>>>> "msg": "Use --start-at-task 'External deployment step 4' to >>>>>>>>> resume from this task" >>>>>>>>> } >>>>>>>>> [WARNING]: ('undercloud -> localhost', >>>>>>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>>>>>> missing from stats >>>>>>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 >>>>>>>>> | TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>>>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 >>>>>>>>> | INCLUDED | >>>>>>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>>>>>> | undercloud >>>>>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | TASK | Clean up legacy Cinder keystone catalog entries >>>>>>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | OK | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>>>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | 0:11:24.204562 | 2.48s 
>>>>>>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | OK | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>>>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | 0:11:26.122584 | 4.40s >>>>>>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>> | 0:11:26.124296 | 4.40s >>>>>>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c >>>>>>>>> | TASK | Manage Keystone resources for OpenStack services >>>>>>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c >>>>>>>>> | TIMING | Manage Keystone resources for OpenStack services | >>>>>>>>> undercloud | 0:11:26.169842 | 0.03s >>>>>>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b >>>>>>>>> | TASK | Gather variables for each operating system >>>>>>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b >>>>>>>>> | TIMING | tripleo_keystone_resources : Gather variables for each >>>>>>>>> operating system | undercloud | 0:11:26.253383 | 0.04s >>>>>>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c >>>>>>>>> | TASK | Create Keystone Admin resources >>>>>>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c >>>>>>>>> | TIMING | tripleo_keystone_resources : Create Keystone Admin resources >>>>>>>>> | undercloud | 0:11:26.299608 | 0.03s >>>>>>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 >>>>>>>>> | INCLUDED | >>>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>>>>>> undercloud >>>>>>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad >>>>>>>>> | TASK | Create default domain >>>>>>>>> 2022-07-13 21:35:29.343399 | 
525400ae-089b-870a-fab6-0000000072ad >>>>>>>>> | OK | Create default domain | undercloud >>>>>>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad >>>>>>>>> | TIMING | tripleo_keystone_resources : Create default domain | >>>>>>>>> undercloud | 0:11:28.437360 | 2.09s >>>>>>>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae >>>>>>>>> | TASK | Create admin and service projects >>>>>>>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae >>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin and service >>>>>>>>> projects | undercloud | 0:11:28.483468 | 0.03s >>>>>>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 >>>>>>>>> | INCLUDED | >>>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>>>>>> undercloud >>>>>>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | TASK | Async creation of Keystone project >>>>>>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>>>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>> project | undercloud | 0:11:29.238078 | 0.72s >>>>>>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | CHANGED | Async creation of Keystone project | undercloud | >>>>>>>>> item=service >>>>>>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>> project | undercloud | 0:11:29.586587 | 1.06s >>>>>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>> project | undercloud | 0:11:29.587916 | 1.07s >>>>>>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | 
TASK | Check Keystone project status >>>>>>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | WAITING | Check Keystone project status | undercloud | 30 retries left >>>>>>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | OK | Check Keystone project status | undercloud | item=admin >>>>>>>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>> undercloud | 0:11:35.260666 | 5.66s >>>>>>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | OK | Check Keystone project status | undercloud | item=service >>>>>>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>> undercloud | 0:11:35.494729 | 5.89s >>>>>>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>> undercloud | 0:11:35.498771 | 5.89s >>>>>>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>> | TASK | Create admin role >>>>>>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>> | OK | Create admin role | undercloud >>>>>>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin role | undercloud >>>>>>>>> | 0:11:37.725949 | 2.20s >>>>>>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>> | TASK | Create _member_ role >>>>>>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>> | SKIPPED | Create _member_ role | undercloud >>>>>>>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>> | TIMING | tripleo_keystone_resources : Create _member_ role | >>>>>>>>> undercloud | 0:11:37.783369 | 0.04s >>>>>>>>> 
2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>> | TASK | Create admin user >>>>>>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>> | CHANGED | Create admin user | undercloud >>>>>>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin user | undercloud >>>>>>>>> | 0:11:41.145472 | 3.34s >>>>>>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>> | TASK | Assign admin role to admin project for admin user >>>>>>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>> | OK | Assign admin role to admin project for admin user | >>>>>>>>> undercloud >>>>>>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>> | TIMING | tripleo_keystone_resources : Assign admin role to admin >>>>>>>>> project for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>>>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>> | TASK | Assign _member_ role to admin project for admin user >>>>>>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>> | SKIPPED | Assign _member_ role to admin project for admin user | >>>>>>>>> undercloud >>>>>>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>> | TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>>>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>>>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>> | TASK | Create identity service >>>>>>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>> | OK | Create identity service | undercloud >>>>>>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>> | TIMING | tripleo_keystone_resources : Create identity service | >>>>>>>>> undercloud | 0:11:46.022362 | 1.66s >>>>>>>>> 2022-07-13 21:35:46.946797 | 
525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>> | TASK | Create identity public endpoint >>>>>>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>> | OK | Create identity public endpoint | undercloud >>>>>>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>> | TIMING | tripleo_keystone_resources : Create identity public endpoint >>>>>>>>> | undercloud | 0:11:48.233349 | 2.19s >>>>>>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>> | TASK | Create identity internal endpoint >>>>>>>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>> | FATAL | Create identity internal endpoint | undercloud | >>>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>>> request you have made requires authentication.", "response": >>>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>>> The request you have made requires authentication."} >>>>>>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>> | TIMING | tripleo_keystone_resources : Create identity internal >>>>>>>>> endpoint | undercloud | 0:11:50.660654 | 2.41s >>>>>>>>> >>>>>>>>> PLAY RECAP >>>>>>>>> ********************************************************************* >>>>>>>>> localhost : ok=1 changed=0 unreachable=0 >>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>> overcloud-novacompute-0 : 
ok=345 changed=82 unreachable=0 >>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>> undercloud : ok=39 changed=7 unreachable=0 >>>>>>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>>>>>> >>>>>>>>> Also : >>>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>>>>>> [req] >>>>>>>>> default_bits = 2048 >>>>>>>>> prompt = no >>>>>>>>> default_md = sha256 >>>>>>>>> distinguished_name = dn >>>>>>>>> [dn] >>>>>>>>> C=IN >>>>>>>>> ST=UTTAR PRADESH >>>>>>>>> L=NOIDA >>>>>>>>> O=HSC >>>>>>>>> OU=HSC >>>>>>>>> emailAddress=demo at demo.com >>>>>>>>> >>>>>>>>> v3.ext: >>>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>>>>>> authorityKeyIdentifier=keyid,issuer >>>>>>>>> basicConstraints=CA:FALSE >>>>>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>>>>> dataEncipherment >>>>>>>>> subjectAltName = @alt_names >>>>>>>>> [alt_names] >>>>>>>>> IP.1=fd00:fd00:fd00:9900::81 >>>>>>>>> >>>>>>>>> Using these files we create other certificates. >>>>>>>>> Please check and let me know in case we need anything else. >>>>>>>>> >>>>>>>>> >>>>>>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe < >>>>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Lokendra, >>>>>>>>>> >>>>>>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>>>>>> share the openssl.cnf or server.cnf. >>>>>>>>>> >>>>>>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Team, >>>>>>>>>>> Any input on this case raised. 
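For reference, the server.csr.cnf and v3.ext files quoted above are typically consumed in an OpenSSL flow along these lines (a sketch: the DN and SAN values are copied from the files in this thread; the root-CA and output file names are placeholders, not taken from the deployment):

```shell
# Sketch only: DN/SAN values come from the server.csr.cnf and v3.ext quoted
# above; rootCA/server file names are placeholders.
cat > server.csr.cnf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = dn
[dn]
C            = IN
ST           = UTTAR PRADESH
L            = NOIDA
O            = HSC
OU           = HSC
emailAddress = demo@demo.com
EOF

cat > v3.ext <<'EOF'
authorityKeyIdentifier = keyid,issuer
basicConstraints       = CA:FALSE
keyUsage               = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName         = @alt_names
[alt_names]
IP.1 = fd00:fd00:fd00:9900::81
EOF

# A throwaway root CA (placeholder; an existing CA could be used instead).
openssl req -x509 -new -nodes -newkey rsa:2048 -sha256 -days 365 \
    -keyout rootCA.key -out rootCA.pem \
    -subj "/C=IN/ST=UTTAR PRADESH/L=NOIDA/O=HSC/CN=local-test-ca"

# Key + CSR from the request config, then a cert carrying the v3 extensions.
openssl req -new -nodes -newkey rsa:2048 \
    -keyout server.key -out server.csr -config server.csr.cnf
openssl x509 -req -in server.csr -CA rootCA.pem -CAkey rootCA.key \
    -CAcreateserial -days 365 -sha256 -extfile v3.ext -out server.crt

# The SAN must list the IP the endpoint is actually reached on:
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
```

If the endpoint is reached via any other IP or an FQDN, that value has to appear under [alt_names] as well; otherwise clients fail hostname verification, as in the tracebacks in this thread.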
>>>>>>>>>>> >>>>>>>>>>> Thanks, >>>>>>>>>>> Lokendra >>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Shephard/Swogat, >>>>>>>>>>>> I tried changing the setting as suggested and it looks like it >>>>>>>>>>>> has failed at step 4 with error: >>>>>>>>>>>> >>>>>>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | >>>>>>>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>>>>>>> undercloud | 0:24:47.736198 | 2.21s >>>>>>>>>>>> 2022-07-12 21:31:32.185594 | >>>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity >>>>>>>>>>>> internal endpoint >>>>>>>>>>>> 2022-07-12 21:31:34.468996 | >>>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity >>>>>>>>>>>> internal endpoint | undercloud | error={"changed": false, "extra_data": >>>>>>>>>>>> {"data": null, "details": "The request you have made requires >>>>>>>>>>>> authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The >>>>>>>>>>>> request you have made requires >>>>>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>>>>> services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, >>>>>>>>>>>> The request you have made requires authentication."} >>>>>>>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> Checking further the endpoint list: >>>>>>>>>>>> I see only one endpoint for keystone is gettin created. 
>>>>>>>>>>>> >>>>>>>>>>>> DeprecationWarning >>>>>>>>>>>> >>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>> | ID | Region | Service Name | >>>>>>>>>>>> Service Type | Enabled | Interface | URL >>>>>>>>>>>> | >>>>>>>>>>>> >>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | >>>>>>>>>>>> identity | True | admin | http://30.30.30.173:35357 >>>>>>>>>>>> | >>>>>>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | >>>>>>>>>>>> identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 >>>>>>>>>>>> | >>>>>>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | >>>>>>>>>>>> identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 >>>>>>>>>>>> | >>>>>>>>>>>> >>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> it looks like something related to the SSL, we have also >>>>>>>>>>>> verified that the GUI login screen shows that Certificates are applied. >>>>>>>>>>>> exploring more in logs, meanwhile any suggestions or know >>>>>>>>>>>> observation would be of great help. >>>>>>>>>>>> thanks again for the support. 
>>>>>>>>>>>> >>>>>>>>>>>> Best Regards, >>>>>>>>>>>> Lokendra >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> I had faced a similar kind of issue: for an IP-based setup you >>>>>>>>>>>>> need to specify the domain name as the IP that you are going to use. This >>>>>>>>>>>>> error is showing up because the SSL cert is IP-based but the FQDN seems to >>>>>>>>>>>>> be undercloud.com or overcloud.example.com. >>>>>>>>>>>>> I think for the undercloud you can change undercloud.conf. >>>>>>>>>>>>> >>>>>>>>>>>>> And will it work if we specify the CloudDomain parameter as the >>>>>>>>>>>>> IP address for the overcloud? Because it seems he has not specified the >>>>>>>>>>>>> CloudDomain parameter, and overcloud.example.com is the >>>>>>>>>>>>> default domain for the overcloud. >>>>>>>>>>>>> >>>>>>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> What is the domain name you have specified in the >>>>>>>>>>>>>> undercloud.conf file? >>>>>>>>>>>>>> And what is the FQDN used for the generation of the SSL >>>>>>>>>>>>>> cert? 
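For context, the "clouddomain parameter" mentioned above maps to TripleO Heat parameters set in an environment file; a sketch with illustrative values (not taken from this deployment):

```yaml
# Illustrative values only. For an IP-based certificate, whatever name or IP
# clients use to reach the public endpoint must match the certificate's SAN.
parameter_defaults:
  CloudDomain: example.com          # domain suffix applied to overcloud nodes
  CloudName: overcloud.example.com  # FQDN of the public endpoint
```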
>>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>> We were trying to install overcloud with SSL enabled for >>>>>>>>>>>>>>> which the UC is installed, but OC install is getting failed at step 4: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> ERROR >>>>>>>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): >>>>>>>>>>>>>>> Max retries exceeded with url: / (Caused by >>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", >>>>>>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>>> 2022-07-08 17:03:23.606739 | >>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder >>>>>>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3', >>>>>>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var": >>>>>>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false, >>>>>>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3", >>>>>>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available >>>>>>>>>>>>>>> identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
>>>>>>>>>>>>>>> Attempting to parse version from URL.\nTraceback (most recent call last):\n >>>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line >>>>>>>>>>>>>>> 600, in urlopen\n chunked=chunked)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>>>>>> server_hostname)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>> 
SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, 
>>>>>>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>>>>>> in _send_request\n raise >>>>>>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>>>>>> exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>>>>>> run_globals)\n File >>>>>>>>>>>>>>> 
\"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>> line 185, in \n File >>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>> line 181, in main\n File >>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>> line 141, in run\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>>>>>> File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in 
>>>>>>>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. >>>>>>>>>>>>>>> %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not >>>>>>>>>>>>>>> find versioned identity endpoints when attempting to authenticate. Please >>>>>>>>>>>>>>> check that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": >>>>>>>>>>>>>>> "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> PLAY RECAP >>>>>>>>>>>>>>> ********************************************************************* >>>>>>>>>>>>>>> localhost : ok=0 changed=0 >>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 >>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 >>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 >>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 >>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>>>>>> undercloud : ok=28 changed=7 >>>>>>>>>>>>>>> unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>>>>>> 2022-07-08 17:03:23.647270 | >>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information >>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>>> 2022-07-08 17:03:23.647907 | >>>>>>>>>>>>>>> 
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 >>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> in the deploy.sh: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>> --networks-file >>>>>>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>>>>>> --baremetal-deployment >>>>>>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>>>>>> --network-config \ >>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>>>>>>>>> modifications: >>>>>>>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # >>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>> # This file was created automatically by the sample >>>>>>>>>>>>>>> environment >>>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>>>> update it. >>>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the >>>>>>>>>>>>>>> file instead >>>>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>>>> # >>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>>>>>> # description: | >>>>>>>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>>>>>>> deployments. >>>>>>>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>>>>>> # environments must also be used. >>>>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon >>>>>>>>>>>>>>> # Type: boolean >>>>>>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>>>>>> services in the public network. >>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>>>>>> format. 
>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> SSLCertificate: | >>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>> # The content of an SSL intermediate CA certificate in PEM >>>>>>>>>>>>>>> format. >>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> SSLIntermediateCertificate: '' >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> SSLKey: | >>>>>>>>>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>> -----END PRIVATE KEY----- >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>>>>> # Static parameters - these are values that must be >>>>>>>>>>>>>>> # included in the environment but should not be changed. >>>>>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>>>>> # The filepath of the certificate as it will be stored in >>>>>>>>>>>>>>> the controller. >>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> DeployedSSLCertificatePath: >>>>>>>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # ********************* >>>>>>>>>>>>>>> # End static parameters >>>>>>>>>>>>>>> # ********************* >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> inject-trust-anchor.yaml >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> # >>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>> # This file was created automatically by the sample >>>>>>>>>>>>>>> environment >>>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>>>> update it. 
>>>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the >>>>>>>>>>>>>>> file instead >>>>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>>>> # >>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>>>>>>>>> # description: | >>>>>>>>>>>>>>> # When using an SSL certificate signed by a CA that is not >>>>>>>>>>>>>>> in the default >>>>>>>>>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>>>>>>>>> certificate to >>>>>>>>>>>>>>> # the overcloud nodes. >>>>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>>>> # The content of a CA's SSL certificate file in PEM >>>>>>>>>>>>>>> format. This is evaluated on the client side. >>>>>>>>>>>>>>> # Mandatory. This parameter must be set by the user. >>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> resource_registry: >>>>>>>>>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The procedure to create such files was followed using: >>>>>>>>>>>>>>> "Deploying with SSL" in the TripleO 3.0.0 documentation >>>>>>>>>>>>>>> (openstack.org) >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> The idea is to deploy the overcloud with SSL enabled, i.e. a >>>>>>>>>>>>>>> self-signed, IP-based certificate without DNS. >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> Any idea around this error would be of great help. 
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>> skype: lokendrarathour >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>>> -- >>>>>>>>>>> >>>>>>>>>> >>>>>>>>> >>>>>>>>> -- >>>>>>>>> ~ Lokendra >>>>>>>>> skype: lokendrarathour >>>>>>>>> >>>>>>>>> >>>>>>>>> >>>>>> >>>>>> -- >>>>>> ~ Lokendra >>>>>> skype: lokendrarathour >>>>>> >>>>>> >>>>>> >>>> >>>> -- >>>> ~ Lokendra >>>> skype: lokendrarathour >>>> >>>> >>>> >>> >>> -- >>> ~ Lokendra >>> skype: lokendrarathour >>> >>> >>> > > -- > ~ Lokendra > skype: lokendrarathour > > > -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 81010 bytes Desc: not available URL: From mmilan2006 at gmail.com Wed Jul 27 17:34:42 2022 From: mmilan2006 at gmail.com (Vaibhav) Date: Wed, 27 Jul 2022 23:04:42 +0530 Subject: Manila/NFS support for Zun In-Reply-To: References: Message-ID: Dear Hongbin, I am able to successfully do the following. It allows me to mount the NFS volume. But there is no way to pass this information from an openstack client. Is there any way to send this information from openstack client to docker? Is there any hook or plugin available which can be called before starting the container? 
I can use this as a workaround until a permanent solution is available:

export NFS_VOL_NAME=mynfs
export NFS_LOCAL_MNT=/mnt/mynfs
export NFS_SERVER=my.nfs.server.com
export NFS_SHARE=/my/server/path
export NFS_OPTS=vers=4,soft

docker run --mount \
  "src=$NFS_VOL_NAME,dst=$NFS_LOCAL_MNT,volume-opt=device=:$NFS_SHARE,\"volume-opt=o=addr=$NFS_SERVER,$NFS_OPTS\",type=volume,volume-driver=local,volume-opt=type=nfs" \
  busybox ls $NFS_LOCAL_MNT

Alternatively, you can create the volume before the container:

docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=$NFS_SERVER,$NFS_OPTS \
  --opt device=:$NFS_SHARE \
  $NFS_VOL_NAME
docker run --rm -v $NFS_VOL_NAME:$NFS_LOCAL_MNT busybox ls $NFS_LOCAL_MNT

On Thu, Jul 21, 2022 at 10:37 PM Vaibhav wrote: > Thank you for your response. > > > On Thu, Jul 21, 2022 at 1:25 AM Carlos Silva > wrote: > >> Hello, sorry for the late reply >> >> On Sat, 16 Jul 2022 at 13:28, Vaibhav >> wrote: >> >>> Hi, >>> >>> I want to mount my Manila shares on containers managed by Zun. >>> >>> I can see a Fuxi project and driver for this but it is discontinued now. >>> >>> I want to have a shared file system to be mounted on multiple containers >>> simultaneously; it is not possible with Cinder. >>> >>> Is there any alternative to Fuxi? >>> >> I can't think of many by looking at the use case. Isn't there anything on >> Zun itself that allows the shares to be mounted directly in the containers? >> > No, Zun only gives the option of Cinder volumes to be mounted. With Cinder I > am not able to have a shared file system among the containers, which I want > to have. > >> Or can it work with the Yoga release? >>> >>> I would not say so, as the project is no longer maintained and the >> latest commit is from 2017. >> >>> Please advise and give a suggestion. >>> >>> Regards, >>> Vaibhav >>> >>> -------------- next part -------------- An HTML attachment was scrubbed... 
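As an aside, the "docker volume create" variant quoted above has a declarative equivalent; a Compose-file sketch using the same placeholder server/share values (this covers the Docker side only; Zun itself exposes no interface for passing these options, which is the point of the question):

```yaml
# Sketch: the same NFS-backed local-driver volume as the docker volume create
# command above, declared in a compose file. Values are the placeholders
# from the mail, not a real NFS server.
services:
  app:
    image: busybox
    command: ls /mnt/mynfs
    volumes:
      - mynfs:/mnt/mynfs

volumes:
  mynfs:
    driver: local
    driver_opts:
      type: nfs
      o: "addr=my.nfs.server.com,vers=4,soft"
      device: ":/my/server/path"
```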
From p.aminian.server at gmail.com  Thu Jul 28 04:48:14 2022
From: p.aminian.server at gmail.com (Parsa Aminian)
Date: Thu, 28 Jul 2022 09:18:14 +0430
Subject: openstack router with flat
In-Reply-To: <2271172.GsbXASJkJ2@p1>
References: <2271172.GsbXASJkJ2@p1>
Message-ID:

Thanks for your reply.
Yes, I need a public (external) IP address directly in my instance, but
through a router. I need a router because it adds many features, including
High Availability. If I don't use a router, how can I have HA on my network?

On Wed, Jul 27, 2022 at 1:58 PM Slawek Kaplonski wrote:

> Hi,
>
> On Wednesday, 27 July 2022 at 09:55:12 CEST, Parsa Aminian wrote:
> > hello
> > is it possible to assign a valid ip to an openstack router directly
> > without nat? As I know, openstack needs an invalid fixed ip when using
> > a router, and a valid ip should be added as a floating ip.
>
> I don't understand what You mean by "invalid" and "valid" IP address.
> Can You explain it?
> If You want to have a public (external) IP address directly in Your
> instance, You can plug Your instance directly into the provider network,
> without using a neutron router at all.
>
> --
> Slawek Kaplonski
> Principal Software Engineer
> Red Hat

From skaplons at redhat.com  Thu Jul 28 07:06:51 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 28 Jul 2022 09:06:51 +0200
Subject: openstack router with flat
In-Reply-To:
References: <2271172.GsbXASJkJ2@p1>
Message-ID: <5745346.JktxKh7kmM@p1>

Hi,

On Thursday, 28 July 2022 at 06:48:14 CEST, Parsa Aminian wrote:
> thanks for your reply
> yes I need public IP address (external) directly in my instance but with
> router .

You can have a provider network with public IPs in it and connect it to the
router as any other tenant network.
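As a hedged sketch of that approach (network, router, and port names below are placeholders, not taken from the thread), attaching a provider network to a router through a pre-created port could look like this, run against your own cloud:

```shell
# Placeholder names; assumes a provider network "provider-net" already exists.
# Creating the port first lets you choose the address the router will use,
# rather than letting Neutron allocate the subnet's gateway_ip to it.
openstack port create --network provider-net \
  --fixed-ip ip-address=203.0.113.50 provider-uplink

openstack router create router1
openstack router add port router1 provider-uplink
```

These commands require a live cloud and appropriate credentials; the fixed IP shown is a documentation address.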
That should work, but please keep in mind that by default it will allocate
the network's "gateway_ip" address to that router port, and this may break
Your network if that gateway IP is already configured on some external
device of Yours. To avoid that, You can first create a port in that network
and then plug that port into the router You want.

> I need a router because it adds many features including High
> Availability
> If I don't use a router how can I have HA on my network ?

What kind of HA are You asking for?

> On Wed, Jul 27, 2022 at 1:58 PM Slawek Kaplonski wrote:
>
> > Hi,
> >
> > On Wednesday, 27 July 2022 at 09:55:12 CEST, Parsa Aminian wrote:
> > > hello
> > > is it possible to assign a valid ip to an openstack router directly
> > > without nat? As I know, openstack needs an invalid fixed ip when using
> > > a router, and a valid ip should be added as a floating ip.
> >
> > I don't understand what You mean by "invalid" and "valid" IP address.
> > Can You explain it?
> > If You want to have a public (external) IP address directly in Your
> > instance, You can plug Your instance directly into the provider network,
> > without using a neutron router at all.
> >
> > --
> > Slawek Kaplonski
> > Principal Software Engineer
> > Red Hat

--
Slawek Kaplonski
Principal Software Engineer
Red Hat

From skaplons at redhat.com  Thu Jul 28 10:51:45 2022
From: skaplons at redhat.com (Slawek Kaplonski)
Date: Thu, 28 Jul 2022 12:51:45 +0200
Subject: [all][TC] Bare rechecks stats week of 25.07
Message-ID: <15225123.PIt3FUKRBJ@p1>

Hi,

Here are fresh data about bare rechecks.
+--------------------+---------------+--------------+-------------------+
| Team               | Bare rechecks | All Rechecks | Bare rechecks [%] |
+--------------------+---------------+--------------+-------------------+
| requirements       | 2             | 2            | 100.0             |
| keystone           | 1             | 1            | 100.0             |
| OpenStackSDK       | 3             | 3            | 100.0             |
| tacker             | 3             | 3            | 100.0             |
| freezer            | 1             | 1            | 100.0             |
| rally              | 1             | 1            | 100.0             |
| OpenStack-Helm     | 14            | 15           | 93.33             |
| cinder             | 6             | 7            | 85.71             |
| kolla              | 20            | 25           | 80.0              |
| neutron            | 4             | 5            | 80.0              |
| Puppet OpenStack   | 10            | 13           | 76.92             |
| OpenStack Charms   | 3             | 6            | 50.0              |
| nova               | 11            | 22           | 50.0              |
| tripleo            | 8             | 19           | 42.11             |
| ironic             | 1             | 3            | 33.33             |
| Quality Assurance  | 1             | 3            | 33.33             |
| octavia            | 0             | 1            | 0.0               |
| OpenStackAnsible   | 0             | 1            | 0.0               |
| designate          | 0             | 1            | 0.0               |
| Release Management | 0             | 1            | 0.0               |
| horizon            | 0             | 1            | 0.0               |
+--------------------+---------------+--------------+-------------------+

These data should be more accurate, as my script now counts only comments
from the last 7 days, not all comments from patches updated in the last 7
days.

Reminder: "bare rechecks" are recheck comments without any reason given. If
You need to recheck a patch due to failed job(s), please first check the
failed job and try to identify what the issue was. Maybe there is already
an open bug for it, or You can open a new one and add it as the explanation
in the recheck comment. Or maybe it was some infra issue; in that case a
short explanation in the comment would also be enough.

--
Slawek Kaplonski
Principal Software Engineer
Red Hat
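The counting rule described above (only comments from the last 7 days, split into bare vs. reasoned rechecks) can be sketched roughly as below. The input shape and field handling are assumptions for illustration, not the actual stats script:

```python
from datetime import datetime, timedelta, timezone

def recheck_stats(comments, now=None, window_days=7):
    """Classify 'recheck' comments as bare (no reason) or reasoned.

    `comments` is a list of (timestamp, text) pairs; this structure is a
    guess at what the real script consumes, not its actual input format.
    Returns (bare_count, total_rechecks, bare_percent).
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=window_days)
    bare = total = 0
    for ts, text in comments:
        if ts < cutoff:
            continue  # only count comments from the last N days
        words = text.strip().split()
        if not words or words[0].lower() != "recheck":
            continue  # not a recheck comment at all
        total += 1
        if len(words) == 1:
            bare += 1  # "recheck" with no explanation given
    pct = round(100.0 * bare / total, 2) if total else 0.0
    return bare, total, pct
```

For example, one bare recheck and one reasoned recheck inside the window would report 50.0%, matching the style of the percentages in the table.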
From iurygregory at gmail.com  Thu Jul 28 11:56:10 2022
From: iurygregory at gmail.com (Iury Gregory)
Date: Thu, 28 Jul 2022 08:56:10 -0300
Subject: [ironic] Revise Ironic Vision Meeting
In-Reply-To:
References:
Message-ID:

Hello Everyone,

During the upstream meeting we decided to try to schedule a second meeting
so others could attend. Please vote in
https://doodle.com/meeting/organize/id/eZ4L5Xgb

Thanks!

On Mon, Jul 18, 2022 at 14:36, Iury Gregory wrote:

> Hello ironicers,
>
> During the upstream meeting today we scheduled the meeting to revise the
> vision of our project. The meeting will happen tomorrow at 15:00 UTC;
> details about the meeting are in the etherpad [1].
>
> See you tomorrow!
>
> [1] https://etherpad.opendev.org/p/revise-ironic-vision
>
> --
> Att[]'s
> Iury Gregory Melo Ferreira
> MSc in Computer Science at UFCG
> Ironic PTL
> Senior Software Engineer at Red Hat Brazil
> Social: https://www.linkedin.com/in/iurygregory
> E-mail: iurygregory at gmail.com

--
Att[]'s
Iury Gregory Melo Ferreira
MSc in Computer Science at UFCG
Ironic PTL
Senior Software Engineer at Red Hat Brazil
Social: https://www.linkedin.com/in/iurygregory
E-mail: iurygregory at gmail.com

From adivya1.singh at gmail.com  Thu Jul 28 12:41:54 2022
From: adivya1.singh at gmail.com (Adivya Singh)
Date: Thu, 28 Jul 2022 18:11:54 +0530
Subject: Regarding Application Credential in Open Stack XENA
In-Reply-To:
References:
Message-ID:

Hi Team,

Any feedback on this? How can it actually work, given that a member will
have the same scope as earlier? How should this be designed so that it
actually works for the user in a real scenario?
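For context, an application credential is normally consumed through a clouds.yaml entry along these lines, after which the client authenticates with OS_CLOUD=mycloud instead of a username and password. The auth_url, ID, and secret below are placeholders, not values from this thread:

```yaml
# clouds.yaml sketch; all values here are placeholders.
clouds:
  mycloud:
    auth_type: v3applicationcredential
    auth:
      auth_url: https://keystone.example.com:5000/v3
      application_credential_id: "21dced0fd20347869b93710d2b98aae0"
      application_credential_secret: "supersecret"
    region_name: RegionOne
```

With such an entry in place, pushing a qcow2 image would be something like `OS_CLOUD=mycloud openstack image create --disk-format qcow2 --file image.qcow2 myimage`, subject to the role's image-upload permissions.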
On Sat, Jul 23, 2022 at 12:38 AM Adivya Singh wrote:

> Hi Team,
>
> There is a use case where a user wants to create a CI/CD pipeline using
> Application Credentials in OpenStack. We therefore tried creating an
> application credential with a role and a secret key.
>
> But it is failing with the output below. The user wants to push a qcow2
> image from his system to OpenStack.
>
> Using auth plugin: v3 application credential
> /usr/lib/python3.7/socket.py:660: ResourceWarning: unclosed
>   self._sock = None
> ResourceWarning: Enable tracemalloc to get the object allocation traceback
> Not Found (HTTP 404) (Request-ID: req-217c9f28-d6a6-4649-adf9-2d9a4b965b3f)
> END return value: 1
>
> It failed with the result above.
>
> Regards
> Adivya Singh

From bshephar at redhat.com  Thu Jul 28 00:04:45 2022
From: bshephar at redhat.com (Brendan Shephard)
Date: Thu, 28 Jul 2022 10:04:45 +1000
Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL
In-Reply-To:
References:
Message-ID:

Hey,

It's probably best that you raise a bug here at this stage:
https://bugs.launchpad.net/tripleo

Can you attach all of the templates you're using to that bug, the overcloud
deploy command script that you're running, and also the log files that you
have shared here? I wasn't able to reproduce your issue, but if you raise a
bug we can direct it to the right team, who can help out with your keystone
errors.

Brendan Shephard
Senior Software Engineer
Brisbane, Australia
> tone_resources : Create identity public endpoint | undercloud | 0:24:59.456181 | 2.31s > 2022-07-27 15:20:48.735838 | 5254006e-bbd1-cd20-647c-00000000736c | TASK | Create identity internal endpoint > 2022-07-27 15:20:51.227000 | 5254006e-bbd1-cd20-647c-00000000736c | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://overcloud-publ ic.myhsc.com:13000/v3/services , The request you have made requires authentication."} > Checking further in the keystone logs in container: > > 2022-07-27 19:35:37.447 33 WARNING keystone.server.flask.application [req-bb4621d8-73ad-4bad-831f-5c2370e92e71 - - - - -] Authorization failed. The request you have made requires authentication. from fd00:fd00:fd00:9900::29: keystone.exception.Unauthorized: The request you have made requires authentication. > 2022-07-27 19:35:37.998 26 WARNING py.warnings [req-54d44e3a-5e34-4e40-b2dc-e8213353ea05 ab5e9670632544f8a8c7e1b3ac175bcd e4185872cadb442aa9a59980b3227941 - default default] /usr/lib/python3.6/site-packages/oslo_policy/policy.py:1065: UserWarning: Policy identity:list_projects failed scope check. The token used to make the request was project scoped but the policy requires ['system', 'domain'] scope. This behavior may change in the future where using the intended scope is required > > I am kind of blocked now, any lead would let me understand the problem more and maybe it can solve the issue. > > Best Regards, > Lokendra > > On Mon, Jul 25, 2022 at 3:12 PM Lokendra Rathour > wrote: > Hi Brendan, > Apologies for this delay, i had to redo the setup to reach this point, and also this time just to eliminate my Doubt i removed SSL for overcloud. 
Now I am only using DNS Server. In this case also I am getting the same error. > | 0:13:20.198877 | 1.86s > 2022-07-25 14:37:29.657118 | 525400a7-0932-2ed1-d313-000000007193 | TASK | Create identity internal endpoint > 2022-07-25 14:37:31.995131 | 525400a7-0932-2ed1-d313-000000007193 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: http://[fd00:fd00:fd00:9900::a0]:5000/v3/services, The request you have made requires authentication."} > > To answer your question please note: > > "OS_CLOUD=overcloud openstack endpoint list" > [root at GGNLABPM4 ~]# ssh stack at 10.0.1.29 > stack at 10.0.1.29 's password: > Activate the web console with: systemctl enable --now cockpit.socket > > Last login: Mon Jul 25 14:38:44 2022 from 10.0.1.4 > [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list > +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > | ID | Region | Service Name | Service Type | Enabled | Interface | URL | > +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > | 1ecd328b5ea1426bb411d157b8339dd2 | regionOne | keystone | identity | True | public | http://[fd00:fd00:fd00:9900::a0]:5000 | > | 518cfa0f2ece43b684710006c9fa5b25 | regionOne | keystone | identity | True | admin | http://30.30.30.181:35357 | > | 8cda413052c24718b073578bb497f483 | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::a0]:5000 | > 
+----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ > [stack at undercloud ~]$ > > it is giving us only keystone endpoints. > > Also note that I am trying to deploy the end to end setup with FQDN only. and in this case as well I am facing the same issue as old. > > thanks once again for your inputs. > > -Lokendra > > > > On Wed, Jul 20, 2022 at 3:07 PM Brendan Shephard > wrote: > Hey, > > I think it's weird that you got a response at all when you run the openstack endpoint list, since you said haproxy isn't running. So there should be nothing serving that endpoint. > > I noticed you have the stackrc file sourced. Try it again without that file sourced, so: > $ su - stack > $ OS_CLOUD=overcloud openstack endpoint list > > I would suspect that nothing should be responding. It could be the stackrc file causing issues with some of the environment variables. If the above command doesn't return anything, then my suggestion would be to re-run the deployment like this: > > $ su - stack > $ export OS_CLOUD=undercloud > # Then run your deployment script again > $ bash overcloud_deploy.sh > > The OS_CLOUD variable tells the openstackclient to lookup the details about that cloud from your clouds.yaml file. Which will be located in /home/stack/.config/openstack/clouds.yaml. > > This method is preferable to the sourcing of RC files. > > Reference: > https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config.html > > Regarding the HAProxy warnings. I don't think they should be fatal. afaik, HAProxy should still be starting. If it's not, there might be another error that you will need to look for in the log files under /var/log/containers/haproxy/ > > I wasn't able to reproduce that warning by following the documentation for enabling TLS though. So it seems like an odd error to be getting. 
> > Brendan Shephard > Software Engineer > Red Hat APAC > 193 N Quay > Brisbane City QLD 4000 > @RedHat Red Hat Red Hat > > > > On Wed, Jul 20, 2022 at 7:02 PM Lokendra Rathour > wrote: > Hi Brendan / Team, > Any lead for the issue raised? > > -Lokendra > > > > On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour > wrote: > Hi Brendan,, > Thanks for the inputs. > when i run the command as you suggested I get this: > (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list > +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ > | ID | Region | Service Name | Service Type | Enabled | Interface | URL | > +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ > | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | > | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | identity | True | admin | http://30.30.30.173:35357 | > | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | identity | True | public | https://overcloud-hsc.com:13000 | > +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ > (undercloud) [stack at undercloud ~]$ > > On the other note that i notices was as below: > HAproxy container is not running. > [root at overcloud-controller-2 stdouts]# podman ps -a | grep haproxy > e91dbde042db undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo 24 hours ago Exited (1) Less than a second ago container-puppet-haproxy\ > Checking logs: > 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= > 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] > 2022-07-19T08:47:00.496323705+05:30 stderr F + . 
kolla_extend_start > 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' > 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' > 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c '$*' -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi > 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind fd00:fd00:fd00:9900::81:13776' : > 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library will use an automatically generated DH parameter. > automatically2022-07-19T08:47:00.513967576+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind fd00:fd00:fd00:9900::81:13292' : > 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind fd00:fd00:fd00:9900::81:13004' : > 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 
> 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind fd00:fd00:fd00:9900::81:13005' : > 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind fd00:fd00:fd00:2000::326:443' : > 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind fd00:fd00:fd00:9900::81:13000' : > 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind fd00:fd00:fd00:9900::81:13696' : > 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind fd00:fd00:fd00:9900::81:13080' : > 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. 
> 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind fd00:fd00:fd00:9900::81:13774' : > 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind fd00:fd00:fd00:9900::81:13778' : > 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind fd00:fd00:fd00:9900::81:13808' : > 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load default 1024 bits DH parameter for certificate '/etc/pki/tls/private/overcloud_endpoint.pem'. > 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library will use an automatically generated DH parameter. > 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] 199/084700 (7) : Setting tune.ssl.default-dh-param to 1024 by default, if your workload permits it you should set it to at least 2048. Please set a value >= 1024 to make this warning disappear. 
> pcs status also show that proxy is down for the controller with VIP: > Failed Resource Actions: > * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 'error' (1): call=139, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', queued=0ms, exec=1222ms > * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 'error' (1): call=191, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', queued=0ms, exec=1171ms > * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 'error' (1): call=193, status='complete', exitreason='podman failed to launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', queued=0ms, exec=1256ms > do let me know in case we need anything more around it. > thanks once again for the support. > -Lokendra > > On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard > wrote: > Hey, > > Doesn't look like there is anything wrong with the certificate there. You would be getting a TLS error if that was the problem. > > What does your clouds.yaml file look like now? What happens if you run this command from the Undercloud node: > $ OS_CLOUD=overcloud openstack endpoint list > > Do you get the same error? > > Brendan Shephard > Software Engineer > Red Hat APAC > 193 N Quay > Brisbane City QLD 4000 > @RedHat Red Hat Red Hat > > > > On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour > wrote: > Hi Swogat and Vikarna, > We have tried adding the DNS entry for the overcloud domain. 
we are getting the same error: > > 022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:11:18.785769 | 2.16s > 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | TASK | Create identity internal endpoint > 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://overcloud-hsc.com:13000/v3/services , The request you have made requires authentication."} > 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | TIMING | tripleo_keystone_resources : Create identity internal endpoint | undercloud | 0:11:21.074605 | 2. > > Certificate configs: > > [stack at undercloud oc-domain-name]$ cat server.csr.cnf > [req] > default_bits = 2048 > prompt = no > default_md = sha256 > distinguished_name = dn > [dn] > C=IN > ST=UTTAR PRADESH > L=NOIDA > O=HSC > OU=HSC > emailAddress=demo at demo.com > CN=overcloud-hsc.com > [stack at undercloud oc-domain-name]$ cat v3.ext > authorityKeyIdentifier=keyid,issuer > basicConstraints=CA:FALSE > keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment > subjectAltName = @alt_names > [alt_names] > DNS.1=overcloud-hsc.com > [stack at undercloud oc-domain-name]$ > > the difference we see from others is that we are using self-signed certificates. > > please let me know in case we need to check something else. Somehow this issue remains stuck. > > > On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan > wrote: > I was facing a similar kind of issue. 
> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 > Here is the solution that helped me fix it. > Also make sure the cn that you will use is reachable from undercloud (maybe) script should take care of it. > > Also please follow Mr. Tathe's mail to add the cn first. > > With regards > Swogat Pradhan > > On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe > wrote: > Hi Lokendra, > > The CN field is missing. Can you add that and generate the certificate again. > > CN=ipaddress > > Also add dns.1=ipaddress under alt_names for precaution. > > Vikarna > > On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, > wrote: > HI Vikarna, > Thanks for the inputs. > I am note able to access any tabs in GUI. > > > to re-state, we are failing at the time of deployment at step4 : > > PLAY [External deployment step 4] ********************************************** > 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 | TASK | External deployment step 4 > 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 | OK | External deployment step 4 | undercloud -> localhost | result={ > "changed": false, > "msg": "Use --start-at-task 'External deployment step 4' to resume from this task" > } > [WARNING]: ('undercloud -> localhost', '525400ae-089b-870a-fab6-0000000000d7') > missing from stats > 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 | TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s > 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 | INCLUDED | /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml | undercloud > 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 | TASK | Clean up legacy Cinder keystone catalog entries > 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} > 2022-07-13 21:35:25.112368 | 
525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:24.204562 | 2.48s > 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 | OK | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} > 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.122584 | 4.40s > 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:26.124296 | 4.40s > 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c | TASK | Manage Keystone resources for OpenStack services > 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c | TIMING | Manage Keystone resources for OpenStack services | undercloud | 0:11:26.169842 | 0.03s > 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b | TASK | Gather variables for each operating system > 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b | TIMING | tripleo_keystone_resources : Gather variables for each operating system | undercloud | 0:11:26.253383 | 0.04s > 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c | TASK | Create Keystone Admin resources > 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c | TIMING | tripleo_keystone_resources : Create Keystone Admin resources | undercloud | 0:11:26.299608 | 0.03s > 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | undercloud > 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad | TASK | Create default domain > 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad | OK | Create default domain | undercloud > 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad | TIMING | 
tripleo_keystone_resources : Create default domain | undercloud | 0:11:28.437360 | 2.09s > 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae | TASK | Create admin and service projects > 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae | TIMING | tripleo_keystone_resources : Create admin and service projects | undercloud | 0:11:28.483468 | 0.03s > 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 | INCLUDED | /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | undercloud > 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 | TASK | Async creation of Keystone project > 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=admin > 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.238078 | 0.72s > 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 | CHANGED | Async creation of Keystone project | undercloud | item=service > 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.586587 | 1.06s > 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 | TIMING | tripleo_keystone_resources : Async creation of Keystone project | undercloud | 0:11:29.587916 | 1.07s > 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 | TASK | Check Keystone project status > 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 | WAITING | Check Keystone project status | undercloud | 30 retries left > 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=admin > 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project 
status | undercloud | 0:11:35.260666 | 5.66s > 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 | OK | Check Keystone project status | undercloud | item=service > 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.494729 | 5.89s > 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 | TIMING | tripleo_keystone_resources : Check Keystone project status | undercloud | 0:11:35.498771 | 5.89s > 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af | TASK | Create admin role > 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af | OK | Create admin role | undercloud > 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af | TIMING | tripleo_keystone_resources : Create admin role | undercloud | 0:11:37.725949 | 2.20s > 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 | TASK | Create _member_ role > 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 | SKIPPED | Create _member_ role | undercloud > 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 | TIMING | tripleo_keystone_resources : Create _member_ role | undercloud | 0:11:37.783369 | 0.04s > 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 | TASK | Create admin user > 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 | CHANGED | Create admin user | undercloud > 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 | TIMING | tripleo_keystone_resources : Create admin user | undercloud | 0:11:41.145472 | 3.34s > 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 | TASK | Assign admin role to admin project for admin user > 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 | OK | Assign admin role to admin project for admin user | undercloud > 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 | TIMING | 
tripleo_keystone_resources : Assign admin role to admin project for admin user | undercloud | 0:11:44.288848 | 3.13s > 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 | TASK | Assign _member_ role to admin project for admin user > 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 | SKIPPED | Assign _member_ role to admin project for admin user | undercloud > 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 | TIMING | tripleo_keystone_resources : Assign _member_ role to admin project for admin user | undercloud | 0:11:44.346479 | 0.04s > 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 | TASK | Create identity service > 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 | OK | Create identity service | undercloud > 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 | TIMING | tripleo_keystone_resources : Create identity service | undercloud | 0:11:46.022362 | 1.66s > 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 | TASK | Create identity public endpoint > 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 | OK | Create identity public endpoint | undercloud > 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:11:48.233349 | 2.19s > 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 | TASK | Create identity internal endpoint > 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request 
you have made requires authentication."} > 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 | TIMING | tripleo_keystone_resources : Create identity internal endpoint | undercloud | 0:11:50.660654 | 2.41s > > PLAY RECAP ********************************************************************* > localhost : ok=1 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 > overcloud-controller-0 : ok=437 changed=103 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-1 : ok=435 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-2 : ok=432 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 > undercloud : ok=39 changed=7 unreachable=0 failed=1 skipped=6 rescued=0 ignored=0 > > Also : > (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf > [req] > default_bits = 2048 > prompt = no > default_md = sha256 > distinguished_name = dn > [dn] > C=IN > ST=UTTAR PRADESH > L=NOIDA > O=HSC > OU=HSC > emailAddress=demo at demo.com > > v3.ext: > (undercloud) [stack at undercloud oc-cert]$ cat v3.ext > authorityKeyIdentifier=keyid,issuer > basicConstraints=CA:FALSE > keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment > subjectAltName = @alt_names > [alt_names] > IP.1=fd00:fd00:fd00:9900::81 > > Using these files we create other certificates. > Please check and let me know in case we need anything else. > > > On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe > wrote: > Hi Lokendra, > > Are you able to access all the tabs in the OpenStack dashboard without any error? If not, please retry generating the certificate. Also, share the openssl.cnf or server.cnf. > > On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour > wrote: > Hi Team, > Any input on this case raised. 
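[Editorial note] For anyone reproducing this, the server.csr.cnf and v3.ext files quoted above are the usual inputs to an openssl CSR-and-sign flow. A minimal sketch follows; the throwaway CA and all file names other than the two posted config files are assumptions, not the exact commands used in this deployment:

```shell
# Recreate the two config files exactly as posted in the thread:
cat > server.csr.cnf <<'EOF'
[req]
default_bits = 2048
prompt = no
default_md = sha256
distinguished_name = dn
[dn]
C=IN
ST=UTTAR PRADESH
L=NOIDA
O=HSC
OU=HSC
emailAddress=demo@demo.com
EOF
cat > v3.ext <<'EOF'
authorityKeyIdentifier=keyid,issuer
basicConstraints=CA:FALSE
keyUsage = digitalSignature, nonRepudiation, keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1=fd00:fd00:fd00:9900::81
EOF
# A throwaway CA purely for this sketch (the real deployment has its own CA):
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 365 -subj "/CN=sketch-ca"
# Server key + CSR from server.csr.cnf, then sign with the SANs from v3.ext:
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -config server.csr.cnf
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out server.crt -days 365 -sha256 -extfile v3.ext
# The IP SAN must match the address clients actually connect to:
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
```

If the SAN shown does not cover the VIP the endpoints resolve to (here fd00:fd00:fd00:9900::81), clients fail hostname verification exactly as in the "hostname ... doesn't match" tracebacks later in this thread.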
> > Thanks, > Lokendra > > > On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour > wrote: > Hi Shephard/Swogat, > I tried changing the setting as suggested and it looks like it has failed at step 4 with this error: > > :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | TIMING | tripleo_keystone_resources : Create identity public endpoint | undercloud | 0:24:47.736198 | 2.21s > 2022-07-12 21:31:32.185594 | 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity internal endpoint > 2022-07-12 21:31:34.468996 | 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity internal endpoint | undercloud | error={"changed": false, "extra_data": {"data": null, "details": "The request you have made requires authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The request you have made requires authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list services: Client Error for url: https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request you have made requires authentication."} > 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 > > Checking further in the endpoint list: > I see that only one endpoint for keystone is getting created.
> DeprecationWarning > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > | ID | Region | Service Name | Service Type | Enabled | Interface | URL | > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone | identity | True | admin | http://30.30.30.173:35357 | > | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone | identity | True | internal | http://[fd00:fd00:fd00:2000::326]:5000 | > | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone | identity | True | public | https://[fd00:fd00:fd00:9900::81]:13000 | > +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ > > It looks like something related to SSL; we have also verified that the GUI login screen shows that the certificates are applied. > Exploring more in the logs; meanwhile, any suggestions or known observations would be of great help. > Thanks again for the support. > > Best Regards, > Lokendra > > > On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan > wrote: > I had faced a similar kind of issue: for an IP-based setup you need to specify the domain name as the IP that you are going to use. This error is showing up because the SSL certificate is IP-based but the FQDN seems to be undercloud.com or overcloud.example.com. > I think for the undercloud you can change undercloud.conf. > > And will it work if we set the CloudDomain parameter to the IP address for the overcloud? Because it seems he has not specified the CloudDomain parameter, and overcloud.example.com is the default domain for the overcloud. > > On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, > wrote: > What is the domain name you have specified in the undercloud.conf file?
> And what is the fqdn name used for the generation of the SSL cert? > > On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, > wrote: > Hi Team, > We were trying to install overcloud with SSL enabled for which the UC is installed, but OC install is getting failed at step 4: > > ERROR > :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com '\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} > 2022-07-08 17:03:23.606739 | 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder keystone catalog entries | undercloud | item={'service_name': 'cinderv3', 'service_type': 'volume'} | error={"ansible_index_var": "cinder_api_service", "ansible_loop_var": "item", "changed": false, "cinder_api_service": 1, "item": {"service_name": "cinderv3", "service_type": "volume"}, "module_stderr": "Failed to discover available identity versions when contacting https://[fd00:fd00:fd00:9900::2ef]:13000. 
Attempting to parse version from URL.\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 600, in urlopen\n chunked=chunked)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, in _make_request\n self._validate_conn(conn)\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, in _validate_conn\n conn.connect()\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in connect\n _match_hostname(cert, self.assert_hostname or server_hostname)\n File \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in _match_hostname\n match_hostname(cert, asserted_hostname)\n File \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % (hostname, dnsnames[0]))\nssl.CertificateError: hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com '\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in send\n timeout=timeout\n File \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, in urlopen\n _stacktrace=sys.exc_info()[2])\n File \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in increment\n raise MaxRetryError(_pool, url, error or ResponseError(cause))\nurllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com '\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, in _send_request\n resp = self.session.request(method, url, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, in request\n resp 
= self.send(prep, **send_kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in send\n r = adapter.send(request, **kwargs)\n File \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com '\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 138, in _do_create_plugin\n authenticated=False)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 610, in get_discovery\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, in get_discovery\n disc = Discover(session, url, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, in __init__\n authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, in get_version_data\n resp = session.get(url, headers=headers, authenticated=authenticated)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, in get\n return self.request(url, 'GET', **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in request\n resp = send(**kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, in _send_request\n raise exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 
'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com '\",),))\n\nDuring handling of the above exception, another exception occurred:\n\nTraceback (most recent call last):\n File \"\", line 102, in \n File \"\", line 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n return _run_module_code(code, init_globals, run_name, mod_spec)\n File \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n mod_name, mod_spec, pkg_name, script_name)\n File \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, run_globals)\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 185, in \n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 181, in main\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", line 407, in __call__\n File \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", line 141, in run\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 517, in search_services\n services = self.list_services()\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 492, in list_services\n if self._is_client_version('identity', 2):\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 460, in _is_client_version\n client = getattr(self, client_name)\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line 32, in 
_identity_client\n 'identity', min_version=2, max_version='3.latest')\n File \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in get_endpoint\n return self.session.get_endpoint(auth or self.auth, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 380, in get_endpoint\n allow_version_hack=allow_version_hack, **kwargs)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 271, in get_endpoint_data\n service_catalog = self.get_access(session).service_catalog\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 206, in get_auth_ref\n self._plugin = self._do_create_plugin(session)\n File \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", line 161, in _do_create_plugin\n 'auth_url is correct. %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not find versioned identity endpoints when attempting to authenticate. Please check that your auth_url is correct. 
SSL exception connecting to https://[fd00:fd00:fd00:9900::2ef]:13000: HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max retries exceeded with url: / (Caused by SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't match 'overcloud.example.com '\",),))\n", "module_stdout": "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} > 2022-07-08 17:03:23.609354 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s > 2022-07-08 17:03:23.611094 | 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s > > PLAY RECAP ********************************************************************* > localhost : ok=0 changed=0 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 > overcloud-controller-0 : ok=437 changed=104 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-1 : ok=436 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-controller-2 : ok=431 changed=101 unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 > overcloud-novacompute-0 : ok=345 changed=83 unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 > undercloud : ok=28 changed=7 unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 > 2022-07-08 17:03:23.647270 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > 2022-07-08 17:03:23.647907 | ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ > > > in the deploy.sh: > > openstack overcloud deploy --templates \ > -r /home/stack/templates/roles_data.yaml \ > --networks-file /home/stack/templates/custom_network_data.yaml \ > --vip-file /home/stack/templates/custom_vip_data.yaml \ > --baremetal-deployment /home/stack/templates/overcloud-baremetal-deploy.yaml \ > --network-config \ > -e /home/stack/templates/environment.yaml \ > -e 
/usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml \ > -e /home/stack/templates/ironic-config.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ > -e /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ > -e /home/stack/containers-prepare-parameter.yaml > > Additional lines, highlighted in yellow, were passed with modifications: > tls-endpoints-public-ip.yaml: > Passed as-is with the defaults. > enable-tls.yaml: > # ******************************************************************* > # This file was created automatically by the sample environment > # generator. Developers should use `tox -e genconfig` to update it. > # Users are recommended to make changes to a copy of the file instead > # of the original, if any customizations are needed. > # ******************************************************************* > # title: Enable SSL on OpenStack Public Endpoints > # description: | > # Use this environment to pass in certificates for SSL deployments. > # For these values to take effect, one of the tls-endpoints-*.yaml > # environments must also be used.
> parameter_defaults: > # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in Horizon > # Type: boolean > HorizonSecureCookies: True > > # Specifies the default CA cert to use if TLS is used for services in the public network. > # Type: string > PublicTLSCAFile: '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' > > # The content of the SSL certificate (without Key) in PEM format. > # Type: string > SSLRootCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTICATELINES TRIMMED ** > -----END CERTIFICATE----- > > SSLCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTICATELINES TRIMMED ** > -----END CERTIFICATE----- > # The content of an SSL intermediate CA certificate in PEM format. > # Type: string > SSLIntermediateCertificate: '' > > # The content of the SSL Key in PEM format. > # Type: string > SSLKey: | > -----BEGIN PRIVATE KEY----- > ----*** CERTICATELINES TRIMMED ** > -----END PRIVATE KEY----- > > # ****************************************************** > # Static parameters - these are values that must be > # included in the environment but should not be changed. > # ****************************************************** > # The filepath of the certificate as it will be stored in the controller. > # Type: string > DeployedSSLCertificatePath: /etc/pki/tls/private/overcloud_endpoint.pem > > # ********************* > # End static parameters > # ********************* > inject-trust-anchor.yaml > # ******************************************************************* > # This file was created automatically by the sample environment > # generator. Developers should use `tox -e genconfig` to update it. > # Users are recommended to make changes to a copy of the file instead > # of the original, if any customizations are needed. 
> # ******************************************************************* > # title: Inject SSL Trust Anchor on Overcloud Nodes > # description: | > # When using an SSL certificate signed by a CA that is not in the default > # list of CAs, this environment allows adding a custom CA certificate to > # the overcloud nodes. > parameter_defaults: > # The content of a CA's SSL certificate file in PEM format. This is evaluated on the client side. > # Mandatory. This parameter must be set by the user. > # Type: string > SSLRootCertificate: | > -----BEGIN CERTIFICATE----- > ----*** CERTICATELINES TRIMMED ** > -----END CERTIFICATE----- > > resource_registry: > OS::TripleO::NodeTLSCAData: ../../puppet/extraconfig/tls/ca-inject.yaml > > > > The procedure to create such files was followed using: > Deploying with SSL - TripleO 3.0.0 documentation (openstack.org) > > The idea is to deploy the overcloud with SSL enabled, i.e. a self-signed IP-based certificate, without DNS. > > Any idea around this error would be of great help. > > -- > ~ Lokendra > skype: lokendrarathour > From lokendrarathour at gmail.com Thu Jul 28 04:32:07 2022 From: lokendrarathour at gmail.com (Lokendra Rathour) Date: Thu, 28 Jul 2022 10:02:07 +0530 Subject: [Triple0 - Wallaby] Overcloud deployment getting failed with SSL In-Reply-To: References: Message-ID: Hi Brendan, Thanks for the advice.
bug is reported: https://bugs.launchpad.net/tripleo/+bug/1982996 On Thu, Jul 28, 2022 at 5:34 AM Brendan Shephard wrote: > Hey, > > It's probably best that you raise a bug here at this stage: > https://bugs.launchpad.net/tripleo > > Can you attach all of the templates you're using to that bug, the > overcloud deploy command script that you're running and also the log files > that you have shared here? > > I wasn't able to reproduce your issue, but if you raise a bug we can > direct it to the right team who can help out with your keystone errors. > > Brendan Shephard > Senior Software Engineer > Brisbane, Australia > > > > On 28 Jul 2022, at 2:55 am, Lokendra Rathour > wrote: > > Hi Team, > I tried again with DNS enabled, but the error remains the same. > > tone_resources : Create identity public endpoint | undercloud | > 0:24:59.456181 | 2.31s > 2022-07-27 15:20:48.735838 | 5254006e-bbd1-cd20-647c-00000000736c | > TASK | Create identity internal endpoint > 2022-07-27 15:20:51.227000 | 5254006e-bbd1-cd20-647c-00000000736c | > FATAL | Create identity internal endpoint | undercloud | error={"changed": > false, "extra_data": {"data": null, "details": "The request you have made > requires authentication.", "response": > "{\"error\":{\"code\":401,\"message\":\"The request you have made requires > authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to > list services: Client Error for url: > https://overcloud-public.myhsc.com:13000/v3/services, The request you have made requires > authentication."} > > Checking further in the keystone logs in the container: > > > 2022-07-27 19:35:37.447 33 WARNING keystone.server.flask.application > [req-bb4621d8-73ad-4bad-831f-5c2370e92e71 - - - - -] Authorization failed. > The request you have made requires authentication. from > fd00:fd00:fd00:9900::29: keystone.exception.Unauthorized: The request you > have made requires authentication.
> 2022-07-27 19:35:37.998 26 WARNING py.warnings > [req-54d44e3a-5e34-4e40-b2dc-e8213353ea05 ab5e9670632544f8a8c7e1b3ac175bcd > e4185872cadb442aa9a59980b3227941 - default default] > /usr/lib/python3.6/site-packages/oslo_policy/policy.py:1065: UserWarning: > Policy identity:list_projects failed scope check. The token used to make > the request was project scoped but the policy requires ['system', 'domain'] > scope. This behavior may change in the future where using the intended > scope is required > > I am blocked now; any lead that helps me understand the problem > better might also solve the issue. > > Best Regards, > Lokendra > > On Mon, Jul 25, 2022 at 3:12 PM Lokendra Rathour < > lokendrarathour at gmail.com> wrote: > >> Hi Brendan, >> Apologies for the delay; I had to redo the setup to reach this point, >> and this time, just to eliminate my doubt, I removed SSL for the overcloud. >> Now I am only using a DNS server. In this case as well I am getting the same >> error. >> >> | 0:13:20.198877 | 1.86s >> 2022-07-25 14:37:29.657118 | 525400a7-0932-2ed1-d313-000000007193 | >> TASK | Create identity internal endpoint >> 2022-07-25 14:37:31.995131 | 525400a7-0932-2ed1-d313-000000007193 | >> FATAL | Create identity internal endpoint | undercloud | error={"changed": >> false, "extra_data": {"data": null, "details": "The request you have made >> requires authentication.", "response": >> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >> services: Client Error for url: >> http://[fd00:fd00:fd00:9900::a0]:5000/v3/services, The request you have >> made requires authentication."} >> >> >> To answer your question, please note: >> >> "OS_CLOUD=overcloud openstack endpoint list" >> >> [root at GGNLABPM4 ~]# ssh stack at 10.0.1.29 >> stack at 10.0.1.29's password: >> Activate the web console with: systemctl enable --now cockpit.socket >> >> Last login: Mon Jul 25
14:38:44 2022 from 10.0.1.4 >> [stack at undercloud ~]$ OS_CLOUD=overcloud openstack endpoint list >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ >> | ID | Region | Service Name | Service >> Type | Enabled | Interface | URL | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ >> | 1ecd328b5ea1426bb411d157b8339dd2 | regionOne | keystone | identity >> | True | public | http://[fd00:fd00:fd00:9900::a0]:5000 | >> | 518cfa0f2ece43b684710006c9fa5b25 | regionOne | keystone | identity >> | True | admin | http://30.30.30.181:35357 | >> | 8cda413052c24718b073578bb497f483 | regionOne | keystone | identity >> | True | internal | http://[fd00:fd00:fd00:2000::a0]:5000 | >> >> +----------------------------------+-----------+--------------+--------------+---------+-----------+---------------------------------------+ >> [stack at undercloud ~]$ >> >> >> it is giving us only keystone endpoints. >> >> Also note that I am trying to deploy the end to end setup with FQDN only. >> and in this case as well I am facing the same issue as old. >> >> thanks once again for your inputs. >> >> -Lokendra >> >> >> >> On Wed, Jul 20, 2022 at 3:07 PM Brendan Shephard >> wrote: >> >>> Hey, >>> >>> I think it's weird that you got a response at all when you run the >>> openstack endpoint list, since you said haproxy isn't running. So there >>> should be nothing serving that endpoint. >>> >>> I noticed you have the stackrc file sourced. Try it again without that >>> file sourced, so: >>> $ su - stack >>> $ OS_CLOUD=overcloud openstack endpoint list >>> >>> I would suspect that nothing should be responding. It could be the >>> stackrc file causing issues with some of the environment variables. 
If the >>> above command doesn't return anything, then my suggestion would be to >>> re-run the deployment like this: >>> >>> $ su - stack >>> $ export OS_CLOUD=undercloud >>> # Then run your deployment script again >>> $ bash overcloud_deploy.sh >>> >>> The OS_CLOUD variable tells the openstackclient to look up the details >>> about that cloud from your clouds.yaml file, which will be located in >>> /home/stack/.config/openstack/clouds.yaml. >>> >>> This method is preferable to the sourcing of RC files. >>> >>> Reference: >>> >>> https://docs.openstack.org/openstacksdk/latest/user/guides/connect_from_config.html >>> >>> Regarding the HAProxy warnings: I don't think they should be fatal. >>> AFAIK, HAProxy should still be starting. If it's not, there might be >>> another error that you will need to look for in the log files under >>> /var/log/containers/haproxy/ >>> >>> I wasn't able to reproduce that warning by following the documentation >>> for enabling TLS though. So it seems like an odd error to be getting. >>> >>> Brendan Shephard >>> Software Engineer >>> >>> Red Hat APAC >>> 193 N Quay >>> Brisbane City QLD 4000 >>> >>> >>> >>> >>> >>> On Wed, Jul 20, 2022 at 7:02 PM Lokendra Rathour < >>> lokendrarathour at gmail.com> wrote: >>> >>>> Hi Brendan / Team, >>>> Any lead for the issue raised? >>>> >>>> -Lokendra >>>> >>>> >>>> >>>> On Tue, Jul 19, 2022 at 11:46 AM Lokendra Rathour < >>>> lokendrarathour at gmail.com> wrote: >>>> >>>>> Hi Brendan, >>>>> Thanks for the inputs.
>>>>> When I run the command as you suggested, I get this: >>>>> >>>>> (undercloud) [stack at undercloud ~]$ OS_CLOUD=overcloud openstack >>>>> endpoint list >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>>> | ID | Region | Service Name | >>>>> Service Type | Enabled | Interface | URL >>>>> | >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>>> | 1bfe43c9cf174bd8a01a3a681538766a | regionOne | keystone | >>>>> identity | True | internal | >>>>> http://[fd00:fd00:fd00:2000::326]:5000 | >>>>> | 707e92fc11df4a74bceb5e48f2561357 | regionOne | keystone | >>>>> identity | True | admin | http://30.30.30.173:35357 >>>>> | >>>>> | fab4e66170c8402f899c5f43fd4c39fe | regionOne | keystone | >>>>> identity | True | public | https://overcloud-hsc.com:13000 >>>>> | >>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------+ >>>>> (undercloud) [stack at undercloud ~]$ >>>>> >>>>> >>>>> On another note, here is what I noticed: >>>>> >>>>> - The HAProxy container is not running. >>>>> - [root at overcloud-controller-2 stdouts]# podman ps -a | grep >>>>> haproxy >>>>> e91dbde042db >>>>> undercloud.ctlplane.localdomain:8787/tripleowallaby/openstack-haproxy:current-tripleo >>>>> 24 hours ago Exited (1) Less than a >>>>> second ago container-puppet-haproxy\ >>>>> - Checking logs: >>>>> - 2022-07-19T08:47:00.496212294+05:30 stderr F + ARGS= >>>>> 2022-07-19T08:47:00.496300242+05:30 stderr F + [[ ! -n '' ]] >>>>> 2022-07-19T08:47:00.496323705+05:30 stderr F + . 
>>>>> kolla_extend_start >>>>> 2022-07-19T08:47:00.496578173+05:30 stderr F + echo 'Running >>>>> command: '\''bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper >>>>> ]; then exec /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; >>>>> else exec /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi'\''' >>>>> 2022-07-19T08:47:00.496605469+05:30 stdout F Running command: >>>>> 'bash -c $* -- eval if [ -f /usr/sbin/haproxy-systemd-wrapper ]; then exec >>>>> /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg; else exec >>>>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws; fi' >>>>> 2022-07-19T08:47:00.496895618+05:30 stderr F + exec bash -c >>>>> '$*' -- eval if '[' -f /usr/sbin/haproxy-systemd-wrapper '];' then exec >>>>> /usr/sbin/haproxy-systemd-wrapper -f '/etc/haproxy/haproxy.cfg;' else exec >>>>> /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg '-Ws;' fi >>>>> 2022-07-19T08:47:00.513182490+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:28] : 'bind >>>>> fd00:fd00:fd00:9900::81:13776' : >>>>> 2022-07-19T08:47:00.513182490+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.513182490+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> automatically2022-07-19T08:47:00.513967576+05:30 stderr F >>>>> [WARNING] 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:45] : 'bind >>>>> fd00:fd00:fd00:9900::81:13292' : >>>>> 2022-07-19T08:47:00.513967576+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.513967576+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. 
>>>>> 2022-07-19T08:47:00.514736662+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:69] : 'bind >>>>> fd00:fd00:fd00:9900::81:13004' : >>>>> 2022-07-19T08:47:00.514736662+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.514736662+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.515461787+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:89] : 'bind >>>>> fd00:fd00:fd00:9900::81:13005' : >>>>> 2022-07-19T08:47:00.515461787+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.515461787+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.516167406+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:108] : 'bind >>>>> fd00:fd00:fd00:2000::326:443' : >>>>> - 2022-07-19T08:47:00.517937930+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.518534123+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:172] : 'bind >>>>> fd00:fd00:fd00:9900::81:13000' : >>>>> 2022-07-19T08:47:00.518534123+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.518534123+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. 
>>>>> 2022-07-19T08:47:00.519127743+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:201] : 'bind >>>>> fd00:fd00:fd00:9900::81:13696' : >>>>> 2022-07-19T08:47:00.519127743+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.519127743+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.519734281+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:233] : 'bind >>>>> fd00:fd00:fd00:9900::81:13080' : >>>>> 2022-07-19T08:47:00.519734281+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.519734281+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.520285158+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:250] : 'bind >>>>> fd00:fd00:fd00:9900::81:13774' : >>>>> 2022-07-19T08:47:00.520285158+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.520285158+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.520830405+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:266] : 'bind >>>>> fd00:fd00:fd00:9900::81:13778' : >>>>> 2022-07-19T08:47:00.520830405+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.520830405+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. 
>>>>> 2022-07-19T08:47:00.521517271+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : parsing [/etc/haproxy/haproxy.cfg:281] : 'bind >>>>> fd00:fd00:fd00:9900::81:13808' : >>>>> 2022-07-19T08:47:00.521517271+05:30 stderr F unable to load >>>>> default 1024 bits DH parameter for certificate >>>>> '/etc/pki/tls/private/overcloud_endpoint.pem'. >>>>> 2022-07-19T08:47:00.521517271+05:30 stderr F , SSL library >>>>> will use an automatically generated DH parameter. >>>>> 2022-07-19T08:47:00.524065508+05:30 stderr F [WARNING] >>>>> 199/084700 (7) : Setting tune.ssl.default-dh-param to 1024 by default, if >>>>> your workload permits it you should set it to at least 2048. Please set a >>>>> value >= 1024 to make this warning disappear. >>>>> - pcs status also show that proxy is down for the controller >>>>> with VIP: >>>>> - Failed Resource Actions: >>>>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-2 >>>>> 'error' (1): call=139, status='complete', exitreason='podman failed to >>>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 15:14:34 2022', >>>>> queued=0ms, exec=1222ms >>>>> * haproxy-bundle-podman-1_start_0 on overcloud-controller-1 >>>>> 'error' (1): call=191, status='complete', exitreason='podman failed to >>>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:17 2022', >>>>> queued=0ms, exec=1171ms >>>>> * haproxy-bundle-podman-2_start_0 on overcloud-controller-1 >>>>> 'error' (1): call=193, status='complete', exitreason='podman failed to >>>>> launch container (rc: 1)', last-rc-change='Mon Jul 18 23:54:20 2022', >>>>> queued=0ms, exec=1256ms >>>>> >>>>> do let me know in case we need anything more around it. >>>>> thanks once again for the support. >>>>> -Lokendra >>>>> >>>>> On Tue, Jul 19, 2022 at 11:07 AM Brendan Shephard >>>>> wrote: >>>>> >>>>>> Hey, >>>>>> >>>>>> Doesn't look like there is anything wrong with the certificate there. >>>>>> You would be getting a TLS error if that was the problem. 
>>>>>> >>>>>> What does your clouds.yaml file look like now? What happens if you >>>>>> run this command from the Undercloud node: >>>>>> $ OS_CLOUD=overcloud openstack endpoint list >>>>>> >>>>>> Do you get the same error? >>>>>> >>>>>> Brendan Shephard >>>>>> Software Engineer >>>>>> >>>>>> Red Hat APAC >>>>>> 193 N Quay >>>>>> Brisbane City QLD 4000 >>>>>> >>>>>> >>>>>> On Tue, Jul 19, 2022 at 1:28 PM Lokendra Rathour < >>>>>> lokendrarathour at gmail.com> wrote: >>>>>> >>>>>>> Hi Swogat and Vikarna, >>>>>>> We have tried adding the DNS entry for the overcloud domain. We are >>>>>>> getting the same error: >>>>>>> >>>>>>> 2022-07-19 00:09:41.491498 | 525400ae-089b-c832-8e34-00000000704f | >>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>> undercloud | 0:11:18.785769 | 2.16s >>>>>>> 2022-07-19 00:09:41.507319 | 525400ae-089b-c832-8e34-000000007050 | >>>>>>> TASK | Create identity internal endpoint >>>>>>> 2022-07-19 00:09:43.778910 | 525400ae-089b-c832-8e34-000000007050 | >>>>>>> FATAL | Create identity internal endpoint | undercloud | >>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>> request you have made requires authentication.", "response": >>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>> services: Client Error for url: >>>>>>> https://overcloud-hsc.com:13000/v3/services, The request you have >>>>>>> made requires authentication."} >>>>>>> 2022-07-19 00:09:43.780306 | 525400ae-089b-c832-8e34-000000007050 | >>>>>>> TIMING | tripleo_keystone_resources : Create identity internal endpoint >>>>>>> | undercloud | 0:11:21.074605 | 2. 
>>>>>>> >>>>>>> >>>>>>> Certificate configs: >>>>>>> >>>>>>> [stack at undercloud oc-domain-name]$ cat server.csr.cnf >>>>>>> [req] >>>>>>> default_bits = 2048 >>>>>>> prompt = no >>>>>>> default_md = sha256 >>>>>>> distinguished_name = dn >>>>>>> [dn] >>>>>>> C=IN >>>>>>> ST=UTTAR PRADESH >>>>>>> L=NOIDA >>>>>>> O=HSC >>>>>>> OU=HSC >>>>>>> emailAddress=demo at demo.com >>>>>>> CN=overcloud-hsc.com >>>>>>> [stack at undercloud oc-domain-name]$ cat v3.ext >>>>>>> authorityKeyIdentifier=keyid,issuer >>>>>>> basicConstraints=CA:FALSE >>>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>>> dataEncipherment >>>>>>> subjectAltName = @alt_names >>>>>>> [alt_names] >>>>>>> DNS.1=overcloud-hsc.com >>>>>>> [stack at undercloud oc-domain-name]$ >>>>>>> >>>>>>> The difference we see from others is that we are using self-signed >>>>>>> certificates. >>>>>>> >>>>>>> Please let me know in case we need to check something else. Somehow >>>>>>> this issue remains stuck. >>>>>>> >>>>>>> >>>>>>> On Fri, Jul 15, 2022 at 2:17 AM Swogat Pradhan < >>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>> >>>>>>>> I was facing a similar kind of issue. >>>>>>>> https://bugzilla.redhat.com/show_bug.cgi?id=2089442 >>>>>>>> Here is the solution that helped me fix it. >>>>>>>> Also make sure the CN that you will use is reachable from the >>>>>>>> undercloud; the script should (maybe) take care of it. >>>>>>>> >>>>>>>> Also please follow Mr. Tathe's mail to add the CN first. >>>>>>>> >>>>>>>> With regards >>>>>>>> Swogat Pradhan >>>>>>>> >>>>>>>> On Thu, Jul 14, 2022 at 8:49 AM Vikarna Tathe < >>>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>>> >>>>>>>>> Hi Lokendra, >>>>>>>>> >>>>>>>>> The CN field is missing. Can you add that and generate the >>>>>>>>> certificate again? >>>>>>>>> >>>>>>>>> CN=ipaddress >>>>>>>>> >>>>>>>>> Also add dns.1=ipaddress under alt_names as a precaution. 
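[Editor's note] Vikarna's suggestion (add the CN, regenerate) can be tried end-to-end in isolation. Below is a minimal, hypothetical sketch with throwaway file names and a reduced DN, not the thread's exact configs, assuming a stock openssl binary:

```shell
# Hypothetical sketch: self-sign a cert the same way the thread does,
# but with the CN present, then confirm both CN and SAN made it in.
cat > server.csr.cnf <<'EOF'
[req]
default_bits       = 2048
prompt             = no
default_md         = sha256
distinguished_name = dn
[dn]
C  = IN
O  = HSC
CN = overcloud-hsc.com
EOF

cat > v3.ext <<'EOF'
basicConstraints = CA:FALSE
keyUsage         = digitalSignature, keyEncipherment
subjectAltName   = @alt_names
[alt_names]
DNS.1 = overcloud-hsc.com
EOF

# Key + CSR from the request config, then self-sign with the v3 extensions:
openssl req -new -newkey rsa:2048 -nodes \
    -keyout server.key -out server.csr -config server.csr.cnf
openssl x509 -req -in server.csr -signkey server.key -days 365 \
    -sha256 -extfile v3.ext -out server.crt

# Both the subject CN and the SAN should now mention overcloud-hsc.com:
openssl x509 -in server.crt -noout -subject
openssl x509 -in server.crt -noout -text | grep -A1 'Subject Alternative Name'
```

If the SAN is missing from the final cert, the -extfile step was likely skipped: `openssl x509 -req` does not copy extensions from the CSR on its own.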
>>>>>>>>> >>>>>>>>> Vikarna >>>>>>>>> >>>>>>>>> On Wed, 13 Jul, 2022, 23:02 Lokendra Rathour, < >>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>> >>>>>>>>>> Hi Vikarna, >>>>>>>>>> Thanks for the inputs. >>>>>>>>>> I am not able to access any tabs in the GUI. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> To re-state, we are failing at the time of deployment at step 4: >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> PLAY [External deployment step 4] >>>>>>>>>> ********************************************** >>>>>>>>>> 2022-07-13 21:35:22.505148 | 525400ae-089b-870a-fab6-0000000000d7 >>>>>>>>>> | TASK | External deployment step 4 >>>>>>>>>> 2022-07-13 21:35:22.534899 | 525400ae-089b-870a-fab6-0000000000d7 >>>>>>>>>> | OK | External deployment step 4 | undercloud -> localhost | >>>>>>>>>> result={ >>>>>>>>>> "changed": false, >>>>>>>>>> "msg": "Use --start-at-task 'External deployment step 4' to >>>>>>>>>> resume from this task" >>>>>>>>>> } >>>>>>>>>> [WARNING]: ('undercloud -> localhost', >>>>>>>>>> '525400ae-089b-870a-fab6-0000000000d7') >>>>>>>>>> missing from stats >>>>>>>>>> 2022-07-13 21:35:22.591268 | 525400ae-089b-870a-fab6-0000000000d8 >>>>>>>>>> | TIMING | include_tasks | undercloud | 0:11:21.683453 | 0.04s >>>>>>>>>> 2022-07-13 21:35:22.605901 | f29c4b58-75a5-4993-97b8-3921a49d79d7 >>>>>>>>>> | INCLUDED | >>>>>>>>>> /home/stack/overcloud-deploy/overcloud/config-download/overcloud/external_deploy_steps_tasks_step4.yaml >>>>>>>>>> | undercloud >>>>>>>>>> 2022-07-13 21:35:22.627112 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | TASK | Clean up legacy Cinder keystone catalog entries >>>>>>>>>> 2022-07-13 21:35:25.110635 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | OK | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | item={'service_name': 'cinderv2', 'service_type': 'volumev2'} >>>>>>>>>> 2022-07-13 21:35:25.112368 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | 
0:11:24.204562 | 2.48s >>>>>>>>>> 2022-07-13 21:35:27.029270 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | OK | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | item={'service_name': 'cinderv3', 'service_type': 'volume'} >>>>>>>>>> 2022-07-13 21:35:27.030383 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | 0:11:26.122584 | 4.40s >>>>>>>>>> 2022-07-13 21:35:27.032091 | 525400ae-089b-870a-fab6-000000007239 >>>>>>>>>> | TIMING | Clean up legacy Cinder keystone catalog entries | undercloud >>>>>>>>>> | 0:11:26.124296 | 4.40s >>>>>>>>>> 2022-07-13 21:35:27.047913 | 525400ae-089b-870a-fab6-00000000723c >>>>>>>>>> | TASK | Manage Keystone resources for OpenStack services >>>>>>>>>> 2022-07-13 21:35:27.077672 | 525400ae-089b-870a-fab6-00000000723c >>>>>>>>>> | TIMING | Manage Keystone resources for OpenStack services | >>>>>>>>>> undercloud | 0:11:26.169842 | 0.03s >>>>>>>>>> 2022-07-13 21:35:27.120270 | 525400ae-089b-870a-fab6-00000000726b >>>>>>>>>> | TASK | Gather variables for each operating system >>>>>>>>>> 2022-07-13 21:35:27.161225 | 525400ae-089b-870a-fab6-00000000726b >>>>>>>>>> | TIMING | tripleo_keystone_resources : Gather variables for each >>>>>>>>>> operating system | undercloud | 0:11:26.253383 | 0.04s >>>>>>>>>> 2022-07-13 21:35:27.177798 | 525400ae-089b-870a-fab6-00000000726c >>>>>>>>>> | TASK | Create Keystone Admin resources >>>>>>>>>> 2022-07-13 21:35:27.207430 | 525400ae-089b-870a-fab6-00000000726c >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create Keystone Admin resources >>>>>>>>>> | undercloud | 0:11:26.299608 | 0.03s >>>>>>>>>> 2022-07-13 21:35:27.230985 | 46e05e2d-2e9c-467b-ac4f-c5f0bc7286b3 >>>>>>>>>> | INCLUDED | >>>>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/admin.yml | >>>>>>>>>> undercloud >>>>>>>>>> 2022-07-13 21:35:27.256076 | 525400ae-089b-870a-fab6-0000000072ad >>>>>>>>>> | TASK | Create 
default domain >>>>>>>>>> 2022-07-13 21:35:29.343399 | 525400ae-089b-870a-fab6-0000000072ad >>>>>>>>>> | OK | Create default domain | undercloud >>>>>>>>>> 2022-07-13 21:35:29.345172 | 525400ae-089b-870a-fab6-0000000072ad >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create default domain | >>>>>>>>>> undercloud | 0:11:28.437360 | 2.09s >>>>>>>>>> 2022-07-13 21:35:29.361643 | 525400ae-089b-870a-fab6-0000000072ae >>>>>>>>>> | TASK | Create admin and service projects >>>>>>>>>> 2022-07-13 21:35:29.391295 | 525400ae-089b-870a-fab6-0000000072ae >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin and service >>>>>>>>>> projects | undercloud | 0:11:28.483468 | 0.03s >>>>>>>>>> 2022-07-13 21:35:29.402539 | af7a4a76-4998-4679-ac6f-58acc0867554 >>>>>>>>>> | INCLUDED | >>>>>>>>>> /usr/share/ansible/roles/tripleo_keystone_resources/tasks/projects.yml | >>>>>>>>>> undercloud >>>>>>>>>> 2022-07-13 21:35:29.428918 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | TASK | Async creation of Keystone project >>>>>>>>>> 2022-07-13 21:35:30.144295 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | CHANGED | Async creation of Keystone project | undercloud | item=admin >>>>>>>>>> 2022-07-13 21:35:30.145884 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>>> project | undercloud | 0:11:29.238078 | 0.72s >>>>>>>>>> 2022-07-13 21:35:30.493458 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | CHANGED | Async creation of Keystone project | undercloud | >>>>>>>>>> item=service >>>>>>>>>> 2022-07-13 21:35:30.494386 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>>> project | undercloud | 0:11:29.586587 | 1.06s >>>>>>>>>> 2022-07-13 21:35:30.495729 | 525400ae-089b-870a-fab6-000000007304 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Async creation of Keystone >>>>>>>>>> project | undercloud | 0:11:29.587916 | 1.07s 
>>>>>>>>>> 2022-07-13 21:35:30.511748 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | TASK | Check Keystone project status >>>>>>>>>> 2022-07-13 21:35:30.908189 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | WAITING | Check Keystone project status | undercloud | 30 retries left >>>>>>>>>> 2022-07-13 21:35:36.166541 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | OK | Check Keystone project status | undercloud | item=admin >>>>>>>>>> 2022-07-13 21:35:36.168506 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>>> undercloud | 0:11:35.260666 | 5.66s >>>>>>>>>> 2022-07-13 21:35:36.400914 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | OK | Check Keystone project status | undercloud | item=service >>>>>>>>>> 2022-07-13 21:35:36.402534 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>>> undercloud | 0:11:35.494729 | 5.89s >>>>>>>>>> 2022-07-13 21:35:36.406576 | 525400ae-089b-870a-fab6-000000007306 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Check Keystone project status | >>>>>>>>>> undercloud | 0:11:35.498771 | 5.89s >>>>>>>>>> 2022-07-13 21:35:36.427719 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>>> | TASK | Create admin role >>>>>>>>>> 2022-07-13 21:35:38.632266 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>>> | OK | Create admin role | undercloud >>>>>>>>>> 2022-07-13 21:35:38.633754 | 525400ae-089b-870a-fab6-0000000072af >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin role | undercloud >>>>>>>>>> | 0:11:37.725949 | 2.20s >>>>>>>>>> 2022-07-13 21:35:38.649721 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>>> | TASK | Create _member_ role >>>>>>>>>> 2022-07-13 21:35:38.689773 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>>> | SKIPPED | Create _member_ role | undercloud >>>>>>>>>> 2022-07-13 21:35:38.691172 | 525400ae-089b-870a-fab6-0000000072b0 >>>>>>>>>> | TIMING | 
tripleo_keystone_resources : Create _member_ role | >>>>>>>>>> undercloud | 0:11:37.783369 | 0.04s >>>>>>>>>> 2022-07-13 21:35:38.706920 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>>> | TASK | Create admin user >>>>>>>>>> 2022-07-13 21:35:42.051623 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>>> | CHANGED | Create admin user | undercloud >>>>>>>>>> 2022-07-13 21:35:42.053285 | 525400ae-089b-870a-fab6-0000000072b1 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create admin user | undercloud >>>>>>>>>> | 0:11:41.145472 | 3.34s >>>>>>>>>> 2022-07-13 21:35:42.069370 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>>> | TASK | Assign admin role to admin project for admin user >>>>>>>>>> 2022-07-13 21:35:45.194891 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>>> | OK | Assign admin role to admin project for admin user | >>>>>>>>>> undercloud >>>>>>>>>> 2022-07-13 21:35:45.196669 | 525400ae-089b-870a-fab6-0000000072b2 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Assign admin role to admin >>>>>>>>>> project for admin user | undercloud | 0:11:44.288848 | 3.13s >>>>>>>>>> 2022-07-13 21:35:45.212674 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>>> | TASK | Assign _member_ role to admin project for admin user >>>>>>>>>> 2022-07-13 21:35:45.252884 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>>> | SKIPPED | Assign _member_ role to admin project for admin user | >>>>>>>>>> undercloud >>>>>>>>>> 2022-07-13 21:35:45.254283 | 525400ae-089b-870a-fab6-0000000072b3 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Assign _member_ role to admin >>>>>>>>>> project for admin user | undercloud | 0:11:44.346479 | 0.04s >>>>>>>>>> 2022-07-13 21:35:45.270310 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>>> | TASK | Create identity service >>>>>>>>>> 2022-07-13 21:35:46.928715 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>>> | OK | Create identity service | undercloud >>>>>>>>>> 2022-07-13 21:35:46.930167 | 525400ae-089b-870a-fab6-0000000072b4 >>>>>>>>>> | TIMING | 
tripleo_keystone_resources : Create identity service | >>>>>>>>>> undercloud | 0:11:46.022362 | 1.66s >>>>>>>>>> 2022-07-13 21:35:46.946797 | 525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>>> | TASK | Create identity public endpoint >>>>>>>>>> 2022-07-13 21:35:49.139298 | 525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>>> | OK | Create identity public endpoint | undercloud >>>>>>>>>> 2022-07-13 21:35:49.141158 | 525400ae-089b-870a-fab6-0000000072b5 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create identity public endpoint >>>>>>>>>> | undercloud | 0:11:48.233349 | 2.19s >>>>>>>>>> 2022-07-13 21:35:49.157768 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>>> | TASK | Create identity internal endpoint >>>>>>>>>> 2022-07-13 21:35:51.566826 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>>> | FATAL | Create identity internal endpoint | undercloud | >>>>>>>>>> error={"changed": false, "extra_data": {"data": null, "details": "The >>>>>>>>>> request you have made requires authentication.", "response": >>>>>>>>>> "{\"error\":{\"code\":401,\"message\":\"The request you have made requires >>>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>>> services: Client Error for url: >>>>>>>>>> https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The request >>>>>>>>>> you have made requires authentication."} >>>>>>>>>> 2022-07-13 21:35:51.568473 | 525400ae-089b-870a-fab6-0000000072b6 >>>>>>>>>> | TIMING | tripleo_keystone_resources : Create identity internal >>>>>>>>>> endpoint | undercloud | 0:11:50.660654 | 2.41s >>>>>>>>>> >>>>>>>>>> PLAY RECAP >>>>>>>>>> ********************************************************************* >>>>>>>>>> localhost : ok=1 changed=0 unreachable=0 >>>>>>>>>> failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>> overcloud-controller-0 : ok=437 changed=103 unreachable=0 >>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>> overcloud-controller-1 : ok=435 changed=101 unreachable=0 >>>>>>>>>> failed=0 
skipped=214 rescued=0 ignored=0 >>>>>>>>>> overcloud-controller-2 : ok=432 changed=101 unreachable=0 >>>>>>>>>> failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=82 unreachable=0 >>>>>>>>>> failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>> undercloud : ok=39 changed=7 unreachable=0 >>>>>>>>>> failed=1 skipped=6 rescued=0 ignored=0 >>>>>>>>>> >>>>>>>>>> Also : >>>>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat server.csr.cnf >>>>>>>>>> [req] >>>>>>>>>> default_bits = 2048 >>>>>>>>>> prompt = no >>>>>>>>>> default_md = sha256 >>>>>>>>>> distinguished_name = dn >>>>>>>>>> [dn] >>>>>>>>>> C=IN >>>>>>>>>> ST=UTTAR PRADESH >>>>>>>>>> L=NOIDA >>>>>>>>>> O=HSC >>>>>>>>>> OU=HSC >>>>>>>>>> emailAddress=demo at demo.com >>>>>>>>>> >>>>>>>>>> v3.ext: >>>>>>>>>> (undercloud) [stack at undercloud oc-cert]$ cat v3.ext >>>>>>>>>> authorityKeyIdentifier=keyid,issuer >>>>>>>>>> basicConstraints=CA:FALSE >>>>>>>>>> keyUsage = digitalSignature, nonRepudiation, keyEncipherment, >>>>>>>>>> dataEncipherment >>>>>>>>>> subjectAltName = @alt_names >>>>>>>>>> [alt_names] >>>>>>>>>> IP.1=fd00:fd00:fd00:9900::81 >>>>>>>>>> >>>>>>>>>> Using these files we create other certificates. >>>>>>>>>> Please check and let me know in case we need anything else. >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> On Wed, Jul 13, 2022 at 10:00 PM Vikarna Tathe < >>>>>>>>>> vikarnatathe at gmail.com> wrote: >>>>>>>>>> >>>>>>>>>>> Hi Lokendra, >>>>>>>>>>> >>>>>>>>>>> Are you able to access all the tabs in the OpenStack dashboard >>>>>>>>>>> without any error? If not, please retry generating the certificate. Also, >>>>>>>>>>> share the openssl.cnf or server.cnf. >>>>>>>>>>> >>>>>>>>>>> On Wed, 13 Jul 2022 at 18:18, Lokendra Rathour < >>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>> >>>>>>>>>>>> Hi Team, >>>>>>>>>>>> Any input on this case raised. 
>>>>>>>>>>>> >>>>>>>>>>>> Thanks, >>>>>>>>>>>> Lokendra >>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> On Tue, Jul 12, 2022 at 10:18 PM Lokendra Rathour < >>>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>>> >>>>>>>>>>>>> Hi Shephard/Swogat, >>>>>>>>>>>>> I tried changing the setting as suggested and it looks like it >>>>>>>>>>>>> has failed at step 4 with error: >>>>>>>>>>>>> >>>>>>>>>>>>> :31:32.169420 | 525400ae-089b-fb79-67ac-0000000072ce | >>>>>>>>>>>>> TIMING | tripleo_keystone_resources : Create identity public endpoint | >>>>>>>>>>>>> undercloud | 0:24:47.736198 | 2.21s >>>>>>>>>>>>> 2022-07-12 21:31:32.185594 | >>>>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | TASK | Create identity >>>>>>>>>>>>> internal endpoint >>>>>>>>>>>>> 2022-07-12 21:31:34.468996 | >>>>>>>>>>>>> 525400ae-089b-fb79-67ac-0000000072cf | FATAL | Create identity >>>>>>>>>>>>> internal endpoint | undercloud | error={"changed": false, "extra_data": >>>>>>>>>>>>> {"data": null, "details": "The request you have made requires >>>>>>>>>>>>> authentication.", "response": "{\"error\":{\"code\":401,\"message\":\"The >>>>>>>>>>>>> request you have made requires >>>>>>>>>>>>> authentication.\",\"title\":\"Unauthorized\"}}\n"}, "msg": "Failed to list >>>>>>>>>>>>> services: Client Error for url: >>>>>>>>>>>>> https://[fd00:fd00:fd00:9900::81]:13000/v3/services, The >>>>>>>>>>>>> request you have made requires authentication."} >>>>>>>>>>>>> 2022-07-12 21:31:34.470415 | 525400ae-089b-fb79-67ac-000000 >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> Checking the endpoint list further: >>>>>>>>>>>>> I see only one endpoint for keystone is getting created. 
>>>>>>>>>>>>> >>>>>>>>>>>>> DeprecationWarning >>>>>>>>>>>>> >>>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>>> | ID | Region | Service Name >>>>>>>>>>>>> | Service Type | Enabled | Interface | URL >>>>>>>>>>>>> | >>>>>>>>>>>>> >>>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>>> | 4378dc0a4d8847ee87771699fc7b995e | regionOne | keystone >>>>>>>>>>>>> | identity | True | admin | >>>>>>>>>>>>> http://30.30.30.173:35357 | >>>>>>>>>>>>> | 67c829e126944431a06ed0c2b97a295f | regionOne | keystone >>>>>>>>>>>>> | identity | True | internal | >>>>>>>>>>>>> http://[fd00:fd00:fd00:2000::326]:5000 | >>>>>>>>>>>>> | 8a9a3de4993c4ff7903caf95b8ae40fa | regionOne | keystone >>>>>>>>>>>>> | identity | True | public | >>>>>>>>>>>>> https://[fd00:fd00:fd00:9900::81]:13000 | >>>>>>>>>>>>> >>>>>>>>>>>>> +----------------------------------+-----------+--------------+--------------+---------+-----------+-----------------------------------------+ >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> It looks like something related to SSL. We have also >>>>>>>>>>>>> verified that the GUI login screen shows that the certificates are applied. >>>>>>>>>>>>> We are exploring the logs further; meanwhile, any suggestions or known >>>>>>>>>>>>> observations would be of great help. >>>>>>>>>>>>> Thanks again for the support. 
>>>>>>>>>>>>> >>>>>>>>>>>>> Best Regards, >>>>>>>>>>>>> Lokendra >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> On Sat, Jul 9, 2022 at 11:24 AM Swogat Pradhan < >>>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>>>> >>>>>>>>>>>>>> I had faced a similar kind of issue: for an IP-based setup you >>>>>>>>>>>>>> need to specify the domain name as the IP that you are going to use. This >>>>>>>>>>>>>> error is showing up because the SSL cert is IP-based, but the FQDN seems to be >>>>>>>>>>>>>> undercloud.com or overcloud.example.com. >>>>>>>>>>>>>> I think for the undercloud you can change the undercloud.conf. >>>>>>>>>>>>>> >>>>>>>>>>>>>> And will it work if we specify the CloudDomain parameter as the >>>>>>>>>>>>>> IP address for the overcloud? Because it seems he has not specified the >>>>>>>>>>>>>> CloudDomain parameter, and overcloud.example.com is the >>>>>>>>>>>>>> default domain for the overcloud. >>>>>>>>>>>>>> >>>>>>>>>>>>>> On Fri, 8 Jul 2022, 6:01 pm Swogat Pradhan, < >>>>>>>>>>>>>> swogatpradhan22 at gmail.com> wrote: >>>>>>>>>>>>>> >>>>>>>>>>>>>>> What is the domain name you have specified in the >>>>>>>>>>>>>>> undercloud.conf file? >>>>>>>>>>>>>>> And what is the FQDN used for the generation of the SSL >>>>>>>>>>>>>>> cert? 
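[Editor's note] Swogat's point about IP-based certificates can be made concrete. TLS clients are generally expected (per RFC 2818/6125) to match an IP literal against iPAddress-type subjectAltName entries, so the address should appear as an `IP.1` SAN entry and not only as the CN; whether a given client accepts a CN-only cert depends on its hostname-matching implementation. A hypothetical sketch with throwaway file names, reusing the IPv6 address from this thread:

```shell
# Hypothetical sketch: a self-signed cert whose identity is an IP literal.
# The address goes into the SAN as IP.1 (and, for older clients, also CN).
cat > ip-san.ext <<'EOF'
basicConstraints = CA:FALSE
keyUsage         = digitalSignature, keyEncipherment
subjectAltName   = @alt_names
[alt_names]
IP.1 = fd00:fd00:fd00:9900::2ef
EOF

openssl req -new -newkey rsa:2048 -nodes \
    -subj '/CN=fd00:fd00:fd00:9900::2ef' \
    -keyout ip.key -out ip.csr
openssl x509 -req -in ip.csr -signkey ip.key -days 365 \
    -sha256 -extfile ip-san.ext -out ip.crt

# The SAN should now carry an "IP Address:" entry for the IPv6 literal:
openssl x509 -in ip.crt -noout -text | grep -A1 'Subject Alternative Name'
```

This is the shape of certificate that IP-based endpoints such as https://[fd00:fd00:fd00:9900::2ef]:13000 generally need to pass the client-side hostname check quoted in the traceback below.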
>>>>>>>>>>>>>>> >>>>>>>>>>>>>>> On Fri, 8 Jul 2022, 5:38 pm Lokendra Rathour, < >>>>>>>>>>>>>>> lokendrarathour at gmail.com> wrote: >>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Hi Team, >>>>>>>>>>>>>>>> We were trying to install overcloud with SSL enabled for >>>>>>>>>>>>>>>> which the UC is installed, but OC install is getting failed at step 4: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> ERROR >>>>>>>>>>>>>>>> :nectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): >>>>>>>>>>>>>>>> Max retries exceeded with url: / (Caused by >>>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n", "module_stdout": "", >>>>>>>>>>>>>>>> "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>>>> 2022-07-08 17:03:23.606739 | >>>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | FATAL | Clean up legacy Cinder >>>>>>>>>>>>>>>> keystone catalog entries | undercloud | item={'service_name': 'cinderv3', >>>>>>>>>>>>>>>> 'service_type': 'volume'} | error={"ansible_index_var": >>>>>>>>>>>>>>>> "cinder_api_service", "ansible_loop_var": "item", "changed": false, >>>>>>>>>>>>>>>> "cinder_api_service": 1, "item": {"service_name": "cinderv3", >>>>>>>>>>>>>>>> "service_type": "volume"}, "module_stderr": "Failed to discover available >>>>>>>>>>>>>>>> identity versions when contacting >>>>>>>>>>>>>>>> https://[fd00:fd00:fd00:9900::2ef]:13000. 
Attempting to >>>>>>>>>>>>>>>> parse version from URL.\nTraceback (most recent call last):\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 600, >>>>>>>>>>>>>>>> in urlopen\n chunked=chunked)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 343, >>>>>>>>>>>>>>>> in _make_request\n self._validate_conn(conn)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 839, >>>>>>>>>>>>>>>> in _validate_conn\n conn.connect()\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 378, in >>>>>>>>>>>>>>>> connect\n _match_hostname(cert, self.assert_hostname or >>>>>>>>>>>>>>>> server_hostname)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connection.py\", line 388, in >>>>>>>>>>>>>>>> _match_hostname\n match_hostname(cert, asserted_hostname)\n File >>>>>>>>>>>>>>>> \"/usr/lib64/python3.6/ssl.py\", line 291, in match_hostname\n % >>>>>>>>>>>>>>>> (hostname, dnsnames[0]))\nssl.CertificateError: hostname >>>>>>>>>>>>>>>> 'fd00:fd00:fd00:9900::2ef' doesn't match 'undercloud.com'\n\nDuring >>>>>>>>>>>>>>>> handling of the above exception, another exception occurred:\n\nTraceback >>>>>>>>>>>>>>>> (most recent call last):\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 449, in >>>>>>>>>>>>>>>> send\n timeout=timeout\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/connectionpool.py\", line 638, >>>>>>>>>>>>>>>> in urlopen\n _stacktrace=sys.exc_info()[2])\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/urllib3/util/retry.py\", line 399, in >>>>>>>>>>>>>>>> increment\n raise MaxRetryError(_pool, url, error or >>>>>>>>>>>>>>>> ResponseError(cause))\nurllib3.exceptions.MaxRetryError: >>>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>>> 
SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1022, >>>>>>>>>>>>>>>> in _send_request\n resp = self.session.request(method, url, **kwargs)\n >>>>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 533, >>>>>>>>>>>>>>>> in request\n resp = self.send(prep, **send_kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/sessions.py\", line 646, in >>>>>>>>>>>>>>>> send\n r = adapter.send(request, **kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/requests/adapters.py\", line 514, in >>>>>>>>>>>>>>>> send\n raise SSLError(e, request=request)\nrequests.exceptions.SSLError: >>>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>>> last):\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>>> line 138, in _do_create_plugin\n authenticated=False)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>>> 610, in get_discovery\n authenticated=authenticated)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 1452, >>>>>>>>>>>>>>>> in get_discovery\n disc = Discover(session, url, >>>>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 536, >>>>>>>>>>>>>>>> in __init__\n authenticated=authenticated)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/discover.py\", line 102, >>>>>>>>>>>>>>>> in get_version_data\n resp = session.get(url, headers=headers, >>>>>>>>>>>>>>>> authenticated=authenticated)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1141, >>>>>>>>>>>>>>>> in get\n return self.request(url, 'GET', **kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 931, in >>>>>>>>>>>>>>>> request\n resp = send(**kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1026, >>>>>>>>>>>>>>>> in _send_request\n raise >>>>>>>>>>>>>>>> exceptions.SSLError(msg)\nkeystoneauth1.exceptions.connection.SSLError: SSL >>>>>>>>>>>>>>>> exception connecting to >>>>>>>>>>>>>>>> https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>>> match 'undercloud.com'\",),))\n\nDuring handling of the >>>>>>>>>>>>>>>> above exception, another exception occurred:\n\nTraceback (most recent call >>>>>>>>>>>>>>>> last):\n File \"\", line 102, in \n File \"\", line >>>>>>>>>>>>>>>> 94, in _ansiballz_main\n File \"\", line 40, in invoke_module\n >>>>>>>>>>>>>>>> File \"/usr/lib64/python3.6/runpy.py\", line 205, in run_module\n >>>>>>>>>>>>>>>> return _run_module_code(code, init_globals, run_name, mod_spec)\n File >>>>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 96, in _run_module_code\n >>>>>>>>>>>>>>>> mod_name, mod_spec, pkg_name, script_name)\n File >>>>>>>>>>>>>>>> \"/usr/lib64/python3.6/runpy.py\", line 85, in _run_code\n exec(code, >>>>>>>>>>>>>>>> run_globals)\n File 
>>>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>>> line 185, in \n File >>>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>>> line 181, in main\n File >>>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/module_utils/openstack.py\", >>>>>>>>>>>>>>>> line 407, in __call__\n File >>>>>>>>>>>>>>>> \"/tmp/ansible_openstack.cloud.catalog_service_payload_7ikyjf7t/ansible_openstack.cloud.catalog_service_payload.zip/ansible_collections/openstack/cloud/plugins/modules/catalog_service.py\", >>>>>>>>>>>>>>>> line 141, in run\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>>>> 517, in search_services\n services = self.list_services()\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", line >>>>>>>>>>>>>>>> 492, in list_services\n if self._is_client_version('identity', 2):\n >>>>>>>>>>>>>>>> File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>>>> line 460, in _is_client_version\n client = getattr(self, client_name)\n >>>>>>>>>>>>>>>> File \"/usr/lib/python3.6/site-packages/openstack/cloud/_identity.py\", >>>>>>>>>>>>>>>> line 32, in _identity_client\n 'identity', min_version=2, >>>>>>>>>>>>>>>> max_version='3.latest')\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/openstack/cloud/openstackcloud.py\", >>>>>>>>>>>>>>>> line 407, in _get_versioned_client\n if adapter.get_endpoint():\n File >>>>>>>>>>>>>>>> 
\"/usr/lib/python3.6/site-packages/keystoneauth1/adapter.py\", line 291, in >>>>>>>>>>>>>>>> get_endpoint\n return self.session.get_endpoint(auth or self.auth, >>>>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/session.py\", line 1243, >>>>>>>>>>>>>>>> in get_endpoint\n return auth.get_endpoint(self, **kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>>> 380, in get_endpoint\n allow_version_hack=allow_version_hack, >>>>>>>>>>>>>>>> **kwargs)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>>> 271, in get_endpoint_data\n service_catalog = >>>>>>>>>>>>>>>> self.get_access(session).service_catalog\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/base.py\", line >>>>>>>>>>>>>>>> 134, in get_access\n self.auth_ref = self.get_auth_ref(session)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>>> line 206, in get_auth_ref\n self._plugin = >>>>>>>>>>>>>>>> self._do_create_plugin(session)\n File >>>>>>>>>>>>>>>> \"/usr/lib/python3.6/site-packages/keystoneauth1/identity/generic/base.py\", >>>>>>>>>>>>>>>> line 161, in _do_create_plugin\n 'auth_url is correct. >>>>>>>>>>>>>>>> %s' % e)\nkeystoneauth1.exceptions.discovery.DiscoveryFailure: Could not >>>>>>>>>>>>>>>> find versioned identity endpoints when attempting to authenticate. Please >>>>>>>>>>>>>>>> check that your auth_url is correct. 
SSL exception connecting to >>>>>>>>>>>>>>>> https://[fd00:fd00:fd00:9900::2ef]:13000: >>>>>>>>>>>>>>>> HTTPSConnectionPool(host='fd00:fd00:fd00:9900::2ef', port=13000): Max >>>>>>>>>>>>>>>> retries exceeded with url: / (Caused by >>>>>>>>>>>>>>>> SSLError(CertificateError(\"hostname 'fd00:fd00:fd00:9900::2ef' doesn't >>>>>>>>>>>>>>>> match 'overcloud.example.com'\",),))\n", "module_stdout": >>>>>>>>>>>>>>>> "", "msg": "MODULE FAILURE\nSee stdout/stderr for the exact error", "rc": 1} >>>>>>>>>>>>>>>> 2022-07-08 17:03:23.609354 | >>>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.271914 | 2.47s >>>>>>>>>>>>>>>> 2022-07-08 17:03:23.611094 | >>>>>>>>>>>>>>>> 5254009a-6a3c-adb1-f96f-0000000072ac | TIMING | Clean up legacy Cinder >>>>>>>>>>>>>>>> keystone catalog entries | undercloud | 0:11:01.273659 | 2.47s >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> PLAY RECAP >>>>>>>>>>>>>>>> ********************************************************************* >>>>>>>>>>>>>>>> localhost : ok=0 changed=0 >>>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=2 rescued=0 ignored=0 >>>>>>>>>>>>>>>> overcloud-controller-0 : ok=437 changed=104 >>>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>>> overcloud-controller-1 : ok=436 changed=101 >>>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>>> overcloud-controller-2 : ok=431 changed=101 >>>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=214 rescued=0 ignored=0 >>>>>>>>>>>>>>>> overcloud-novacompute-0 : ok=345 changed=83 >>>>>>>>>>>>>>>> unreachable=0 failed=0 skipped=198 rescued=0 ignored=0 >>>>>>>>>>>>>>>> undercloud : ok=28 changed=7 >>>>>>>>>>>>>>>> unreachable=0 failed=1 skipped=3 rescued=0 ignored=0 >>>>>>>>>>>>>>>> 2022-07-08 17:03:23.647270 | >>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Summary Information >>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>>>> 
2022-07-08 17:03:23.647907 | >>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Total Tasks: 1373 >>>>>>>>>>>>>>>> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> in the deploy.sh: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> openstack overcloud deploy --templates \ >>>>>>>>>>>>>>>> -r /home/stack/templates/roles_data.yaml \ >>>>>>>>>>>>>>>> --networks-file >>>>>>>>>>>>>>>> /home/stack/templates/custom_network_data.yaml \ >>>>>>>>>>>>>>>> --vip-file /home/stack/templates/custom_vip_data.yaml \ >>>>>>>>>>>>>>>> --baremetal-deployment >>>>>>>>>>>>>>>> /home/stack/templates/overcloud-baremetal-deploy.yaml \ >>>>>>>>>>>>>>>> --network-config \ >>>>>>>>>>>>>>>> -e /home/stack/templates/environment.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-conductor.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-inspector.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ironic-overcloud.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e /home/stack/templates/ironic-config.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/external-ceph.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/services/ptp.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/enable-tls.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/tls-endpoints-public-ip.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/ssl/inject-trust-anchor.yaml >>>>>>>>>>>>>>>> \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> 
/usr/share/openstack-tripleo-heat-templates/environments/docker-ha.yaml \ >>>>>>>>>>>>>>>> -e >>>>>>>>>>>>>>>> /usr/share/openstack-tripleo-heat-templates/environments/podman.yaml \ >>>>>>>>>>>>>>>> -e /home/stack/containers-prepare-parameter.yaml >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Addition lines as highlighted in yellow were passed with >>>>>>>>>>>>>>>> modifications: >>>>>>>>>>>>>>>> tls-endpoints-public-ip.yaml: >>>>>>>>>>>>>>>> Passed as is in the defaults. >>>>>>>>>>>>>>>> enable-tls.yaml: >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # >>>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>>> # This file was created automatically by the sample >>>>>>>>>>>>>>>> environment >>>>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>>>>> update it. >>>>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the >>>>>>>>>>>>>>>> file instead >>>>>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>>>>> # >>>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>>> # title: Enable SSL on OpenStack Public Endpoints >>>>>>>>>>>>>>>> # description: | >>>>>>>>>>>>>>>> # Use this environment to pass in certificates for SSL >>>>>>>>>>>>>>>> deployments. >>>>>>>>>>>>>>>> # For these values to take effect, one of the >>>>>>>>>>>>>>>> tls-endpoints-*.yaml >>>>>>>>>>>>>>>> # environments must also be used. >>>>>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>>>>> # Set CSRF_COOKIE_SECURE / SESSION_COOKIE_SECURE in >>>>>>>>>>>>>>>> Horizon >>>>>>>>>>>>>>>> # Type: boolean >>>>>>>>>>>>>>>> HorizonSecureCookies: True >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # Specifies the default CA cert to use if TLS is used for >>>>>>>>>>>>>>>> services in the public network. 
>>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> PublicTLSCAFile: >>>>>>>>>>>>>>>> '/etc/pki/ca-trust/source/anchors/overcloud-cacert.pem' >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # The content of the SSL certificate (without Key) in PEM >>>>>>>>>>>>>>>> format. >>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> SSLCertificate: | >>>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>>> # The content of an SSL intermediate CA certificate in >>>>>>>>>>>>>>>> PEM format. >>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> SSLIntermediateCertificate: '' >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # The content of the SSL Key in PEM format. >>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> SSLKey: | >>>>>>>>>>>>>>>> -----BEGIN PRIVATE KEY----- >>>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>>> -----END PRIVATE KEY----- >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>>>>>> # Static parameters - these are values that must be >>>>>>>>>>>>>>>> # included in the environment but should not be changed. >>>>>>>>>>>>>>>> # ****************************************************** >>>>>>>>>>>>>>>> # The filepath of the certificate as it will be stored in >>>>>>>>>>>>>>>> the controller. 
>>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> DeployedSSLCertificatePath: >>>>>>>>>>>>>>>> /etc/pki/tls/private/overcloud_endpoint.pem >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # ********************* >>>>>>>>>>>>>>>> # End static parameters >>>>>>>>>>>>>>>> # ********************* >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> inject-trust-anchor.yaml >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> # >>>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>>> # This file was created automatically by the sample >>>>>>>>>>>>>>>> environment >>>>>>>>>>>>>>>> # generator. Developers should use `tox -e genconfig` to >>>>>>>>>>>>>>>> update it. >>>>>>>>>>>>>>>> # Users are recommended to make changes to a copy of the >>>>>>>>>>>>>>>> file instead >>>>>>>>>>>>>>>> # of the original, if any customizations are needed. >>>>>>>>>>>>>>>> # >>>>>>>>>>>>>>>> ******************************************************************* >>>>>>>>>>>>>>>> # title: Inject SSL Trust Anchor on Overcloud Nodes >>>>>>>>>>>>>>>> # description: | >>>>>>>>>>>>>>>> # When using an SSL certificate signed by a CA that is >>>>>>>>>>>>>>>> not in the default >>>>>>>>>>>>>>>> # list of CAs, this environment allows adding a custom CA >>>>>>>>>>>>>>>> certificate to >>>>>>>>>>>>>>>> # the overcloud nodes. >>>>>>>>>>>>>>>> parameter_defaults: >>>>>>>>>>>>>>>> # The content of a CA's SSL certificate file in PEM >>>>>>>>>>>>>>>> format. This is evaluated on the client side. >>>>>>>>>>>>>>>> # Mandatory. This parameter must be set by the user. 
>>>>>>>>>>>>>>>> # Type: string >>>>>>>>>>>>>>>> SSLRootCertificate: | >>>>>>>>>>>>>>>> -----BEGIN CERTIFICATE----- >>>>>>>>>>>>>>>> ----*** CERTICATELINES TRIMMED ** >>>>>>>>>>>>>>>> -----END CERTIFICATE----- >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> resource_registry: >>>>>>>>>>>>>>>> OS::TripleO::NodeTLSCAData: >>>>>>>>>>>>>>>> ../../puppet/extraconfig/tls/ca-inject.yaml >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> The procedure to create such files was followed using: >>>>>>>>>>>>>>>> Deploying with SSL ? TripleO 3.0.0 documentation >>>>>>>>>>>>>>>> (openstack.org) >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Idea is to deploy overcloud with SSL enabled i.e* Self-signed >>>>>>>>>>>>>>>> IP-based certificate, without DNS. * >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> Any idea around this error would be of great help. >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> -- >>>>>>>>>>>>>>>> skype: lokendrarathour >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>>> >>>>>>>>>>>> >>>>>>>>>>>> -- >>>>>>>>>>>> >>>>>>>>>>> >>>>>>>>>> >>>>>>>>>> -- >>>>>>>>>> ~ Lokendra >>>>>>>>>> skype: lokendrarathour >>>>>>>>>> >>>>>>>>>> >>>>>>>>>> >>>>>>> >>>>>>> -- >>>>>>> ~ Lokendra >>>>>>> skype: lokendrarathour >>>>>>> >>>>>>> >>>>>>> >>>>> >>>>> -- >>>>> ~ Lokendra >>>>> skype: lokendrarathour >>>>> >>>>> >>>>> >>>> >>>> -- >>>> ~ Lokendra >>>> skype: lokendrarathour >>>> >>>> >>>> >> >> -- >> ~ Lokendra >> skype: lokendrarathour >> >> >> > > -- > ~ Lokendra > skype: lokendrarathour > > > > -- ~ Lokendra skype: lokendrarathour -------------- next part -------------- An HTML attachment was scrubbed... URL: From jpodivin at redhat.com Thu Jul 28 14:33:46 2022 From: jpodivin at redhat.com (Jiri Podivin) Date: Thu, 28 Jul 2022 16:33:46 +0200 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: <15225123.PIt3FUKRBJ@p1> References: <15225123.PIt3FUKRBJ@p1> Message-ID: Hi. 
This is going to be embarrassing, but in my opinion it's better to be embarrassed and corrected than stay in error. What is the preferred way of stating the reason for a recheck? I've kind of assumed that it's something like writing: " recheck: this is broken" but that doesn't trigger the CI. I've tried looking it up in the Zuul and Opendev docs, but couldn't find anything. On Thu, Jul 28, 2022 at 1:02 PM Slawek Kaplonski wrote: > Hi, > > Here are fresh data about bare rechecks. > > +--------------------+---------------+--------------+-------------------+ > | Team | Bare rechecks | All Rechecks | Bare rechecks [%] | > +--------------------+---------------+--------------+-------------------+ > | requirements | 2 | 2 | 100.0 | > | keystone | 1 | 1 | 100.0 | > | OpenStackSDK | 3 | 3 | 100.0 | > | tacker | 3 | 3 | 100.0 | > | freezer | 1 | 1 | 100.0 | > | rally | 1 | 1 | 100.0 | > | OpenStack-Helm | 14 | 15 | 93.33 | > | cinder | 6 | 7 | 85.71 | > | kolla | 20 | 25 | 80.0 | > | neutron | 4 | 5 | 80.0 | > | Puppet OpenStack | 10 | 13 | 76.92 | > | OpenStack Charms | 3 | 6 | 50.0 | > | nova | 11 | 22 | 50.0 | > | tripleo | 8 | 19 | 42.11 | > | ironic | 1 | 3 | 33.33 | > | Quality Assurance | 1 | 3 | 33.33 | > | octavia | 0 | 1 | 0.0 | > | OpenStackAnsible | 0 | 1 | 0.0 | > | designate | 0 | 1 | 0.0 | > | Release Management | 0 | 1 | 0.0 | > | horizon | 0 | 1 | 0.0 | > +--------------------+---------------+--------------+-------------------+ > > Those data should be more accurate as my script now counts only comments > from last 7 days, not all comments from patches updated in last 7 days. > > Reminder: "bare rechecks" are recheck comments without any reason given. > If You need to do recheck for patch due to failed job(s), please first > check such failed job and try to identify what was the issue there. Maybe > there is already opened bug for that or maybe You can open new one and add > it as explanation in the recheck comment. 
Or maybe it was some infra issue, > in such case short explanation in the comment would also be enough. > > -- > > Slawek Kaplonski > > Principal Software Engineer > > Red Hat > -------------- next part -------------- An HTML attachment was scrubbed... URL: From katonalala at gmail.com Thu Jul 28 14:41:57 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 28 Jul 2022 16:41:57 +0200 Subject: [neutron] Drivers meeting agenda - 29.07.2022. Message-ID: Hi Neutron Drivers, The agenda for tomorrow's drivers meeting is at [1]. * [RFE] OVN E/W routing for external (baremetal) VLAN ports (#link https://bugs.launchpad.net/neutron/+bug/1982541) [1] https://wiki.openstack.org/wiki/Meetings/NeutronDrivers#Agenda See you at the meeting tomorrow. Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From gmann at ghanshyammann.com Thu Jul 28 14:52:40 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Thu, 28 Jul 2022 20:22:40 +0530 Subject: [all][tc] Technical Committee next weekly meeting on 28 July 2022 at 1500 UTC In-Reply-To: <18236c3c8f5.ba44b1c8695919.8798555011586714253@ghanshyammann.com> References: <18236c3c8f5.ba44b1c8695919.8798555011586714253@ghanshyammann.com> Message-ID: <182454ba0cc.f94a2e8a109891.1124188920769487105@ghanshyammann.com> Hello Everyone, Below is the agenda for Today's TC IRC meeting schedule at 1500 UTC. 
https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting == Agenda for Today's TC meeting == * Roll call * Follow up on past action items * Gate health check ** Bare 'recheck' state *** https://etherpad.opendev.org/p/recheck-weekly-summary * 2023.1 cycle PTG Planning * RBAC feedback in ops meetup ** https://etherpad.opendev.org/p/rbac-zed-ptg#L171 ** https://review.opendev.org/c/openstack/governance/+/847418 * Open Reviews ** https://review.opendev.org/q/projects:openstack/governance+is:open -gmann ---- On Tue, 26 Jul 2022 00:39:36 +0530 Ghanshyam Mann wrote --- > Hello Everyone, > > The technical Committee's next weekly meeting is scheduled for 28 July 2022, at 1500 UTC. > > If you would like to add topics for discussion, please add them to the below wiki page by > Wednesday, 27 July at 2100 UTC. > > https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting > > -gmann > > From katonalala at gmail.com Thu Jul 28 15:01:28 2022 From: katonalala at gmail.com (Lajos Katona) Date: Thu, 28 Jul 2022 17:01:28 +0200 Subject: [Neutron] Aug 1 - Aug 15 - team and drivers meeting CANCELLED Message-ID: Hi, I will be on PTO for the next 2 weeks, so as we discussed at the last team meeting, let's cancel the meetings for these weeks. Have a great summer time :-) Lajos Katona (lajoskatona) -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Thu Jul 28 15:15:34 2022 From: fungi at yuggoth.org (Jeremy Stanley) Date: Thu, 28 Jul 2022 15:15:34 +0000 Subject: [all][TC] Bare rechecks stats week of 25.07 In-Reply-To: References: <15225123.PIt3FUKRBJ@p1> Message-ID: <20220728151534.y336vusncblmmlqz@yuggoth.org> On 2022-07-28 16:33:46 +0200 (+0200), Jiri Podivin wrote: [...] > What is the preferred way of stating the reason for a recheck? I've kind of > assumed that it's something like writing: > > " recheck: this is broken" > > but that doesn't trigger the CI. 
I've tried looking it up in the Zuul and > Opendev docs, but couldn't find anything. [...] The regular expression for it is configured here: https://opendev.org/openstack/project-config/src/commit/6b12602/zuul.d/pipelines.yaml#L24 What I usually do is "recheck because of nondeterminism in test foo" (or whatever the reason). Don't prepend anything before the word "recheck" and also follow the word with a space before you add any other text. I think Zuul must assume a required space after the regular expression since that doesn't seem to be encoded in the regex you see there. Make sure the "recheck" appears on the very first line of the comment too. Aside from that, you can add anything you like after it. Switching in/out of WIP and leaving votes at the same time as the recheck should also be avoided as they seem to cause the expression not to match. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From elod.illes at est.tech Thu Jul 28 16:04:10 2022 From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=) Date: Thu, 28 Jul 2022 18:04:10 +0200 Subject: [all] Proposed Antelope cycle schedule Message-ID: Hi, As we are beyond Zed milestone 2 for more than 2 weeks now, it's time to start planning the next, Antelope cycle and its release schedule: Antelope schedule: https://review.opendev.org/c/openstack/releases/+/850753 (or see its generated page [1]) Feel free to review it and comment on the patch if there is something that should be considered for the schedule. 
[1]
https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_699/850753/5/check/openstack-tox-docs/699fc2d/docs/antelope/schedule.html

Thanks,

Előd Illés
irc: elodilles

From laurentfdumont at gmail.com Thu Jul 28 16:49:39 2022
From: laurentfdumont at gmail.com (Laurent Dumont)
Date: Thu, 28 Jul 2022 12:49:39 -0400
Subject: Regarding Application Credential in Open Stack XENA
In-Reply-To: 
References: 
Message-ID: 

The error message is not very useful.

- Can you turn on debug at the client level?
- Are the application credentials working for other commands?

On Thu, Jul 28, 2022 at 8:49 AM Adivya Singh 
wrote:

> Hi Team,
>
> Any feedback on this? How can it actually work, as a member will have the
> same scope as earlier, and how to design it for the user to actually work
> in a real scenario?
>
> On Sat, Jul 23, 2022 at 12:38 AM Adivya Singh 
> wrote:
>
>> Hi Team,
>>
>> There is a use case where a user wants to create a CI/CD pipeline using
>> Application Credentials of OpenStack, hence we tried creating an
>> application credential with a role and a secret key.
>>
>> But it is failing with the below output. The user wants to push the qcow2
>> image from his system to OpenStack.
>>
>> Using auth plugin: v3 application credential
>> /usr/lib/python3.7/socket.py:660: ResourceWarning: unclosed 
>> self._sock = None
>> ResourceWarning: Enable tracemalloc to get the object allocation traceback
>> Not Found (HTTP 404) (Request-ID: req-217c9f28-d6a6-4649-adf9-2d9a4b965b3f)
>> END return value: 1
>>
>> It failed with the below result.
>>
>> Regards
>>
>> Adivya Singh
>>
>>
>> 
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From elod.illes at est.tech Thu Jul 28 18:55:36 2022
From: elod.illes at est.tech (=?UTF-8?B?RWzFkWQgSWxsw6lz?=)
Date: Thu, 28 Jul 2022 20:55:36 +0200
Subject: [all][stable] Propose to EOL Pike series
In-Reply-To: 
References: 
Message-ID: <3e31e9a8-bcc5-e6a1-0dc4-f3d96944f937@est.tech>

Hi,

As more than a month has passed since my mail without any response, I'll
start to propose pike-eol patches for all the still open projects in the
coming days.
PTLs/release liaisons, please be ready to approve them as you see fit.

Thanks,

Előd


On 2022. 06. 24. 20:28, Előd Illés wrote:
> Hi,
>
> It seems that the time has come to (mass-)transition Pike to End of Life
> due to lack of maintainers (and backports + reviews; see the list of
> recent changes [1]).
> Some core projects have made this step already. Others are still open
> but not really maintained, which can be seen from the number of daily
> periodic stable job failures on the pike branch, which has been around 40
> for several months now. (By the way, the number of periodic stable job
> failures is constantly high [2], due to some projects which seem to be
> abandoned completely (murano, *-powervm), but that is another story.)
>
> Please let me know what you think, or indicate if any of the
> projects wants to keep its pike branch open / in Extended Maintenance.
>
> [1] https://review.opendev.org/q/branch:stable/pike
> [2]
> http://lists.openstack.org/pipermail/openstack-stable-maint/2022-May/thread.html
>
> Thanks,
>
> Előd
>

From james.slagle at gmail.com Thu Jul 28 19:58:18 2022
From: james.slagle at gmail.com (James Slagle)
Date: Thu, 28 Jul 2022 15:58:18 -0400
Subject: [dev][requirements][tripleo] Return of the revenge of lockfile
 strikes back part II
In-Reply-To: 
References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org>
 
 <20220716015210.7pzcrwfyzcho6opc@yuggoth.org>
Message-ID: 

On Mon, Jul 18, 2022 at 8:26 AM Cédric Jeanneret  wrote:

>
>
> On 7/16/22 03:52, Jeremy Stanley wrote:
> > On 2022-07-09 13:26:36 +0000 (+0000), Jeremy Stanley wrote:
> > [...]
> >> Apparently, ansible-runner currently depends[3] on python-daemon,
> >> which still has a dependency on lockfile[4]. Our uses of
> >> ansible-runner seem to be pretty much limited to TripleO
> >> repositories (hence tagging them in the subject), so it's possible
> >> they could find an alternative to it and solve this dilemma.
> >> Optionally, we could try to help the ansible-runner or python-daemon
> >> maintainers with new implementations of the problem dependencies as
> >> a way out.
> > [...]
> >
> > In the meantime, how does everyone feel about us going ahead and
> > removing the "openstackci" account from the maintainers list for
> > lockfile on PyPI? We haven't depended on it directly since 2015, and
> > it didn't come back into our indirect dependency set until 2018
> > (presumably that's when TripleO started using ansible-runner?). The
> > odds that we'll need to fix anything in it in the future are pretty
> > small at this point, and if we do we're better off putting that
> > effort into helping the ansible-runner or python-daemon maintainers
> > move off of it instead.
>
> That's a question for TripleO PTL - he's on holidays until next week (he
> was out also last week). It would be good to get his thoughts before
> doing anything, if it's possible.
>
> On my side, I don't really see any strong objection, but I may as well
> miss something important.
>

I don't have any objection to removing openstackci as a maintainer of
lockfile on PyPI.

-- 
-- James Slagle
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From fungi at yuggoth.org Thu Jul 28 20:04:18 2022
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Thu, 28 Jul 2022 20:04:18 +0000
Subject: [dev][requirements][tripleo] Return of the revenge of lockfile
 strikes back part II
In-Reply-To: 
References: <20220709132635.v5ljgnc7lsmu25xk@yuggoth.org>
 
 <20220716015210.7pzcrwfyzcho6opc@yuggoth.org>
 
Message-ID: <20220728200418.mzghpxdbynlkmboz@yuggoth.org>

On 2022-07-28 15:58:18 -0400 (-0400), James Slagle wrote:
[...]
> I don't have any objection to removing openstackci as a maintainer of
> lockfile on PyPI.

Thanks for confirming! I'll get the ball rolling on that and let the
original maintainer know.
-- 
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: From chris.macnaughton at canonical.com Thu Jul 28 20:38:46 2022
From: chris.macnaughton at canonical.com (Chris MacNaughton)
Date: Thu, 28 Jul 2022 15:38:46 -0500
Subject: [charms] Team Delegation proposal
Message-ID: 

Hello All,

I would like to propose some new ACLs in Gerrit for the openstack-charms
project:

- openstack-core-charms
- ceph-charms
- network-charms
- stable-maintenance

With an increasing focus split among the openstack-charmers team, I'm
observing that people are focused on more specific subsets of the charms,
and would like to propose that new ACLs are created to allow us to
recognize that officially. I've chosen the breakdown above as it aligns
neatly with where the focus lines are at this point, letting developers
work on their specific focus areas.
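In Gerrit terms, delegation like this is usually expressed as per-group
access rules in each repository's ACL file. A hedged sketch of what such an
entry might look like for one Ceph charm (the group names come from this
proposal; the file name and the exact rights granted are assumptions, since
the real ACL definitions live in openstack/project-config):

```ini
# Hypothetical gerrit/acls/openstack/charm-ceph-osd.config
[access "refs/heads/*"]
label-Code-Review = -2..+2 group ceph-charms
label-Workflow = -1..+1 group ceph-charms

[access "refs/heads/stable/*"]
label-Code-Review = -2..+2 group stable-maintenance
label-Workflow = -1..+1 group stable-maintenance
```

Members of a subteam group could then approve changes on the charms in
their focus area, while the existing openstack-charmers group keeps its
current rights.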
This proposal would not reduce permissions for anybody who is currently a core on the openstack-charms project and, in fact, future subteam core members could aspire to full openstack-charmers core as well. Ideally, this approach will let us escalate developers to "core" developers for the subteam(s) where they have demonstrated the expertise we expect in a core-charmer. It also allows a more gradual escalation to being a core in the openstack-charms project, making it a progression rather than a single destination. As a related addition, I'm appending to this proposal the creation of a stable-maintenance ACL which would allow members to manage backports without a full core-charmer grant. --- Chris MacNaughton -------------- next part -------------- An HTML attachment was scrubbed... URL: From mihalis68 at gmail.com Thu Jul 28 22:57:46 2022 From: mihalis68 at gmail.com (Chris Morgan) Date: Thu, 28 Jul 2022 18:57:46 -0400 Subject: [ops] tentative plans for ops meetup on Friday after PTG Message-ID: Hi All, Please see this tweet, we are in the early stages of planning an ops meetup on an extra day (Friday) of PTG at the same venue: https://twitter.com/osopsmeetup/status/1552788675632799744?s=20&t=aEBpjhm6tnQQgYn6ie_aYg Event link : https://openinfra.dev/ptg/ early bird pricing currently available! We ask interested openstack operators to follow that twitter account, we keep it lean and spam free. We are also experimenting with also sharing news on LinkedIn if that works better for some of you OpenStack operators. You can connect with me (Chris Morgan, Bloomberg) there if you wish. Chris -- Chris Morgan -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kennelson11 at gmail.com Fri Jul 29 00:53:22 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Thu, 28 Jul 2022 19:53:22 -0500 Subject: [ops] tentative plans for ops meetup on Friday after PTG In-Reply-To: References: Message-ID: Also, To add onto this, all operators are encouraged to attend the PTG all week! We need your involvement in all the other sessions taking place to make the event a success. For more information on how to get the most out of the event as an operator or end user, check out my articles on the OpenStack blog! [1][2]. -Kendall (diablo_rojo) [1] https://www.openstack.org/blog/how-to-have-a-successful-ptg-as-an-openstack-operator/ [2] https://www.openstack.org/blog/what-operators-can-expect-to-discuss-with-the-openstack-nova-team-at-the-ptg/ On Thu, Jul 28, 2022 at 5:59 PM Chris Morgan wrote: > Hi All, > Please see this tweet, we are in the early stages of planning an ops > meetup on an extra day (Friday) of PTG at the same venue: > > > https://twitter.com/osopsmeetup/status/1552788675632799744?s=20&t=aEBpjhm6tnQQgYn6ie_aYg > > Event link : https://openinfra.dev/ptg/ early bird pricing currently > available! > > We ask interested openstack operators to follow that twitter account, we > keep it lean and spam free. We are also experimenting with also sharing > news on LinkedIn if that works better for some of you OpenStack operators. > You can connect with me (Chris Morgan, Bloomberg) there if you wish. > > Chris > -- > Chris Morgan > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From derekokeeffe85 at yahoo.ie Fri Jul 29 07:42:26 2022 From: derekokeeffe85 at yahoo.ie (Derek O keeffe) Date: Fri, 29 Jul 2022 07:42:26 +0000 (UTC) Subject: Ansible playbook failing References: <1811919937.5278695.1659080546237.ref@mail.yahoo.com> Message-ID: <1811919937.5278695.1659080546237@mail.yahoo.com> Hi all,

I have run the OSA playbooks to completion on more than one occasion, but recently I'm having an issue I can't figure out, so if anyone could shed some light on it, it would be much appreciated.

When running "openstack-ansible setup-openstack.yml" I get to the task named "TASK [python_venv_build : Create wheel directory on the build host]" and I get the following error:

fatal: [infra1_gnocchi_container-7c40503c -> infra1_repo_container-017aea8f(xx.xx.xx.xx)]: FAILED! => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "msg": "chown failed: failed to look up user nginx", "owner": "root", "path": "/var/www", "size": 4096, "state": "directory", "uid": 0}

I cannot seem to find anything about this and didn't see this error when running the playbooks previously. Thanks in advance for any suggestions.

Regards,
Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From tim+openstack.org at coote.org Fri Jul 29 08:49:49 2022 From: tim+openstack.org at coote.org (tim+openstack.org at coote.org) Date: Fri, 29 Jul 2022 09:49:49 +0100 Subject: Ansible playbook failing In-Reply-To: <1811919937.5278695.1659080546237@mail.yahoo.com> References: <1811919937.5278695.1659080546237.ref@mail.yahoo.com> <1811919937.5278695.1659080546237@mail.yahoo.com> Message-ID: <83142CF2-491C-4B92-BFC7-59ABF8E068B9@coote.org> I don't know this code, but the error message implies that the `nginx` user doesn't exist on that box. Is it visible (e.g. in `/etc/passwd`)?

Assuming that is the case, the question is "what changed" since the ansible code was last run (if anything - it could be a race condition that happens randomly).
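That check can be scripted; a minimal sketch (the container name comes from the fatal error above, and `lxc-attach` assumes an LXC-based OSA deployment):

```shell
# Check for the user the failing task tries to chown to.
# Run inside the repo container (e.g. after
#   lxc-attach -n infra1_repo_container-017aea8f
# on an LXC-based deploy), or on any host you suspect.
if getent passwd nginx >/dev/null; then
    echo "nginx user present"
else
    # matches the "failed to look up user nginx" failure;
    # re-running the repo playbooks should recreate it
    echo "nginx user missing"
fi
```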
I'd guess that the nginx installation failed/was removed.

> On 29 Jul 2022, at 08:42, Derek O keeffe wrote:
>
> Hi all,
>
> I have run the OSA playbooks to completion on more than one occasion, but recently I'm having an issue I can't figure out, so if anyone could shed some light on it, it would be much appreciated.
>
> When running "openstack-ansible setup-openstack.yml" I get to the task named "TASK [python_venv_build : Create wheel directory on the build host]" and I get the following error:
>
> fatal: [infra1_gnocchi_container-7c40503c -> infra1_repo_container-017aea8f(xx.xx.xx.xx)]: FAILED! => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "msg": "chown failed: failed to look up user nginx", "owner": "root", "path": "/var/www", "size": 4096, "state": "directory", "uid": 0}
>
> I cannot seem to find anything about this and didn't see this error when running the playbooks previously. Thanks in advance for any suggestions.
>
> Regards,
> Derek
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From noonedeadpunk at gmail.com Fri Jul 29 09:34:59 2022 From: noonedeadpunk at gmail.com (Dmitriy Rabotyagov) Date: Fri, 29 Jul 2022 11:34:59 +0200 Subject: Ansible playbook failing In-Reply-To: <83142CF2-491C-4B92-BFC7-59ABF8E068B9@coote.org> References: <1811919937.5278695.1659080546237.ref@mail.yahoo.com> <1811919937.5278695.1659080546237@mail.yahoo.com> <83142CF2-491C-4B92-BFC7-59ABF8E068B9@coote.org> Message-ID: Hey,

From the error it looks like the previous role, which deploys stuff into the repo_container, has failed. It should be performed as part of setup-infra.yml. I would suggest re-running the repo-install.yml playbook and checking that it works and is not failing somewhere, before further investigation.

Fri, 29 Jul 2022 at 10:55, :
>
> I don't know this code, but the error message implies that the `nginx` user doesn't exist on that box. Is it visible (e.g. in `/etc/passwd`)?
>
> Assuming that is the case, the question is "what changed"
since the ansible code was last run (if anything - it could be a race condition that happens randomly). I'd guess that the nginx installation failed/was removed.
>
> On 29 Jul 2022, at 08:42, Derek O keeffe wrote:
>
> Hi all,
>
> I have run the OSA playbooks to completion on more than one occasion, but recently I'm having an issue I can't figure out, so if anyone could shed some light on it, it would be much appreciated.
>
> When running "openstack-ansible setup-openstack.yml" I get to the task named "TASK [python_venv_build : Create wheel directory on the build host]" and I get the following error:
>
> fatal: [infra1_gnocchi_container-7c40503c -> infra1_repo_container-017aea8f(xx.xx.xx.xx)]: FAILED! => {"changed": false, "gid": 0, "group": "root", "mode": "0755", "msg": "chown failed: failed to look up user nginx", "owner": "root", "path": "/var/www", "size": 4096, "state": "directory", "uid": 0}
>
> I cannot seem to find anything about this and didn't see this error when running the playbooks previously. Thanks in advance for any suggestions.
>
> Regards,
> Derek

From kkchn.in at gmail.com Fri Jul 29 12:38:53 2022 From: kkchn.in at gmail.com (KK CHN) Date: Fri, 29 Jul 2022 18:08:53 +0530 Subject: Workflow based VM provisioning for Openstack Message-ID: List,

1. Is anybody using workflow-based front-end tools for VM requisition, approval and provisioning, to automate requests from various users? (The users may or may not come from a backend AD/LDAP.)

2. Or do we need to code such a mechanism from scratch? (Does the Horizon dashboard need to be customized for this workflow-based approval and automatic provisioning and de-provisioning of VMs?) Where to start?

Please guide me on how to achieve this functionality for an OpenStack IaaS cloud infrastructure. (We are using the Ussuri version.)

Thanks in advance,
Krish
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From anyrude10 at gmail.com Fri Jul 29 11:35:52 2022 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Fri, 29 Jul 2022 17:05:52 +0530 Subject: [TripleO] - Overcloud Wallaby Deployment Failing with SSL and DNS Message-ID: Hi Team,

I was trying to install the OpenStack Wallaby release with TripleO. I need to enable SSL with DNS. Without DNS, I am able to install the overcloud successfully. But with DNS I am facing some authentication issues, although the public VIP is successfully resolved by my DNS server.

I found a similar issue on Launchpad: https://bugs.launchpad.net/tripleo/+bug/1982996

Has anyone tried installing Wallaby with SSL and DNS? If yes, can you please suggest some pointers to resolve this?

Regards
Anirudh Gupta
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From felipe.reyes at canonical.com Fri Jul 29 15:46:21 2022 From: felipe.reyes at canonical.com (Felipe Reyes) Date: Fri, 29 Jul 2022 11:46:21 -0400 Subject: [charms] Team Delegation proposal In-Reply-To: References: Message-ID: <6d1d530dcd315b0df88d9418cf11cba65acc1acf.camel@canonical.com> Hi Chris,

Thanks for bringing this up.

On Thu, 2022-07-28 at 15:38 -0500, Chris MacNaughton wrote:
> Hello All,
>
>
> I would like to propose some new ACLs in Gerrit for the openstack-charms project:
>
> - openstack-core-charms
> - ceph-charms
> - network-charms
> - stable-maintenance
>

with my openstack-charmer hat on: +1! :-)

> With an increasing focus split among the openstack-charmers team, I'm observing that people are
> focused on more specific subsets of the charms, and would like to propose that new ACLs are
> created to allow us to recognize that officially. I've chosen the breakdown above as it aligns
> neatly with where the focus lines are at this point, letting developers work on their specific
> focus areas.

makes sense, it will give developers a stepping stone.
>
> This proposal would not reduce permissions for anybody who is currently a core on the
> openstack-charms project and, in fact, future subteam core members could aspire to full
> openstack-charmers core as well. Ideally, this approach will let us escalate developers to "core"
> developers for the subteam(s) where they have demonstrated the expertise we expect in a
> core-charmer. It also allows a more gradual escalation to being a core in the openstack-charms
> project, making it a progression rather than a single destination.

/me nods.

>
> As a related addition, I'm appending to this proposal the creation of a stable-maintenance ACL
> which would allow members to manage backports without a full core-charmer grant.

This will be greatly appreciated to give maintenance to our stable charms that are seeing more backports now that we have branches per OpenStack release \o/

--
Felipe Reyes
felipe.reyes at canonical.com (GPG:0x9B1FFF39)
Launchpad: ~freyes | IRC: freyes

From frode.nordahl at canonical.com Sat Jul 30 01:24:27 2022 From: frode.nordahl at canonical.com (Frode Nordahl) Date: Sat, 30 Jul 2022 03:24:27 +0200 Subject: [charms] Team Delegation proposal In-Reply-To: <6d1d530dcd315b0df88d9418cf11cba65acc1acf.camel@canonical.com> References: <6d1d530dcd315b0df88d9418cf11cba65acc1acf.camel@canonical.com> Message-ID: On Fri, Jul 29, 2022 at 5:46 PM Felipe Reyes wrote:
>
> Hi Chris,
>
> Thanks for bringing this up.
>
> On Thu, 2022-07-28 at 15:38 -0500, Chris MacNaughton wrote:
> > Hello All,
> >
> >
> > I would like to propose some new ACLs in Gerrit for the openstack-charms project:
> >
> > - openstack-core-charms
> > - ceph-charms
> > - network-charms
> > - stable-maintenance
> >
>
> with my openstack-charmer hat on: +1! :-)

Naming tends to stick; the name will remain long after the thought process and variables present today have diminished, and as such I would like to discuss the naming of these ACLs. `network-charms` is too broad.
I would settle for `ovn-charms`.

> >
> > With an increasing focus split among the openstack-charmers team, I'm observing that people are
> > focused on more specific subsets of the charms, and would like to propose that new ACLs are
> > created to allow us to recognize that officially. I've chosen the breakdown above as it aligns
> > neatly with where the focus lines are at this point, letting developers work on their specific
> > focus areas.
>
> makes sense, it will give developers a stepping stone.
>
> > This proposal would not reduce permissions for anybody who is currently a core on the
> > openstack-charms project and, in fact, future subteam core members could aspire to full
> > openstack-charmers core as well. Ideally, this approach will let us escalate developers to
> > "core" developers for the subteam(s) where they have demonstrated the expertise we expect in a
> > core-charmer. It also allows a more gradual escalation to being a core in the openstack-charms
> > project, making it a progression rather than a single destination.
>
> /me nods.
>
> >
> > As a related addition, I'm appending to this proposal the creation of a stable-maintenance ACL
> > which would allow members to manage backports without a full core-charmer grant.
>
> This will be greatly appreciated to give maintenance to our stable charms that are seeing more
> backports now that we have branches per OpenStack release \o/

Apart from the naming / areas of the ACLs I agree with the sentiments above.

--
Frode Nordahl

From ignaziocassano at gmail.com Sat Jul 30 18:51:41 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sat, 30 Jul 2022 20:51:41 +0200 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Hello, sorry but my networking skill is very poor.
Let me explain what I understood.
With a routed provider network I can use a single provider network to represent multiple L2 networks.
For example, use case 1:
Compute node A on vlan 100.
Compute node B on vlan 100.
I can create more than one segment on vlan 100, with different CIDRs.
Segment 1 with 192.168.100.0/24.
Segment 2 with 192.168.101.0/24.

Use case 2:
I can also have nodes on different vlans and use aggregates to place VMs on compute nodes depending on IP address.
Compute nodes A and B on vlan 100.
Compute nodes C and D on vlan 101.
VMs on segments belonging to vlan 100 are placed on compute node A or B.
VMs on segments belonging to vlan 101 are placed on compute node C or D.

In both use cases a physical router must be configured, because the openstack virtual router cannot be used.
Please let me know if I understood well.
Ignazio

Il Mer 27 Lug 2022, 16:50 Miguel Lavalle ha scritto:

> Ignazio,
>
> You might find the following two presentations useful to understand what segments are and how they are used in routed networks:
>
> https://www.openstack.org/videos/summits/austin-2016/mapping-real-networks-to-physical-networks-segments-and-logical-networks-in-neutron
>
> https://www.openstack.org/videos/summits/barcelona-2016/scaling-up-openstack-networking-with-routed-networks
>
> And to summarize what you will find in those presentations:
>
> 1) A segment is a single L2 broadcast domain, be it a vlan or a vxlan or any other way to realize a L2 broadcast domain in the networking fabric.
> 2) A Neutron network can be created stitching together 1 or several segments. If after putting several segments together in a Neutron network they become a single L2 broadcast domain (i.e. they are stitched together via switching) then you have a multi-segment Neutron network. However ....
> 3) If the segments in a Neutron network are stitched together with L3 routers, then you have a routed provider network.
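As an illustrative CLI sketch of building such a routed provider network (use case 2 above; the network name, physnets and VLAN IDs are made up, not taken from this thread, and the commands need admin credentials against a real cloud):

```shell
# No-op where no OpenStack CLI is available; the commands are illustrative.
command -v openstack >/dev/null || { echo "openstack CLI not available"; exit 0; }

# One routed provider network; the first segment is created implicitly
# from the provider attributes (physnet1 / vlan 100, compute nodes A and B).
openstack network create --share \
  --provider-physical-network physnet1 \
  --provider-network-type vlan --provider-segment 100 routed-net

# A second segment on the other vlan (physnet2 / vlan 101, compute nodes C and D).
openstack network segment create --network routed-net \
  --physical-network physnet2 --network-type vlan --segment 101 segment-vlan101

# One subnet per segment; the implicit first segment's id can be listed with
# `openstack network segment list --network routed-net`.
openstack subnet create --network routed-net \
  --network-segment segment-vlan101 \
  --subnet-range 192.168.101.0/24 subnet-vlan101
```

Routing between the per-segment subnets is then handled by the physical router, as noted above.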
In such networks, each segment is a separate L2 broadcast domain, which should provide higher levels of scalability.
> 4) To better understand the terminology, you may also find it useful to understand the distinction between "provider networks" and "tenant networks". A provider network is one that was mapped explicitly at creation by a cloud admin to specific segments, most likely to achieve certain performance / scalability goals. A tenant network is one for which, at creation, Neutron automatically assigned a segment.
>
> Best regards
>
> Miguel
>
> On Wed, Jul 27, 2022 at 3:01 AM Ignazio Cassano wrote:
>
>> Hello, thanks for your reply.
>> The segment id is the vlan id (in your example 101)?
>> My understanding is that some compute nodes in a rack are connected to a vlan, and others to another vlan.
>> Then I can create a network (segmentation1) and the scheduler puts the VM on a compute node where the vlan is present.
>> So for users only the segmentation1 network exists, and they do not know it is split across multiple vlans.
>> Is it correct?
>> Ignazio
>>
>> Il giorno mer 27 lug 2022 alle ore 09:27 Lajos Katona < katonalala at gmail.com> ha scritto:
>>
>>> Hi,
>>> I suppose you referenced this document:
>>>
>>> https://docs.openstack.org/neutron/latest/admin/config-routed-networks.html
>>>
>>> In Neutron terminology segments appear on different layers; on the API a segment is a network type / seg. id / phys-net / net uuid tuple (see [1]).
>>> What is interesting here is that this segment has to have a representation on the compute, where the l2-agent (ovs-agent) can know which segment it can bind ports on.
>>> That cfg option is in ml2_conf.ini, and bridge_mappings, where the admin/deployer can state which bridge (like br-ex) is connected to which provider network (out of Openstack's control).
>>> So for example a sample config in ml2_conf.ini looks like this:
>>>
>>> bridge_mappings = public:br-ex,physnet1:br0
>>>
>>> This means that on that compute, VM ports can be bound which have a network segment like this: (network_type: vlan, physical_network: *physnet1*, segmentation_id: 101, network_id: 1234-56..)
>>> More computes can have the same bridge-physnet mapping; it is the deployer's responsibility to have these connected to the same switch.
>>>
>>> [1]:
>>> https://docs.openstack.org/api-ref/network/v2/index.html?expanded=create-segment-detail#segments
>>>
>>> Ignazio Cassano wrote (2022 Jul 26, Tue, 21:04):
>>>
>>>> Hello All, I am reading documentation about routed provider network.
>>>> It reports: "Routed provider networks imply that compute nodes reside on different segments."
>>>>
>>>> What does it mean?
>>>> What is a segment in this case?
>>>> Thanks for helping me
>>>> Ignazio
>>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From gmann at ghanshyammann.com Sun Jul 31 09:04:50 2022 From: gmann at ghanshyammann.com (Ghanshyam Mann) Date: Sun, 31 Jul 2022 14:34:50 +0530 Subject: [all][tc] What's happening in Technical Committee: summary 2022 July 31: Reading: 5 min Message-ID: <18253804027.ba14e673215293.7154282200664111100@ghanshyammann.com> Hello Everyone,

Here is this week's summary of the Technical Committee activities.

1. TC Meetings:
============
* We had this week's meeting on July 28. Most of the meeting discussions are summarized in this email. Meeting full logs are available @https://meetings.opendev.org/meetings/tc/2022/tc.2022-07-28-15.00.log.html

* The next TC weekly meeting will be on 4 Aug Thursday at 15:00 UTC; feel free to add topics to the agenda[1] by Aug 3.

2. What we completed this week:
=========================
* Updates to the RBAC community-wide goal, focusing on the 'reader' role[2]
* Added Skyline as Emerging Technology[3].

3.
Activities In progress:
==================

TC Tracker for Zed cycle
------------------------------
* The Zed tracker etherpad includes the TC working items[4]; two are completed and the other items are in progress.

Open Reviews
-----------------
* Four open reviews for ongoing activities[5].

2021 User Survey TC Question Analysis
-----------------------------------------------
No update on this. The survey summary is up for review[6]. Feel free to check it and provide feedback.

Zed cycle Leaderless projects
----------------------------------
Dale Smith volunteered to be PTL for the Adjutant project [7]

Fixing Zuul config errors
----------------------------
Projects with Zuul config errors are requested to look into and fix them, which should not take much time[8][9].

Project updates
-------------------
* Add charmed k8s operators to OpenStack Charms[10]

4. How to contact the TC:
====================
If you would like to discuss with or give feedback to the TC, you can reach out to us in multiple ways:

1. Email: you can send email with the tag [tc] on the openstack-discuss ML[11].
2. Weekly meeting: the Technical Committee conducts a weekly meeting every Thursday at 15:00 UTC [12]
3. Ping us using the 'tc-members' nickname on the #openstack-tc IRC channel.
[1] https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee#Next_Meeting [2] https://review.opendev.org/c/openstack/governance/+/847418 [3] https://review.opendev.org/c/openstack/governance/+/849155 [4] https://etherpad.opendev.org/p/tc-zed-tracker [5] https://review.opendev.org/q/projects:openstack/governance+status:open [6] https://review.opendev.org/c/openstack/governance/+/836888 [7] https://review.opendev.org/c/openstack/governance/+/849606 [8] https://etherpad.opendev.org/p/zuul-config-error-openstack [9] http://lists.openstack.org/pipermail/openstack-discuss/2022-May/028603.html [10] https://review.opendev.org/c/openstack/governance/+/849997 [11] http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-discuss [12] http://eavesdrop.openstack.org/#Technical_Committee_Meeting -gmann From miguel at mlavalle.com Sun Jul 31 17:13:32 2022 From: miguel at mlavalle.com (Miguel Lavalle) Date: Sun, 31 Jul 2022 12:13:32 -0500 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: You got it! On Sat, Jul 30, 2022 at 1:51 PM Ignazio Cassano wrote: > Hello, sorry but my networking skill is very poor. > Let me do explain what I understood > With routed provider network I can use a single provider network to > represent multiple l2 networks. > For example use case 1: > Compute node A on vlan 100. > Compute node B on vlan 100. > I can create more then one segments on vlan 100 with different cidr. > Segment 1 with 192.168.100.0/24. > Segment 2 with 192.168.101.0/24 > > Use case case 2: > I can also have nodes on different vlan and using aggregates to address > vm on compute nodes depending on ip address. > Compute node A and B on vlan 100. > Compute node C and D on vlan 101. > Vm on segments belonging to vlan 100 are addressed on Cimpute node A or B. > Vm on segments belonging to vlan 101 are addressed on compute node C or D. > > In both use case phisical router must be configured because openstack > virtual router cannot be used. 
> Please, let me know if I undertood well.
> Ignazio
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ignaziocassano at gmail.com Sun Jul 31 17:53:19 2022 From: ignaziocassano at gmail.com (Ignazio Cassano) Date: Sun, 31 Jul 2022 19:53:19 +0200 Subject: Openstack routed provider network In-Reply-To: References: Message-ID: Thanks
Ignazio

Il Dom 31 Lug 2022, 19:13 Miguel Lavalle ha scritto:

> You got it!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: